ONNX Model Repository

Description

A model for detecting hard hats (safety helmets) and regular hats.

<div align="center"> <img width="640" alt="luisarizmendi/hardhat-or-hat" src="images/example.png"> </div>

Model files

You can download:

Base model

Ultralytics/YOLO11m

Hugging Face page

https://huggingface.co/luisarizmendi/hardhat-or-hat

Model dataset

https://universe.roboflow.com/luisarizmendi/hardhat-or-hat

Labels

- hat
- helmet
- no_helmet
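In code, the three labels above map to numeric class IDs. The index order below is an assumption for illustration (verify it against `model.names` after loading the weights), as is the `has_ppe_violation` helper — neither appears in this card:

```python
# Illustrative sketch: map class indices to the card's three labels and
# flag detections that indicate a missing helmet.
# NOTE: the index order is an assumption; check model.names on the real model.
CLASS_NAMES = {0: "hat", 1: "helmet", 2: "no_helmet"}

def has_ppe_violation(detected_class_ids):
    """Return True if any detection corresponds to the 'no_helmet' label."""
    return any(CLASS_NAMES.get(cid) == "no_helmet" for cid in detected_class_ids)

print(has_ppe_violation([0, 1]))  # hat + helmet detected -> False
print(has_ppe_violation([2]))     # no_helmet detected    -> True
```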

Model metrics

<div align="center"> <img width="640" alt="luisarizmendi/hardhat-or-hat" src="v2/train-val/results.png"> </div>

<div align="center"> <img width="640" alt="luisarizmendi/hardhat-or-hat" src="v2/train-val/confusion_matrix_normalized.png"> </div>

Model training

You can review the training Jupyter notebook here.

Hyperparameters

base model: yolov11x.pt
epochs: 150
batch: 16
imgsz: 640
patience: 15
optimizer: 'SGD'
lr0: 0.001
lrf: 0.01
momentum: 0.9
weight_decay: 0.0005
warmup_epochs: 3
warmup_bias_lr: 0.01
warmup_momentum: 0.8
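Gathered into a dict, the hyperparameters above could be passed to Ultralytics' `train()` roughly as sketched below. This is not the actual training notebook: the `data.yaml` dataset config path is a placeholder assumption, and the base-model filename is taken from the list above.

```python
# Sketch: reproduce the training run with the hyperparameters listed above.
# "data.yaml" is a placeholder for the Roboflow dataset config.
HYPERPARAMS = {
    "epochs": 150,
    "batch": 16,
    "imgsz": 640,
    "patience": 15,
    "optimizer": "SGD",
    "lr0": 0.001,
    "lrf": 0.01,
    "momentum": 0.9,
    "weight_decay": 0.0005,
    "warmup_epochs": 3,
    "warmup_bias_lr": 0.01,
    "warmup_momentum": 0.8,
}

def train(data_yaml="data.yaml"):
    # Imported lazily so the dict above can be inspected without ultralytics installed.
    from ultralytics import YOLO
    model = YOLO("yolov11x.pt")  # base model named in this card
    return model.train(data=data_yaml, **HYPERPARAMS)
```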

Model usage

Using Hugging Face Spaces

If you don't want to run it locally, you can use this Hugging Face Space that I created. Keep in mind that it will be slow, since it runs on a free instance, so it is better to run the Python script below locally.

Make sure the model URL points to the model you want to test.

<div align="center"> <img width="640" alt="luisarizmendi/hardhat-or-hat" src="images/spaces-example.png"> </div>

Using a Python script

Install the following pip dependencies:

gradio
ultralytics
Pillow
opencv-python
torch

Then run the Python code below and open http://localhost:8800 in your browser to upload and scan images.

import gradio as gr
from ultralytics import YOLO
from PIL import Image
import os
import cv2
import torch

DEFAULT_MODEL_URL = "https://huggingface.co/luisarizmendi/hardhat-or-hat/resolve/main/v2/model/pytorch/best.pt"

def detect_objects_in_files(model_input, files):
    """
    Processes uploaded images for object detection.
    """
    if not files:
        return "No files uploaded.", []

    model = YOLO(str(model_input))
    if torch.cuda.is_available():
        model.to('cuda')
        print("Using GPU for inference")
    else:
        print("Using CPU for inference")

    results_images = []
    for file in files:
        try:
            image = Image.open(file).convert("RGB")
            results = model(image)
            result_img_bgr = results[0].plot()
            result_img_rgb = cv2.cvtColor(result_img_bgr, cv2.COLOR_BGR2RGB)
            results_images.append(result_img_rgb)

            # If you want the images to appear one by one (slower):
            #yield "Processing image...", results_images

        except Exception as e:
            return f"Error processing file: {file}. Exception: {str(e)}", []

    del model
    torch.cuda.empty_cache()

    return "Processing completed.", results_images

interface = gr.Interface(
    fn=detect_objects_in_files,
    inputs=[
        gr.Textbox(value=DEFAULT_MODEL_URL, label="Model URL", placeholder="Enter the model URL"),
        gr.Files(file_types=["image"], label="Select Images"),
    ],
    outputs=[
        gr.Textbox(label="Status"),
        gr.Gallery(label="Results")
    ],
    title="Object Detection on Images",
    description="Upload images to perform object detection. The model will process each image and display the results."
)

if __name__ == "__main__":
    interface.launch(server_port=8800)
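The repository also ships ONNX exports of both versions (v1/model/onnx/1/model.onnx and v2/model/onnx/1/model.onnx). A minimal inference sketch with onnxruntime might look like the following. The 640×640 input size follows the imgsz hyperparameter above, but the plain-resize preprocessing (no letterboxing) and the raw-output handling are simplifying assumptions, not the card's documented pipeline:

```python
import numpy as np
from PIL import Image

def preprocess(image, size=640):
    """Resize to the model's imgsz and return a NCHW float32 tensor in [0, 1].

    Plain resize for brevity; Ultralytics' export pipeline letterboxes,
    so treat this as an approximation.
    """
    img = image.convert("RGB").resize((size, size))
    arr = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    return arr.transpose(2, 0, 1)[np.newaxis, ...]    # -> NCHW

def run_onnx(model_path, image):
    # Requires `pip install onnxruntime`; model_path would point at a local
    # copy of e.g. v2/model/onnx/1/model.onnx.
    import onnxruntime as ort
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: preprocess(image)})
    return outputs  # raw predictions; decoding boxes/classes is model-specific

print(preprocess(Image.new("RGB", (1280, 720))).shape)  # (1, 3, 640, 640)
```

Decoding the raw output tensor into boxes and class scores depends on the export settings, so for most use cases loading the PyTorch weights with Ultralytics (as in the script above) is simpler.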

luisarizmendi/hardhat-or-hat

Author: luisarizmendi

object-detection

Created: 2024-11-28 11:38:32+00:00

Updated: 2025-04-24 22:03:57+00:00


Files (86)

.gitattributes
.ipynb_checkpoints/train-checkpoint.ipynb
README.md
dev/object-detection-model-file/pytorch/Containerfile
dev/object-detection-model-file/pytorch/object-detection-pytorch.py
dev/object-detection-model-file/pytorch/requirements.txt
dev/prototyping.ipynb
images/example.png
images/spaces-example.png
tools/pytorch-to-onnx.py
v1/model/onnx/1/model.onnx
v1/model/pytorch/best.pt
v1/test/F1_curve.png
v1/test/PR_curve.png
v1/test/P_curve.png
v1/test/R_curve.png
v1/test/confusion_matrix.png
v1/test/confusion_matrix_normalized.png
v1/test/val_batch0_labels.jpg
v1/test/val_batch0_pred.jpg
v1/test/val_batch1_labels.jpg
v1/test/val_batch1_pred.jpg
v1/test/val_batch2_labels.jpg
v1/test/val_batch2_pred.jpg
v1/train-val/F1_curve.png
v1/train-val/PR_curve.png
v1/train-val/P_curve.png
v1/train-val/R_curve.png
v1/train-val/args.yaml
v1/train-val/confusion_matrix.png
v1/train-val/confusion_matrix_normalized.png
v1/train-val/events.out.tfevents.1738747289.yolo-training-pipeline-7vd94-system-container-impl-3115668440.58.0
v1/train-val/labels.jpg
v1/train-val/labels_correlogram.jpg
v1/train-val/results.csv
v1/train-val/results.png
v1/train-val/train_batch0.jpg
v1/train-val/train_batch1.jpg
v1/train-val/train_batch2.jpg
v1/train-val/train_batch27555.jpg
v1/train-val/train_batch27556.jpg
v1/train-val/train_batch27557.jpg
v1/train-val/val_batch0_labels.jpg
v1/train-val/val_batch0_pred.jpg
v1/train-val/val_batch1_labels.jpg
v1/train-val/val_batch1_pred.jpg
v1/train-val/val_batch2_labels.jpg
v1/train-val/val_batch2_pred.jpg
v2/model/onnx/1/model.onnx
v2/model/pytorch/best.pt
v2/test/F1_curve.png
v2/test/PR_curve.png
v2/test/P_curve.png
v2/test/R_curve.png
v2/test/confusion_matrix.png
v2/test/confusion_matrix_normalized.png
v2/test/val_batch0_labels.jpg
v2/test/val_batch0_pred.jpg
v2/test/val_batch1_labels.jpg
v2/test/val_batch1_pred.jpg
v2/test/val_batch2_labels.jpg
v2/test/val_batch2_pred.jpg
v2/train-val/F1_curve.png
v2/train-val/PR_curve.png
v2/train-val/P_curve.png
v2/train-val/R_curve.png
v2/train-val/args.yaml
v2/train-val/confusion_matrix.png
v2/train-val/confusion_matrix_normalized.png
v2/train-val/events.out.tfevents.1738747287.yolo-training-pipeline-b82kg-system-container-impl-2306871192.63.0
v2/train-val/labels.jpg
v2/train-val/labels_correlogram.jpg
v2/train-val/results.csv
v2/train-val/results.png
v2/train-val/train_batch0.jpg
v2/train-val/train_batch1.jpg
v2/train-val/train_batch2.jpg
v2/train-val/train_batch27555.jpg
v2/train-val/train_batch27556.jpg
v2/train-val/train_batch27557.jpg
v2/train-val/val_batch0_labels.jpg
v2/train-val/val_batch0_pred.jpg
v2/train-val/val_batch1_labels.jpg
v2/train-val/val_batch1_pred.jpg
v2/train-val/val_batch2_labels.jpg
v2/train-val/val_batch2_pred.jpg