
MT-YOLOv6

Introduction

YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.

YOLOv6-nano achieves 35.0 mAP on the COCO val2017 dataset at 1242 FPS on a T4 GPU (TensorRT FP16, batch size 32); YOLOv6-s achieves 43.1 mAP at 520 FPS under the same settings.
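For intuition, batched throughput translates directly into per-batch latency. A quick back-of-the-envelope check using the batch-size-32 T4 numbers above (plain-Python arithmetic, not part of the repo):

```python
def batch_latency_ms(fps: float, batch_size: int) -> float:
    """Time to process one batch, in milliseconds, given throughput in images/s."""
    return batch_size / fps * 1000

# YOLOv6-nano: 1242 FPS at batch size 32 (T4, TensorRT FP16)
print(round(batch_latency_ms(1242, 32), 1))  # 25.8 ms per 32-image batch
# YOLOv6-s: 520 FPS at batch size 32
print(round(batch_latency_ms(520, 32), 1))   # 61.5 ms per 32-image batch
```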

YOLOv6 builds on the following methods:

  • Hardware-friendly Design for Backbone and Neck
  • Efficient Decoupled Head with SIoU Loss
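SIoU extends the standard IoU objective with additional angle, distance, and shape terms. The basic intersection-over-union between two axis-aligned boxes, which all IoU-family losses build on, can be sketched as follows (a plain-Python illustration, not the repo's implementation):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    # Intersection rectangle: overlap of the two boxes
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```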

Coming soon

  • YOLOv6 m/l/x model.
  • Deployment for MNN/TNN/NCNN/CoreML...
  • Quantization tools

Quick Start

Install

git clone https://github.com/meituan/YOLOv6
cd YOLOv6
pip install -r requirements.txt

Inference

First, download a pretrained model from the YOLOv6 releases page.

Second, run inference with tools/infer.py:

python tools/infer.py --weights yolov6s.pt --source img.jpg
# or --weights yolov6n.pt; --source also accepts a directory of images

Training

Single GPU

python tools/train.py --batch 32 --conf configs/yolov6s.py --data data/coco.yaml --device 0
# or --conf configs/yolov6n.py for the nano model

Multi-GPU (DDP mode recommended)

python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
# or --conf configs/yolov6n.py for the nano model
  • conf: select a config file to specify the network, optimizer, and hyperparameters
  • data: prepare the COCO dataset and specify the dataset paths in data.yaml
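For reference, dataset files in the YOLO-family style typically look like the sketch below. The field names and paths here are illustrative assumptions, so check the data/coco.yaml shipped with the repo for the exact schema:

```yaml
# Illustrative dataset config (verify against data/coco.yaml in the repo)
train: ../coco/images/train2017       # path to training images
val: ../coco/images/val2017           # path to validation images
nc: 80                                # number of classes
names: ['person', 'bicycle', 'car']   # class names (truncated here; COCO has 80)
```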

Evaluation

Reproduce mAP on COCO val2017 dataset

python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --task val
# or --weights yolov6n.pt

Deployment

Tutorials

Benchmark

| Model | Size | mAP val 0.5:0.95 | Speed V100 fp16 b32 (ms) | Speed V100 fp32 b32 (ms) | Speed T4 trt fp16 b1 (fps) | Speed T4 trt fp16 b32 (fps) | Params (M) | Flops (G) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-n | 416 | 30.8 | 0.3 | 0.4 | 1100 | 2716 | 4.3 | 4.7 |
| YOLOv6-n | 640 | 35.0 | 0.5 | 0.7 | 788 | 1242 | 4.3 | 11.1 |
| YOLOv6-tiny | 640 | 41.3 | 0.9 | 1.5 | 425 | 602 | 15.0 | 36.7 |
| YOLOv6-s | 640 | 43.1 | 1.0 | 1.7 | 373 | 520 | 17.2 | 44.2 |
  • Comparisons of the mAP and speed of different object detectors are tested on the COCO val2017 dataset.
  • Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
  • Params and Flops of YOLOv6 are estimated on the deployed model.
  • Speed results of other methods were measured in our environment using the official codebase and models when results were not available from the corresponding official release.