| comments | description | keywords |
| --- | --- | --- |
| true | Learn how to train YOLOv5 on your own custom datasets with easy-to-follow steps. Detailed guide on dataset preparation, model selection, and training process. | YOLOv5, custom dataset, model training, object detection, machine learning, AI, YOLO model, PyTorch, dataset preparation, Ultralytics |
📚 This guide explains how to train your own custom dataset using the YOLOv5 model 🚀. Training custom models is a fundamental step in tailoring computer vision solutions to specific real-world applications beyond generic object detection.
First, ensure you have the necessary environment set up. Clone the YOLOv5 repository and install the required dependencies from `requirements.txt`. A Python>=3.8.0 environment with PyTorch>=1.8 is essential. Models and datasets are automatically downloaded from the latest YOLOv5 release if they are not found locally.
```bash
git clone https://github.com/ultralytics/yolov5 # Clone the repository
cd yolov5
pip install -r requirements.txt # Install dependencies
```
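As a quick sanity check after installation (a minimal sketch, not part of the repository), you can confirm the PyTorch version and GPU availability from Python:

```python
import torch

print(f"PyTorch {torch.__version__}")                  # the guide assumes PyTorch>=1.8
print(f"CUDA available: {torch.cuda.is_available()}")  # a GPU is optional but greatly speeds up training
```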
Developing a custom object detection model is an iterative process: collect and organize representative images, label the objects of interest, train a model, deploy it to make predictions, and then use the deployed model to gather edge-case examples that feed into the next round of data collection and training.
Ultralytics HUB offers a streamlined, no-code solution for this entire machine learning operations (MLOps) cycle, including dataset management, model training, and deployment.
!!! question "Licensing"
Ultralytics provides two licensing options to accommodate diverse usage scenarios:
- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license/agpl-v3) open-source license is ideal for students, researchers, and enthusiasts passionate about open collaboration and knowledge sharing. It requires derived works to be shared under the same license. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for full details.
- **Enterprise License**: Designed for commercial applications, this license permits the seamless integration of Ultralytics software and AI models into commercial products and services without the open-source stipulations of AGPL-3.0. If your project requires commercial deployment, request an [Enterprise License](https://www.ultralytics.com/license).
Explore our licensing options further on the [Ultralytics Licensing](https://www.ultralytics.com/license) page.
Before initiating training, dataset preparation is essential: YOLOv5 models require labeled data to learn the visual characteristics of your object classes, so organizing the dataset correctly is key.
dataset.yaml
The dataset configuration file (e.g., `coco128.yaml`) outlines the dataset's structure, class names, and paths to image directories. COCO128 serves as a small example dataset, comprising the first 128 images from the extensive COCO dataset. It is useful for quickly testing the training pipeline and diagnosing potential issues like overfitting.

The `dataset.yaml` file structure includes:
- `path`: The root directory containing the dataset.
- `train`, `val`, `test`: Relative paths from `path` to directories containing images, or to text files listing image paths, for the training, validation, and testing sets.
- `names`: A dictionary mapping class indices (starting from 0) to their corresponding class names.

Below is the structure for `coco128.yaml` (view on GitHub):
```yaml
# Dataset root directory relative to the yolov5 directory
path: coco128

# Train/val/test sets: specify directories, *.txt files, or lists
train: images/train2017 # 128 images for training
val: images/train2017 # 128 images for validation
test: # Optional path to test images

# Classes (example using 80 COCO classes)
names:
    0: person
    1: bicycle
    2: car
    # ... (remaining COCO classes)
    77: teddy bear
    78: hair drier
    79: toothbrush
```
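If you want to sanity-check a configuration file before training, a small sketch like the following (using PyYAML, which is already listed in `requirements.txt`; adjust the filename to wherever your YAML actually lives) parses it and reports the basics:

```python
import yaml

with open("coco128.yaml") as f:  # replace with the path to your own dataset.yaml
    cfg = yaml.safe_load(f)

print("Dataset root:", cfg["path"])
print("Train images:", cfg["train"])
print("Number of classes:", len(cfg["names"]))
```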
While manual labeling using tools is a common approach, the process can be time-consuming. Recent advancements in foundation models offer possibilities for automating or semi-automating the annotation process, potentially speeding up dataset creation significantly. Models such as Google Gemini, SAM2, and YOLOWorld, for example, can assist with generating candidate labels.
Using these models can provide a "pre-labeling" step, reducing the manual effort required. However, it's crucial to review and refine automatically generated labels to ensure accuracy and consistency, as the quality directly impacts the performance of your trained YOLOv5 model. After generating (and potentially refining) your labels, ensure they adhere to the YOLO format: one `*.txt` file per image, with each line representing an object as `class_index x_center y_center width height` (normalized coordinates, zero-indexed class). If an image has no objects of interest, no corresponding `*.txt` file is needed.
The YOLO format `*.txt` file specifications are precise:

- Each row describes one object in the format `class_index x_center y_center width height`.
- Coordinates must be normalized: divide `x_center` and `width` by the image's total width, and divide `y_center` and `height` by the image's total height.
- Class indices are zero-indexed (the first class is represented by `0`, the second by `1`, and so forth).

For example, a label file for an image containing two 'person' objects (class index `0`) and one 'tie' object (class index `27`) would look like this:
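The exact values depend on the image, but an illustrative label file (with made-up normalized coordinates) for that combination of objects might read:

```
0 0.48 0.63 0.69 0.71
0 0.74 0.52 0.31 0.93
27 0.36 0.79 0.14 0.17
```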
Structure your datasets directory as illustrated below. By default, YOLOv5 anticipates the dataset directory (e.g., `/coco128`) to reside within a `/datasets` folder located adjacent to the `/yolov5` repository directory.
YOLOv5 automatically locates the labels for each image by substituting the last instance of `/images/` in the image path with `/labels/`. For example:
```bash
../datasets/coco128/images/im0.jpg # Path to the image file
../datasets/coco128/labels/im0.txt # Path to the corresponding label file
```
The recommended directory structure is:
```
/datasets/
└── coco128/ # Dataset root
    ├── images/
    │   ├── train2017/ # Training images
    │   │   ├── 000000000009.jpg
    │   │   └── ...
    │   └── val2017/ # Validation images (optional if using same set for train/val)
    │       └── ...
    └── labels/
        ├── train2017/ # Training labels
        │   ├── 000000000009.txt
        │   └── ...
        └── val2017/ # Validation labels (optional if using same set for train/val)
            └── ...
```
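Because of this convention, it is easy to audit a dataset before training. The snippet below is a rough sketch (not part of YOLOv5; adjust the directory and image extension to your data) that lists images with no matching label file:

```python
from pathlib import Path

image_dir = Path("../datasets/coco128/images/train2017")  # example path; point this at your own images

missing = []
for img in sorted(image_dir.glob("*.jpg")):
    # Mirror YOLOv5's convention: swap the last "/images/" for "/labels/" and the suffix for ".txt"
    head, tail = str(img).rsplit("/images/", 1)
    label = Path(f"{head}/labels/{tail}").with_suffix(".txt")
    if not label.exists():
        missing.append(img.name)

# Images without a label file are treated as backgrounds (no objects), which may or may not be intended
print(f"{len(missing)} image(s) have no label file")
```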
Choose a pretrained model to initiate the training process. Starting with pretrained weights significantly accelerates learning and improves performance compared to training from scratch. YOLOv5 offers various model sizes, each balancing speed and accuracy differently. For example, YOLOv5s is the second-smallest model and offers a strong balance of speed and accuracy, making it suitable for resource-constrained environments. Consult the README table for a detailed comparison of all available models.
Begin the model training using the `train.py` script. Essential arguments include:

- `--img`: Defines the input image size (e.g., `--img 640`). Larger sizes generally yield better accuracy but require more GPU memory.
- `--batch`: Determines the batch size (e.g., `--batch 16`). Choose the largest size your GPU can handle.
- `--epochs`: Specifies the total number of training epochs (e.g., `--epochs 100`). One epoch represents a full pass over the entire training dataset.
- `--data`: Path to your `dataset.yaml` file (e.g., `--data coco128.yaml`).
- `--weights`: Path to the initial weights file. Using pretrained weights (e.g., `--weights yolov5s.pt`) is highly recommended for faster convergence and superior results. To train from scratch (not advised unless you have a very large dataset and specific needs), use `--weights '' --cfg yolov5s.yaml`.

Pretrained weights are automatically downloaded from the latest YOLOv5 release if not found locally.
```bash
# Example: Train YOLOv5s on the COCO128 dataset for 3 epochs
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```
!!! tip "Optimize Training Speed"
💡 Employ `--cache ram` or `--cache disk` to cache dataset images in [RAM](https://en.wikipedia.org/wiki/Random-access_memory) or local disk, respectively. This dramatically accelerates training, particularly when dataset I/O (Input/Output) operations are a bottleneck. Note that this requires substantial RAM or disk space.
!!! tip "Local Data Storage"
💡 Always train using datasets stored locally. Accessing data from network drives (like Google Drive) or remote storage can be significantly slower and impede training performance. Copying your dataset to a local SSD is often the best practice.
All training outputs, including weights and logs, are saved in the `runs/train/` directory. Each training session creates a new subdirectory (e.g., `runs/train/exp`, `runs/train/exp2`, etc.). For an interactive, hands-on experience, explore the training section in our official tutorial notebooks.
YOLOv5 seamlessly integrates with various tools for visualizing training progress, evaluating results, and monitoring performance in real-time.
Comet is fully integrated for comprehensive experiment tracking. Visualize metrics live, save hyperparameters, manage datasets and model checkpoints, and analyze model predictions using interactive Comet Custom Panels.
Getting started is straightforward:
```bash
pip install comet_ml                   # 1. Install Comet library
export COMET_API_KEY=YOUR_API_KEY_HERE # 2. Set your Comet API key (create a free account at Comet.ml)
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt # 3. Train your model - Comet automatically logs everything!
```
Dive deeper into the supported features in our Comet Integration Guide. Learn more about Comet's capabilities from their official documentation, or try the Comet Colab Notebook for a live demo.
ClearML integration enables detailed experiment tracking, dataset version management, and even remote execution of training runs. Activate ClearML with these simple steps:

1. Install the package: `pip install clearml`
2. Run `clearml-init` once to connect to your ClearML server (either self-hosted or the free tier).

ClearML automatically captures experiment details, model uploads, comparisons, uncommitted code changes, and installed packages, ensuring full reproducibility. You can easily schedule training tasks on remote agents and manage dataset versions using ClearML Data. Explore the ClearML Integration Guide for comprehensive details.
Training results are automatically logged using TensorBoard and saved as CSV files within the specific experiment directory (e.g., `runs/train/exp`). Logged data includes training and validation losses along with performance metrics such as precision, recall, and mAP.
The `results.csv` file is updated after every epoch and is plotted as `results.png` once training concludes. You can also plot any `results.csv` file manually using the provided utility function:
```python
from utils.plots import plot_results

# Plot results from a specific training run directory
plot_results("runs/train/exp/results.csv")  # This will generate 'results.png' in the same directory
```
Upon successful completion of training, the best performing model checkpoint (`best.pt`) is saved and ready for deployment or further refinement. Potential next steps include running inference on new images, validating accuracy on a held-out test set, or exporting the model to deployment formats such as ONNX or TensorRT.
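For example, a common next step is to load `best.pt` through the PyTorch Hub interface and run inference from Python. A minimal sketch, assuming the default experiment directory from the example above:

```python
import torch

# Load the custom-trained checkpoint (adjust the path to your own training run)
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

results = model("https://ultralytics.com/images/zidane.jpg")  # image URL, local path, PIL image, or array
results.print()  # per-class summary of the detections
```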
Ultralytics provides ready-to-use environments equipped with essential dependencies like CUDA, cuDNN, Python, and PyTorch, facilitating a smooth start.
This badge indicates that all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing successfully. These rigorous CI tests cover the core functionalities, including training, validation, inference, export, and benchmarks, across macOS, Windows, and Ubuntu operating systems. Tests are executed automatically every 24 hours and upon each code commit, ensuring consistent stability and optimal performance.
Training YOLOv5 on a custom dataset involves several key steps:
1. **Prepare your dataset**: Collect images, annotate them, and organize them into `train/` and `val/` (and optionally `test/`) directories. Consider using models like Google Gemini, SAM2, or YOLOWorld to assist with or automate the labeling process (see Section 1.2).
2. **Set up your environment**: Clone the YOLOv5 repository and install the dependencies with `pip install -r requirements.txt`:

    ```bash
    git clone https://github.com/ultralytics/yolov5
    cd yolov5
    pip install -r requirements.txt
    ```

3. **Create the dataset configuration**: Describe the dataset paths and class names in a `dataset.yaml` file.
4. **Start training**: Run the `train.py` script, providing paths to your `dataset.yaml`, desired pretrained weights (e.g., `yolov5s.pt`), image size, batch size, and the number of epochs:

    ```bash
    python train.py --img 640 --batch 16 --epochs 100 --data path/to/your/dataset.yaml --weights yolov5s.pt
    ```
Ultralytics HUB is a comprehensive platform designed to streamline the entire YOLO model development lifecycle, often without needing to write any code. Key benefits include streamlined dataset management, no-code model training in the cloud, and simplified deployment, mirroring the MLOps cycle described earlier in this guide.
For a practical walkthrough, check out our blog post: How to Train Your Custom Models with Ultralytics HUB.
Whether you annotate manually or use automated tools (like those mentioned in Section 1.2), the final labels must be in the specific YOLO format required by YOLOv5:

- Create one `.txt` file for each image. The filename should match the image filename (e.g., `image1.jpg` corresponds to `image1.txt`). Place these files in a `labels/` directory parallel to your `images/` directory (e.g., `../datasets/mydataset/labels/train/`).
- Each line in a `.txt` file represents one object annotation and follows the format `class_index center_x center_y width height`.
- Coordinates (`center_x`, `center_y`, `width`, `height`) must be normalized (values between 0.0 and 1.0) relative to the image's dimensions.
- Class indices are zero-indexed (the first class is `0`, the second is `1`, etc.).

Many manual annotation tools offer direct export to YOLO format. If using automated models, you will need scripts or processes to convert their output (e.g., bounding box coordinates, segmentation masks) into this specific normalized text format, as illustrated in the sketch below. Ensure your final dataset structure adheres to the example provided in the guide. For more details, see our Data Collection and Annotation Guide.
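As an illustration of such a conversion (a hypothetical helper, not part of YOLOv5), the following turns a pixel-space box given as `x_min, y_min, x_max, y_max` into a normalized YOLO label line:

```python
def to_yolo_line(class_index, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert one pixel-space bounding box into a normalized YOLO label line."""
    center_x = (x_min + x_max) / 2 / img_w
    center_y = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_index} {center_x:.6f} {center_y:.6f} {width:.6f} {height:.6f}"


# Example: a box from (100, 50) to (300, 350) px in a 1280x720 image, class 0
print(to_yolo_line(0, 100, 50, 300, 350, 1280, 720))  # -> "0 0.156250 0.277778 0.156250 0.416667"
```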
Ultralytics provides flexible licensing tailored to different needs: the open-source AGPL-3.0 License and the commercial Enterprise License, both described in the Licensing section above.
Select the license that aligns best with your project's requirements and distribution model.