---
comments: true
description: Discover Ultralytics YOLO - the latest in real-time object detection and image segmentation. Learn its features and maximize its potential in your projects.
keywords: Ultralytics, YOLO, YOLO11, object detection, image segmentation, deep learning, computer vision, AI, machine learning, documentation, tutorial
---
Introducing Ultralytics YOLO11, the latest version of the acclaimed real-time object detection and image segmentation model. YOLO11 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs.
Explore the Ultralytics Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, this hub aims to maximize YOLO's potential in your projects.
:material-clock-fast:{ .lg .middle } Getting Started
Install `ultralytics` with pip and get up and running in minutes to train a YOLO model
:material-image:{ .lg .middle } Predict
Predict on new images, videos and streams with YOLO
:fontawesome-solid-brain:{ .lg .middle } Train a Model
Train a new YOLO model on your own custom dataset from scratch, or fine-tune a pretrained model
:material-magnify-expand:{ .lg .middle } Explore Computer Vision Tasks
Discover YOLO tasks like detect, segment, classify, pose, OBB and track
:rocket:{ .lg .middle } Explore YOLO11 NEW
Discover Ultralytics' latest state-of-the-art YOLO11 models and their capabilities
:material-scale-balance:{ .lg .middle } Open Source, AGPL-3.0
Ultralytics offers two YOLO licenses: AGPL-3.0 and Enterprise. Explore YOLO on GitHub.
Watch: How to Train a YOLO11 model on Your Custom Dataset in Google Colab.
YOLO (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO gained popularity for its high speed and accuracy.
Ultralytics offers two licensing options to accommodate diverse use cases:

- **AGPL-3.0 License**: This OSI-approved open-source license is ideal for students, researchers, and enthusiasts, promoting open collaboration and knowledge sharing.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services without the open-source requirements of AGPL-3.0.
Our licensing strategy is designed to ensure that any improvements to our open-source projects are returned to the community. We hold the principles of open source close to our hearts ❤️, and our mission is to guarantee that our contributions can be utilized and expanded upon in ways that are beneficial to all.
Object detection has evolved significantly over the years, from traditional computer vision techniques to advanced deep learning models. The YOLO family of models has been at the forefront of this evolution, consistently pushing the boundaries of what's possible in real-time object detection.
YOLO's unique approach treats object detection as a single regression problem, predicting bounding boxes and class probabilities directly from full images in one evaluation. This revolutionary method has made YOLO models significantly faster than previous two-stage detectors while maintaining high accuracy.
With each new version, YOLO has introduced architectural improvements and innovative techniques that have enhanced performance across various metrics. YOLO11 continues this tradition by incorporating the latest advancements in computer vision research, offering even better speed-accuracy trade-offs for real-world applications.
Ultralytics YOLO is the latest advancement in the acclaimed YOLO (You Only Look Once) series for real-time object detection and image segmentation. It builds on previous versions by introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLO supports various vision AI tasks such as detection, segmentation, pose estimation, tracking, and classification. Its state-of-the-art architecture ensures superior speed and accuracy, making it suitable for diverse applications, including edge devices and cloud APIs.
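Because every task shares the same Python API, switching between tasks is largely a matter of loading different pretrained weights. The snippet below is a minimal sketch; the model names assume the standard Ultralytics checkpoint naming scheme for the nano ("n") variants:

!!! example "Loading task-specific models"

=== "Python"

```python
from ultralytics import YOLO

# Each task has its own pretrained checkpoint (names assume the
# standard Ultralytics naming scheme for the nano variants)
detect_model = YOLO("yolo11n.pt")  # object detection
segment_model = YOLO("yolo11n-seg.pt")  # instance segmentation
classify_model = YOLO("yolo11n-cls.pt")  # image classification
pose_model = YOLO("yolo11n-pose.pt")  # pose estimation
obb_model = YOLO("yolo11n-obb.pt")  # oriented bounding boxes
```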
Getting started with YOLO is quick and straightforward. You can install the Ultralytics package using pip and get up and running in minutes. Here's a basic installation command:
!!! example "Installation using pip"
=== "CLI"
```bash
pip install ultralytics
```
For a comprehensive step-by-step guide, visit our Quickstart page. This resource will help you with installation instructions, initial setup, and running your first model.
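Once installed, a quick prediction is an easy way to confirm the setup works. This is a minimal sketch; the image path is a placeholder:

!!! example "Quick prediction sketch"

=== "Python"

```python
from ultralytics import YOLO

# Load a pretrained detection model (weights download automatically on first use)
model = YOLO("yolo11n.pt")

# Run inference; replace the placeholder path with a real image
results = model("path/to/image.jpg")

# Inspect detections for the first image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, box corners
```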
Training a custom YOLO model on your dataset involves a few detailed steps:

1. Prepare your annotated dataset.
2. Configure the training parameters in a YAML file.
3. Use the `yolo TASK train` command to start training. (Each `TASK` has its own argument.)

Here's example code for the Object Detection Task:
!!! example "Train Example for Object Detection Task"
=== "Python"
```python
from ultralytics import YOLO
# Load a pre-trained YOLO model (you can choose n, s, m, l, or x versions)
model = YOLO("yolo11n.pt")
# Start training on your custom dataset
model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Train a YOLO model from the command line
yolo detect train data=path/to/dataset.yaml epochs=100 imgsz=640
```
For a detailed walkthrough, check out our Train a Model guide, which includes examples and tips for optimizing your training process.
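The `data` argument in the training examples points to a dataset YAML file. As a rough sketch, a minimal detection dataset config looks like this; all paths and class names below are hypothetical placeholders:

```yaml
# Hypothetical dataset.yaml for a two-class detection dataset
path: /datasets/my-dataset # dataset root directory
train: images/train # training images, relative to 'path'
val: images/val # validation images, relative to 'path'

# Class index to class name mapping
names:
  0: cat
  1: dog
```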
Ultralytics offers two licensing options for YOLO:

- **AGPL-3.0 License**: An OSI-approved open-source license suited for students, researchers, and open collaboration.
- **Enterprise License**: Tailored for commercial applications, allowing integration of Ultralytics software and AI models into proprietary products without the open-source obligations of AGPL-3.0.
For more details, visit our Licensing page.
Ultralytics YOLO supports efficient and customizable multi-object tracking. To utilize tracking capabilities, you can use the `yolo track` command, as shown below:
!!! example "Example for Object Tracking on a Video"
=== "Python"
```python
from ultralytics import YOLO
# Load a pre-trained YOLO model
model = YOLO("yolo11n.pt")
# Start tracking objects in a video
# You can also use live video streams or webcam input
model.track(source="path/to/video.mp4")
```
=== "CLI"
```bash
# Perform object tracking on a video from the command line
# You can specify different sources like webcam (0) or RTSP streams
yolo track source=path/to/video.mp4
```
For a detailed guide on setting up and running object tracking, check our Track Mode documentation, which explains the configuration and practical applications in real-time scenarios.
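In Python, tracking results also expose per-object track IDs, which is useful for counting objects or analyzing trajectories. A minimal sketch, assuming a local video file (the path is a placeholder):

!!! example "Reading track IDs"

=== "Python"

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Stream results frame by frame; persist=True carries track IDs across frames
for result in model.track(source="path/to/video.mp4", stream=True, persist=True):
    boxes = result.boxes
    if boxes.id is not None:  # IDs can be None before any track is established
        print(boxes.id.int().tolist())  # track IDs of objects in this frame
```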