A real-time object detection and tracking UI built with Ultralytics YOLO11 and OpenCV, designed for interactive demos and seamless integration of tracking overlays. Whether you're just getting started with object tracking or looking to enhance it with additional features, this project provides a solid foundation.
https://github.com/user-attachments/assets/723e919e-555b-4cca-8e60-18e711d4f3b2
- `.pt` models (for GPU devices like NVIDIA Jetson or CUDA-enabled desktops)
- `.param` + `.bin` models (for CPU-only devices like Raspberry Pi or ARM boards)

YOLO-Interactive-Tracking-UI/
├── interactive_tracker.py   # Main Python tracking UI script
└── README.md                # You're here!
| Platform | Model Format | Example Model | GPU Acceleration | Notes |
|---|---|---|---|---|
| Raspberry Pi 4/5 | NCNN (`.param`/`.bin`) | `yolov8n_ncnn_model` | ❌ CPU only | Recommended format for Pi/ARM |
| Jetson Nano | PyTorch (`.pt`) | `yolov8n.pt` | ✅ CUDA | Real-time performance possible |
| Desktop w/ GPU | PyTorch (`.pt`) | `yolov8s.pt` | ✅ CUDA | Best performance |
| CPU-only laptops | NCNN (`.param`/`.bin`) | `yolov8n_ncnn_model` | ❌ | Decent performance (~10–15 FPS) |
Note: Performance may vary based on the specific hardware, model complexity, and input resolution.
Install the core `ultralytics` package:
pip install ultralytics
Tip: Use a virtual environment like `venv` or `conda` (recommended) to manage dependencies.
GPU Support: Install PyTorch based on your system and CUDA version by following the official guide: https://pytorch.org/get-started/locally/
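Before setting `enable_gpu = True`, it's worth confirming that PyTorch can actually see your GPU. A quick check (standard PyTorch, not specific to this project):

```python
import torch

# Prints True on a working CUDA setup; if False, keep enable_gpu = False
# and prefer an NCNN model for CPU inference.
print(torch.cuda.is_available())
```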
For pre-trained Ultralytics YOLO models (e.g., `yolo11s.pt` or `yolov8s.pt`), simply specify the model name in the script parameters (`model_file`). These models will be automatically downloaded and cached. You can also manually download them from Ultralytics Assets Releases and place them in the project folder.
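For example, instantiating a model by name is enough to trigger the download and caching on first use (standard Ultralytics behavior; where exactly the script does this may differ):

```python
from ultralytics import YOLO

# Downloads and caches yolo11s.pt automatically if it is not already present.
model = YOLO("yolo11s.pt")
```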
If you're using a custom-trained YOLO model, ensure the model file is in the project folder or provide its relative path.
For CPU-only devices, export your chosen model (e.g., `yolov8n.pt`) to the NCNN format using the Ultralytics `export` mode, as sketched below.
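A minimal export sketch using the standard Ultralytics Python API (the same can be done with the `yolo export` CLI):

```python
from ultralytics import YOLO

# Produces a "yolov8n_ncnn_model" directory containing the .param and .bin files.
model = YOLO("yolov8n.pt")
model.export(format="ncnn")
```

The resulting directory is what you point `model_file` at when running on CPU.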
Supported Formats:

- `yolo11s.pt` (for GPU with PyTorch)
- `yolov8n_ncnn_model` (directory containing `.param` and `.bin` files for CPU with NCNN)

Edit the global parameters at the top of `interactive_tracker.py`:
# --- Configuration ---
enable_gpu = False # Set True if running with CUDA and PyTorch model
model_file = "yolo11s.pt" # Path to model file (.pt for GPU, _ncnn_model dir for CPU)
show_fps = True # Display current FPS in the top-left corner
show_conf = False # Display confidence score for each detection
save_video = False # Set True to save the output video stream
video_output_path = "interactive_tracker_output.avi" # Output video file name
# --- Detection & Tracking Parameters ---
conf = 0.3 # Minimum confidence threshold for object detection
iou = 0.3 # IoU threshold for Non-Maximum Suppression (NMS)
max_det = 20 # Maximum number of objects to detect per frame
tracker = "bytetrack.yaml" # Tracker configuration: 'bytetrack.yaml' or 'botsort.yaml'
track_args = {
"persist": True, # Keep track history across frames
"verbose": False, # Suppress detailed tracker debug output
}
window_name = "Ultralytics YOLO Interactive Tracking" # Name for the OpenCV display window
# --- End Configuration ---
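For orientation, these settings typically flow into a per-frame Ultralytics tracking call roughly like the sketch below. This is a simplified illustration that assumes the configuration variables above and a webcam source; it omits the interactive click-to-select logic that `interactive_tracker.py` adds on top:

```python
from ultralytics import YOLO
import cv2

model = YOLO(model_file)                 # model_file as set in the configuration above

cap = cv2.VideoCapture(0)                # assumed webcam source; the script's source may differ
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detection + tracking on the current frame with the configured thresholds
    results = model.track(frame, conf=conf, iou=iou, max_det=max_det,
                          tracker=tracker, **track_args)
    cv2.imshow(window_name, results[0].plot())   # draw boxes and track IDs
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```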
- `enable_gpu`: Set to `True` if you have a CUDA-compatible GPU and are using a `.pt` model. Keep `False` for NCNN models or CPU-only execution.
- `model_file`: Ensure this points to the correct model file or directory based on `enable_gpu`.
- `conf`: Adjust the confidence threshold. Lower values detect more objects but may increase false positives.
- `iou`: Set the Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS). Higher values allow more overlapping boxes.
- `tracker`: Choose between available tracker configuration files (ByteTrack, BoT-SORT).

Execute the script from your terminal:
python interactive_tracker.py
- Press the `c` key to cancel the current tracking and select a new object.
- Press the `q` key to quit the application.

If you want to record the tracking session, enable the `save_video` option in the configuration:
save_video = True # Enables video recording
video_output_path = "output.avi" # Customize your output file name (e.g., .mp4, .avi)
The video file will be saved in the project's working directory when you quit the application by pressing `q`.
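For reference, saving the stream is typically handled with OpenCV's `VideoWriter`; the following is a hedged sketch (the codec, frame rate, and variable names here are assumptions, not the script's actual implementation):

```python
import cv2

cap = cv2.VideoCapture(0)                                  # assumed video source
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"XVID")                   # codec suited to .avi output
writer = cv2.VideoWriter("interactive_tracker_output.avi", fourcc, 30.0, (width, height))

# Inside the main loop, write each annotated frame:
#   writer.write(annotated_frame)

# When quitting with 'q', release the writer so the file is finalized:
#   writer.release()
```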
This project is released under the AGPL-3.0 license. For full licensing details, please refer to the Ultralytics Licensing page.
This software is provided "as is" for educational and demonstration purposes. Use it responsibly and at your own risk. The author assumes no liability for misuse or unintended consequences.
Contributions, feedback, and bug reports are welcome! Feel free to open an issue or submit a pull request on the original repository if you have improvements or suggestions.