This example demonstrates how to perform inference with Ultralytics YOLOv8 and YOLOv5 models in C++ using the OpenCV DNN module.
Follow these steps to set up and run the C++ inference example:
```bash
# 1. Clone the Ultralytics repository
git clone https://github.com/ultralytics/ultralytics
cd ultralytics

# 2. Install the Ultralytics Python package (needed for exporting models)
pip install .

# 3. Navigate to the C++ example directory
cd examples/YOLOv8-CPP-Inference

# 4. Export Models: Add yolov8*.onnx and/or yolov5*.onnx models (see export instructions below)
#    Place the exported ONNX models in the current directory (YOLOv8-CPP-Inference).

# 5. Update Source Code: Edit main.cpp and set the 'projectBasePath' variable
#    to the absolute path of the 'YOLOv8-CPP-Inference' directory on your system.
#    Example: std::string projectBasePath = "/path/to/your/ultralytics/examples/YOLOv8-CPP-Inference";

# 6. Configure OpenCV DNN Backend (Optional - CUDA):
#    - The default CMakeLists.txt attempts to use CUDA for GPU acceleration with OpenCV DNN.
#    - If your OpenCV build doesn't support CUDA/cuDNN, or you want CPU inference,
#      remove the CUDA-related lines from CMakeLists.txt.
#    - See the C++ sketch after these commands for how backend selection looks in code.

# 7. Build the project
mkdir build
cd build
cmake ..
make

# 8. Run the inference executable
./Yolov8CPPInference
```
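
If it helps to see what the example does under the hood, here is a minimal sketch of loading an exported ONNX model with OpenCV DNN, selecting a backend, and running a forward pass. This is not the example's `Inference` class: the filenames (`yolov8s.onnx`, `bus.jpg`) and the square 640x640 input size are illustrative assumptions, and the blob size must match the `imgsz` you exported with.

```cpp
// Minimal sketch (assumptions noted above), not the example's full Inference class.
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main() {
    // Load an exported model; "yolov8s.onnx" is an assumed filename.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("yolov8s.onnx");

    // With a CUDA-enabled OpenCV build (what the default CMakeLists.txt targets):
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    // CPU fallback if your OpenCV build lacks CUDA/cuDNN support:
    // net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    // net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat image = cv::imread("bus.jpg");  // assumed test image
    // blobFromImage: scale pixels to [0,1], resize to the export resolution
    // (cv::Size is width x height -- it must match the imgsz you exported with),
    // and swap BGR -> RGB.
    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());
    std::cout << "Output shape: " << outputs[0].size << std::endl;  // e.g. 1 x 84 x 8400
    return 0;
}
```

The actual example wraps this flow, plus post-processing, in the `Inference` class described below.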
You need to export your trained PyTorch models to the ONNX format to use them with OpenCV DNN.
Exporting Ultralytics YOLOv8 Models:

Use the Ultralytics CLI to export. Ensure you specify the desired `imgsz` and `opset`. For compatibility with this example, `opset=12` is recommended.

```bash
yolo export model=yolov8s.pt imgsz=640,480 format=onnx opset=12 # Example: 640x480 resolution
```
Exporting YOLOv5 Models:

Use the `export.py` script from the YOLOv5 repository structure (included within the cloned `ultralytics` repo).

```bash
# Assuming you are in the 'ultralytics' base directory after cloning
python export.py --weights yolov5s.pt --imgsz 640 480 --include onnx --opset 12 # Example: 640x480 resolution
```
Place the generated .onnx files (e.g., `yolov8s.onnx`, `yolov5s.onnx`) into the `ultralytics/examples/YOLOv8-CPP-Inference/` directory.
Example Output:

yolov8s.onnx: [example detection output]

yolov5s.onnx: [example detection output]
The example commands above use rectangular (640x480) `imgsz` exports. The `main` branch version includes a simple GUI wrapper using Qt. However, the core logic resides in the `Inference` class (`inference.h`, `inference.cpp`), which demonstrates how to handle the output differences between YOLOv5 and YOLOv8 models, effectively transposing YOLOv8's output format to match the structure expected from YOLOv5 for consistent post-processing; a rough sketch of this idea follows.
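
To make that concrete, here is a hedged sketch of the transposition idea (not a copy of `inference.cpp`). A YOLOv8 detection head exported at 640x640 with 80 classes typically emits a [1 x 84 x 8400] blob (attribute rows, candidate-box columns), while YOLOv5 emits one candidate box per row (e.g., [1 x 25200 x 85]); reshaping the YOLOv8 blob to 2-D and transposing it yields YOLOv5-style rows. The function name and exact shapes here are illustrative assumptions.

```cpp
#include <opencv2/core.hpp>

// Hedged sketch: view a YOLOv8-style output blob [1 x attributes x candidates]
// as 2-D and transpose it into a YOLOv5-style [candidates x attributes] matrix,
// i.e. one detection per row (box coordinates followed by per-class scores).
cv::Mat toRowPerDetection(const cv::Mat& yolov8Output) {
    int attributes = yolov8Output.size[1];  // e.g. 84 = 4 box coords + 80 class scores
    int candidates = yolov8Output.size[2];  // e.g. 8400 anchor-free candidates
    cv::Mat flat = yolov8Output.reshape(1, {attributes, candidates});
    cv::Mat rows;
    cv::transpose(flat, rows);  // now [candidates x attributes], YOLOv5-style
    return rows;
}
```

A YOLOv8 forward-pass output would go through a step like this before the shared detection loop; YOLOv5 outputs already arrive row-per-detection and can skip it.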
Contributions are welcome! If you find any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request. See our Contributing Guide for more details.