This example provides a practical guide to performing inference with Ultralytics YOLOv8 models in C++, leveraging ONNX Runtime and the OpenCV library. It's designed for developers looking to integrate YOLOv8 into C++ applications for efficient object detection.
Thanks to recent updates in Ultralytics, YOLOv8 models now include a Transpose
operation, aligning their output shape with YOLOv5. This allows the C++ code in this project to run inference seamlessly for YOLOv5, YOLOv7, and YOLOv8 models exported to the ONNX format.
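As a quick sanity check, you can query an exported model's output shape with ONNX Runtime's C++ API. The sketch below is illustrative, not part of the project's code; the model path and the example dimensions in the comment are assumptions.

```cpp
// Minimal sketch: print the first output's shape of an exported model to
// confirm the layout described above. The model path here is an assumption.
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "shape-check");
    Ort::SessionOptions options;
    // Note: on Windows, the Ort::Session constructor takes a wchar_t* path.
    Ort::Session session(env, "yolov8n.onnx", options);

    Ort::TypeInfo info = session.GetOutputTypeInfo(0);
    auto shape = info.GetTensorTypeAndShapeInfo().GetShape();
    for (int64_t d : shape) std::cout << d << ' ';  // e.g. 1 84 8400, or the transposed variant
    std::cout << '\n';
    return 0;
}
```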
You can export your trained Ultralytics YOLO models to the ONNX format required by this project using the Ultralytics export mode.
```python
from ultralytics import YOLO

# Load a YOLOv8 model (e.g., yolov8n.pt)
model = YOLO("yolov8n.pt")

# Export the model to ONNX format:
# - opset=12 is recommended for compatibility
# - simplify=True optimizes the model graph
# - dynamic=False ensures a fixed input size, often better for C++ deployment
# - imgsz=640 sets the input image size
model.export(format="onnx", opset=12, simplify=True, dynamic=False, imgsz=640)
print("Model exported successfully to yolov8n.onnx")
```
```bash
# Export the model using the command line
yolo export model=yolov8n.pt format=onnx opset=12 simplify=True dynamic=False imgsz=640
```
For more details on exporting models, refer to the Ultralytics Export documentation.
To potentially gain further performance on compatible hardware (like NVIDIA GPUs), you can convert the exported FP32 ONNX model to FP16.
```python
import onnx
from onnxconverter_common import float16  # pip install onnxconverter-common

# Load your FP32 ONNX model
fp32_model_path = "yolov8n.onnx"
model = onnx.load(fp32_model_path)

# Convert the model to FP16
model_fp16 = float16.convert_float_to_float16(model)

# Save the FP16 model
fp16_model_path = "yolov8n_fp16.onnx"
onnx.save(model_fp16, fp16_model_path)
print(f"Model converted and saved to {fp16_model_path}")
```
This example uses class names defined in a YAML file. You'll need the coco.yaml file, which corresponds to the standard COCO dataset classes; you can download it from the Ultralytics repository (it ships at ultralytics/cfg/datasets/coco.yaml). Save this file in the same directory where you plan to run the executable, or adjust the path in the C++ code accordingly.
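If you want to see how those names can be loaded without pulling in a YAML library, here is a rough sketch (not the project's exact code). It assumes the standard coco.yaml layout, where indented `index: name` lines follow a top-level `names:` key.

```cpp
// Rough sketch (assumes indented "index: name" entries under a top-level
// "names:" key): read class names from coco.yaml without a YAML dependency.
#include <fstream>
#include <string>
#include <vector>

std::vector<std::string> ReadClassNames(const std::string& path) {
    std::ifstream file(path);
    std::vector<std::string> names;
    std::string line;
    bool inNames = false;
    while (std::getline(file, line)) {
        if (line.rfind("names:", 0) == 0) {  // top-level "names:" key found
            inNames = true;
            continue;
        }
        if (inNames) {
            if (line.empty() || line[0] != ' ') break;  // un-indented line ends the block
            std::string::size_type colon = line.find(':');
            if (colon != std::string::npos)
                names.push_back(line.substr(colon + 2));  // text after ": "
        }
    }
    return names;
}
```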
Ensure you have the following dependencies installed:
| Dependency | Version | Notes |
| --- | --- | --- |
| ONNX Runtime | >=1.14.1 | Download pre-built binaries or build from source. Use the GPU package if using CUDA. |
| OpenCV | >=4.0.0 | Required for image loading and preprocessing (see the sketch below). |
| C++ Compiler | C++17 support | Needed for features like `<filesystem>` (GCC, Clang, MSVC). |
| CMake | >=3.18 | Cross-platform build system generator. Version 3.18+ recommended for better CUDA support discovery. |
| CUDA Toolkit (optional) | >=11.4, <12.0 | Required for GPU acceleration via ONNX Runtime's CUDA Execution Provider. Must be CUDA 11.x. |
| cuDNN (required with CUDA) | 8.x | Required by the CUDA Execution Provider. Must be a cuDNN 8.x build compatible with your CUDA 11.x version. |
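To illustrate the OpenCV preprocessing the table refers to, here is a rough sketch, not the project's exact code: the image path is an assumption, the 640x640 size matches the export settings above, and the plain resize inside blobFromImage is a simplification (a real pipeline may letterbox instead).

```cpp
// Rough sketch of typical preprocessing for this pipeline: load an image with
// OpenCV, resize to the exported input size, swap BGR->RGB, scale to [0, 1],
// and pack into NCHW float order ready to wrap in an Ort::Value.
#include <opencv2/opencv.hpp>

cv::Mat Preprocess(const std::string& imagePath) {
    cv::Mat img = cv::imread(imagePath);  // path is an assumption
    cv::Mat blob;
    cv::dnn::blobFromImage(img, blob, 1.0 / 255.0, cv::Size(640, 640),
                           cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    return blob;  // 1x3x640x640 CV_32F
}
```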
Important Notes:

- The project uses the `<filesystem>` library introduced in C++17 for path handling, so your compiler must support C++17.
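As a small illustration of that `<filesystem>` usage, the snippet below fails early when required files are missing; it is not the project's code, and the checked paths are assumptions.

```cpp
// Illustration of the C++17 <filesystem> usage the note refers to: fail early
// if the model or class-name file is missing. Paths here are assumptions.
#include <filesystem>
#include <iostream>

bool CheckFiles() {
    namespace fs = std::filesystem;
    for (const char* p : {"yolov8n.onnx", "coco.yaml"}) {
        if (!fs::exists(p)) {
            std::cerr << "Missing required file: " << p << '\n';
            return false;
        }
    }
    return true;
}
```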
Clone the Repository:

```bash
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics/examples/YOLOv8-ONNXRuntime-CPP
```
Create Build Directory:

```bash
mkdir build && cd build
```
Configure with CMake:
Run CMake to generate build files. You must specify the path to your ONNX Runtime installation directory using `ONNXRUNTIME_ROOT`. Adjust the path according to where you downloaded or built ONNX Runtime.
```bash
# Example for Linux/macOS (adjust path as needed)
cmake .. -DONNXRUNTIME_ROOT=/path/to/onnxruntime

# Example for Windows (adjust path as needed; backslashes or forward slashes both work)
cmake .. -DONNXRUNTIME_ROOT="C:/path/to/onnxruntime"
```
CMake Options:

- `-DONNXRUNTIME_ROOT=<path>`: (Required) Path to the extracted ONNX Runtime library.
- `-DCMAKE_BUILD_TYPE=Release`: (Optional) Build in Release mode for optimizations.
- `-DOpenCV_DIR=/path/to/opencv/build`: (Optional) Path to your OpenCV build directory if CMake does not find OpenCV automatically.

Build the Project: Use the build tool generated by CMake (e.g., Make, Ninja, Visual Studio).
```bash
# Using Make (common on Linux/macOS)
make

# Using CMake's generic build command (works with Make, Ninja, etc.)
cmake --build . --config Release
```
Locate Executable:
The compiled executable (e.g., `yolov8_onnxruntime_cpp`) will be located in the `build` directory.
Before running, ensure:

- The `.onnx` model file (e.g., `yolov8n.onnx`) is accessible.
- The `coco.yaml` file is accessible.

Modify the `main.cpp` file (or create a configuration mechanism) to set the parameters:
```cpp
// Change the parameters as needed.
// Pay attention to your device and the ONNX model type (FP32 or FP16).
YOLO_V8* yoloDetector = new YOLO_V8;
DL_INIT_PARAM params;
params.rectConfidenceThreshold = 0.1;
params.iouThreshold = 0.5;
params.modelPath = "yolov8n.onnx";
params.imgSize = { 640, 640 };
params.cudaEnable = true;
params.modelType = YOLO_DETECT_V8;
yoloDetector->CreateSession(params);
Detector(yoloDetector);
```
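Note that `params.cudaEnable` only works with the GPU build of ONNX Runtime. Conceptually, it corresponds to registering the CUDA Execution Provider on the session options before the session is created; the sketch below shows that standard ONNX Runtime call and is assumed wiring, not the project's exact code.

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch (assumed wiring, not the project's exact code): register the CUDA
// Execution Provider so inference runs on the GPU. Requires the GPU build of
// ONNX Runtime plus matching CUDA 11.x / cuDNN 8.x, per the table above.
Ort::SessionOptions MakeCudaSessionOptions() {
    Ort::SessionOptions sessionOptions;
    OrtCUDAProviderOptions cudaOptions;
    cudaOptions.device_id = 0;  // GPU index
    sessionOptions.AppendExecutionProvider_CUDA(cudaOptions);
    return sessionOptions;
}
```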
Run the executable from the `build` directory:

```bash
./yolov8_onnxruntime_cpp
```
Contributions are welcome! If you find any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request on the main Ultralytics repository.