---
comments: true
description: Explore the Roboflow 100 dataset featuring 100 diverse datasets designed to test object detection models across various domains, from healthcare to video games.
keywords: Roboflow 100, Ultralytics, object detection, dataset, benchmarking, machine learning, computer vision, diverse datasets, model evaluation
---
Roboflow 100, sponsored by Intel, is a groundbreaking object detection benchmark dataset. It includes 100 diverse datasets sampled from over 90,000 public datasets available on Roboflow Universe. This benchmark is specifically designed to test the adaptability of computer vision models, like Ultralytics YOLO models, to various domains, including healthcare, aerial imagery, and video games.
!!! question "Licensing"

    Ultralytics offers two licensing options to accommodate different use cases:

    - **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for more details and visit our [AGPL-3.0 License page](https://www.ultralytics.com/legal/agpl-3-0-software-license).
    - **Enterprise License**: Designed for commercial use, this license allows for the seamless integration of Ultralytics software and AI models into commercial products and services. If your scenario involves commercial applications, please reach out via [Ultralytics Licensing](https://www.ultralytics.com/license).
The Roboflow 100 dataset is organized into seven categories, each containing a unique collection of datasets, images, and classes.
This structure provides a diverse and extensive testing ground for object detection models, reflecting a wide array of real-world application scenarios found in various Ultralytics Solutions.
Dataset benchmarking involves evaluating the performance of machine learning models on specific datasets using standardized metrics. Common metrics include accuracy, mean Average Precision (mAP), and F1-score. You can learn more about these in our YOLO Performance Metrics guide.
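To make the relationship between these metrics concrete, here is a minimal sketch (not part of the Ultralytics API) computing precision, recall, and F1-score from raw detection counts:

```python
# Illustrative helpers: precision, recall, and F1-score from detection counts.
# These functions are a sketch for explanation, not part of the Ultralytics API.


def precision(tp: int, fp: int) -> float:
    """Fraction of predicted detections that are correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0


def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth objects that were detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0


def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0


p, r = precision(tp=80, fp=20), recall(tp=80, fn=40)
print(f"precision={p:.2f} recall={r:.2f} f1={f1_score(p, r):.2f}")
# → precision=0.80 recall=0.67 f1=0.73
```

Mean Average Precision (mAP) extends this idea by averaging precision over recall levels and classes; the Ultralytics validator reports it for you, so you rarely compute it by hand.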
!!! tip "Benchmarking Results"

    Benchmarking results using the provided script will be stored in the `ultralytics-benchmarks/` directory, specifically in `evaluation.txt`.
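If you want to post-process that log programmatically, a small script like the following can help. The sample lines and the regular expression here are assumptions for illustration; the actual layout of `evaluation.txt` may differ, so adapt the pattern to the lines your run produces:

```python
import re
from pathlib import Path

# Hypothetical log lines; the real evaluation.txt layout may differ.
sample = """\
dataset: acl-x-ray mAP: 0.58
dataset: aerial-pool mAP: 0.42
"""
log_path = Path("evaluation_sample.txt")
log_path.write_text(sample)

# Extract (dataset, mAP) pairs with a regex; adjust the pattern to your log format.
pattern = re.compile(r"dataset:\s*(\S+)\s+mAP:\s*([0-9.]+)")
results = {m.group(1): float(m.group(2)) for m in pattern.finditer(log_path.read_text())}

# Report the mean mAP across all parsed datasets.
mean_map = sum(results.values()) / len(results)
print(f"{len(results)} datasets, mean mAP = {mean_map:.3f}")
# → 2 datasets, mean mAP = 0.500
```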
!!! example "Benchmarking Example"

    The following script demonstrates how to programmatically benchmark an Ultralytics YOLO model (e.g., YOLO11s) on all 100 datasets within the Roboflow 100 benchmark using the `RF100Benchmark` class.

    === "Python"
        ```python
        import os
        import shutil
        from pathlib import Path

        from ultralytics.utils.benchmarks import RF100Benchmark

        # Initialize RF100Benchmark and set API key
        benchmark = RF100Benchmark()
        benchmark.set_key(api_key="YOUR_ROBOFLOW_API_KEY")

        # Parse dataset and define file paths
        names, cfg_yamls = benchmark.parse_dataset()
        val_log_file = Path("ultralytics-benchmarks") / "validation.txt"
        eval_log_file = Path("ultralytics-benchmarks") / "evaluation.txt"

        # Run benchmarks on each dataset in RF100
        for ind, path in enumerate(cfg_yamls):
            path = Path(path)
            if path.exists():
                # Fix YAML file and run training
                benchmark.fix_yaml(str(path))
                os.system(f"yolo detect train data={path} model=yolo11s.pt epochs=1 batch=16")

                # Run validation and evaluate
                os.system(f"yolo detect val data={path} model=runs/detect/train/weights/best.pt > {val_log_file} 2>&1")
                benchmark.evaluate(str(path), str(val_log_file), str(eval_log_file), ind)

                # Remove the 'runs' directory
                runs_dir = Path.cwd() / "runs"
                shutil.rmtree(runs_dir)
            else:
                print("YAML file path does not exist")
                continue

        print("RF100 Benchmarking completed!")
        ```
Roboflow 100 is invaluable for various applications related to computer vision and deep learning. Researchers and engineers can leverage this benchmark to test the adaptability and robustness of their models across many distinct domains.
For more ideas and inspiration on real-world applications, explore our guides on practical projects or check out Ultralytics HUB for streamlined model training and deployment.
The Roboflow 100 dataset, including metadata and download links, is available on the official Roboflow 100 GitHub repository. You can access and utilize the dataset directly from there for your benchmarking needs. The Ultralytics `RF100Benchmark` utility simplifies the process of downloading and preparing these datasets for use with Ultralytics models.
Roboflow 100 consists of datasets with diverse images captured from various angles and domains. Below are examples of annotated images included in the RF100 benchmark, showcasing the variety of objects and scenes. Techniques like data augmentation can further enhance the diversity during training.
The diversity seen in the Roboflow 100 benchmark represents a significant advancement from traditional benchmarks, which often focus on optimizing a single metric within a limited domain. This comprehensive approach aids in developing more robust and versatile computer vision models capable of performing well across a multitude of different scenarios.
If you use the Roboflow 100 dataset in your research or development work, please cite the original paper:
!!! quote ""

    === "BibTeX"

        ```bibtex
        @misc{rf100benchmark,
            Author = {Floriana Ciaglia and Francesco Saverio Zuppichini and Paul Guerrie and Mark McQuade and Jacob Solawetz},
            Title = {Roboflow 100: A Rich, Multi-Domain Object Detection Benchmark},
            Year = {2022},
            Eprint = {arXiv:2211.13523},
            url = {https://arxiv.org/abs/2211.13523}
        }
        ```
We extend our gratitude to the Roboflow team and all contributors for their significant efforts in creating and maintaining the Roboflow 100 dataset as a valuable resource for the computer vision community.
If you are interested in exploring more datasets to enhance your object detection and machine learning projects, feel free to visit our comprehensive dataset collection, which includes a variety of other detection datasets.
The Roboflow 100 dataset is a benchmark for object detection models. It comprises 100 diverse datasets sourced from Roboflow Universe, covering domains like healthcare, aerial imagery, and video games. Its significance lies in providing a standardized way to test model adaptability and robustness across a wide range of real-world scenarios, moving beyond traditional, often domain-limited, benchmarks.
The Roboflow 100 dataset spans seven diverse domains, each offering unique challenges for object detection models.
This variety makes RF100 an excellent resource for assessing the generalizability of computer vision models.
When using the Roboflow 100 dataset, please cite the original paper to give credit to the creators. Here is the recommended BibTeX citation:
!!! quote ""

    === "BibTeX"

        ```bibtex
        @misc{rf100benchmark,
            Author = {Floriana Ciaglia and Francesco Saverio Zuppichini and Paul Guerrie and Mark McQuade and Jacob Solawetz},
            Title = {Roboflow 100: A Rich, Multi-Domain Object Detection Benchmark},
            Year = {2022},
            Eprint = {arXiv:2211.13523},
            url = {https://arxiv.org/abs/2211.13523}
        }
        ```
For further exploration, consider visiting our comprehensive dataset collection or browsing other detection datasets compatible with Ultralytics models.