
SuperGradients

Introduction

Welcome to SuperGradients, a free, open-source training acceleration library for PyTorch-based deep learning models. SuperGradients allows you to train models for any computer vision task or import pre-trained SOTA models for tasks such as object detection, image classification, and semantic segmentation, for image or video use cases.

Whether you are a beginner or an expert, it is likely that you already have your own training script, model, loss function implementation, etc. In this notebook we present the modifications needed to launch your training so you can benefit from the various tools SuperGradients has to offer:

  • Train models for any computer vision task or import pre-trained SOTA models (detection, segmentation, and classification - YOLOv5, DDRNet, EfficientNet, RegNet, ResNet, MobileNet, etc.)
  • Shorten the training process using tested and proven recipes & code examples
  • Easily configure your own or use plug-and-play training, dataset, and architecture parameters
  • Save time and easily integrate SuperGradients into your codebase
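For orientation, the kind of modification involved can be outlined in pseudocode. This is a hedged sketch, not the library's documented API: the names `SgModel`, `connect_dataset_interface`, `build_model`, and `train` are assumptions based on early SuperGradients releases and may differ in your installed version - consult the current docs for the exact interface.

```python
# Pseudocode sketch -- class/method names (SgModel, build_model, train)
# are assumptions; check the SuperGradients docs for the exact API.
from super_gradients.training import SgModel

sg_model = SgModel(experiment_name="my_resnet18_run")
sg_model.connect_dataset_interface(my_dataset_interface)  # your existing dataset, wrapped
sg_model.build_model("resnet18")                          # or pass your own nn.Module
sg_model.train(training_params={"max_epochs": 20,
                                "loss": "cross_entropy"})  # your loss/params plug in here
```

The point is that your existing dataset, model, and loss slot into the trainer's parameters rather than requiring a rewrite of your training loop.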


Getting Started

Quick Start Notebook

Get started with our quick start notebook on Google Colab for a quick and easy start using free GPU hardware.

SuperGradients Quick Start in Google Colab · Download notebook · View source on GitHub


SuperGradients Walkthrough Notebook

Learn more about SuperGradients' training components with our walkthrough notebook on Google Colab, an easy-to-use tutorial running on free GPU hardware.

SuperGradients Walkthrough in Google Colab · Download notebook · View source on GitHub


Installation Methods

Quick Installation of stable version

Not yet available on PyPI

  pip install super-gradients

That's it!

Installing from GitHub

pip install git+https://github.com/Deci-AI/super-gradients.git@stable

Computer Vision Models' Pretrained Checkpoints

Pretrained Classification PyTorch Checkpoints

** TODO - ADD HERE EFFICIENCY FRONTIER CLASSIFICATION MODELS GRAPH FOR LATENCY **
| Model | Dataset | Resolution | Top-1 | Top-5 | Latency (b1, T4) | Throughput (b1, T4) |
|-------|---------|------------|-------|-------|------------------|---------------------|
| EfficientNet B0 | ImageNet | 224x224 | 77.62 | 93.49 | 1.16ms | 862fps |
| RegNetY200 | ImageNet | 224x224 | 70.88 | 89.35 | - | - |
| RegNetY400 | ImageNet | 224x224 | 74.74 | 91.46 | - | - |
| RegNetY600 | ImageNet | 224x224 | 76.18 | 92.34 | - | - |
| RegNetY800 | ImageNet | 224x224 | 77.07 | 93.26 | - | - |
| ResNet18 | ImageNet | 224x224 | 70.6 | 89.64 | 0.599ms | 1669fps |
| ResNet34 | ImageNet | 224x224 | 74.13 | 91.7 | 0.89ms | 1123fps |
| ResNet50 | ImageNet | 224x224 | 76.3 | 93.0 | 0.94ms | 1063fps |
| MobileNetV3_large (150 epochs) | ImageNet | 224x224 | 73.79 | 91.54 | 0.87ms | 1149fps |
| MobileNetV3_large (300 epochs) | ImageNet | 224x224 | 74.52 | 91.92 | 0.87ms | 1149fps |
| MobileNetV3_small | ImageNet | 224x224 | 67.45 | 87.47 | 0.75ms | 1333fps |
| MobileNetV2_w1 | ImageNet | 224x224 | 73.08 | 91.1 | 0.58ms | 1724fps |
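A quick sanity check on how to read the latency and throughput columns: at batch size 1, the reported throughput is simply the reciprocal of the per-image latency (1000 ms per second divided by the latency in ms, truncated to an integer). This snippet is pure Python and needs no SuperGradients install:

```python
# Batch-1 throughput is the reciprocal of per-image latency:
# fps = 1000 ms/s / latency_ms, truncated to an integer.
def fps_from_latency(latency_ms):
    """Convert a per-image latency in milliseconds to frames per second."""
    return int(1000 / latency_ms)

# Spot-check against the classification table (latency_ms -> reported fps).
for latency_ms, reported_fps in [(1.16, 862), (0.599, 1669), (0.94, 1063), (0.58, 1724)]:
    assert fps_from_latency(latency_ms) == reported_fps
```

This only holds for batch size 1; for batched columns (e.g. b64), throughput is measured over the whole batch and is not the simple reciprocal of the latency.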

Pretrained Object Detection PyTorch Checkpoints

** TODO - ADD HERE THE EFFICIENCY FRONTIER OBJECT-DETECTION MODELS GRAPH FOR LATENCY **
| Model | Dataset | Resolution | mAP<sup>val</sup> 0.5:0.95 | Latency (b1, T4) | Throughput (b64, T4) |
|-------|---------|------------|----------------------------|------------------|----------------------|
| YOLOv5 small | COCO | 640x640 | 37.3 | 10.09ms | 101.85fps |
| YOLOv5 medium | COCO | 640x640 | 45.2 | 17.55ms | 57.66fps |

Pretrained Semantic Segmentation PyTorch Checkpoints

** TODO - ADD HERE THE EFFICIENCY FRONTIER SEMANTIC-SEGMENTATION MODELS GRAPH FOR LATENCY **
| Model | Dataset | Resolution | mIoU | Latency (T4) | Throughput (T4) |
|-------|---------|------------|------|--------------|-----------------|
| DDRNet23 | Cityscapes | 1024x2048 | 78.65 | - | - |
| DDRNet23 slim | Cityscapes | 1024x2048 | 76.6 | - | - |

Contributing

To learn about making a contribution to SuperGradients, please see our Contribution page.

Citation

If you use the SuperGradients library or benchmark in your research, please cite the SuperGradients deep learning training library.

Community

If you want to be part of the growing SuperGradients community, hear about all the exciting news and updates, get help, request advanced features, or file a bug or issue report, we would love to welcome you aboard!

License

This project is released under the Apache 2.0 license.
