Build, train, and fine-tune production-ready SOTA deep learning vision models
Website • User Guide • Docs • Getting Started • Pretrained Models • Community • License • Deci Platform
from super_gradients.training import models

# Load model with pretrained weights
model = models.get("yolox_s", pretrained_weights="coco")
All Computer Vision Models - Pretrained Checkpoints can be found here
Easily load and fine-tune production-ready, pre-trained SOTA models that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy. For more information on how to do this, see Getting Started
python -m super_gradients.train_from_recipe --config-name=imagenet_regnetY architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
More examples of how and why to use recipes can be found in Recipes
All SuperGradients models are production-ready in the sense that they are compatible with deployment tools such as TensorRT (NVIDIA) and OpenVINO (Intel) and can easily be taken into production. With a few lines of code you can integrate the models into your codebase.
import torch
from super_gradients.training import models

# Load model with pretrained weights
model = models.get("yolox_s", pretrained_weights="coco")

# Prepare model for conversion
# Input size is in the format [Batch x Channels x Height x Width], where 640 is the standard COCO dataset dimension
model.eval()
model.prep_model_for_conversion(input_size=[1, 3, 640, 640])

# Create dummy_input
dummy_input = torch.randn(1, 3, 640, 640)

# Convert model to onnx
torch.onnx.export(model, dummy_input, "yolox_s.onnx")
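Once exported, it can be worth sanity-checking the ONNX file before shipping it. A minimal sketch using ONNX Runtime (an assumption here; any ONNX-compatible runtime such as TensorRT or OpenVINO works just as well):

import numpy as np
import onnxruntime

# Load the exported model and run it on a random input to verify the export
session = onnxruntime.InferenceSession("yolox_s.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.randn(1, 3, 640, 640).astype(np.float32)})
print([output.shape for output in outputs])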
More information on how to take your model to production can be found in the Getting Started notebooks
pip install super-gradients
Check out the full SG release notes.
The simplest and most straightforward way to start training SOTA models is with SuperGradients' reproducible recipes. Just define your dataset path and where you want your checkpoints saved, and you are good to go from your terminal!
python -m super_gradients.train_from_recipe --config-name=imagenet_regnetY architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
Want to try our pre-trained models on your machine? Import SuperGradients, initialize your Trainer, and load your desired architecture and pre-trained weights from our SOTA model zoo.
# The pretrained_weights argument loads weights pre-trained on the specified dataset
import super_gradients
from super_gradients.training import models

model = models.get("model-name", pretrained_weights="pretrained-model-name")
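Recent SuperGradients versions also let you run inference on the loaded model directly. A hedged sketch, assuming a detection model such as yolox_s and a hypothetical local image path (the `predict` API and its exact signature may vary by version):

# Run inference on a local image and visualize the result
model = models.get("yolox_s", pretrained_weights="coco")
predictions = model.predict("path/to/image.jpg")
predictions.show()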
Knowledge Distillation is a training technique that uses a large model (the teacher) to improve the performance of a smaller model (the student). Learn more about SuperGradients' knowledge-distillation training in our example notebook on Google Colab, which distills a pre-trained BEiT-base teacher into a ResNet18 student on CIFAR10 — an easy-to-follow tutorial running on free GPU hardware.
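The core idea is straightforward to express in PyTorch. Below is a minimal sketch of a standard distillation loss, independent of SuperGradients' own training loop; the temperature and weighting values are illustrative assumptions:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard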
To train a model, it is necessary to configure four main components: dataset, architecture, training, and checkpoint parameters. These components are aggregated into a single "main" recipe .yaml file that inherits the aforementioned dataset, architecture, training, and checkpoint params. It is also possible (and recommended for flexibility) to override the default settings with custom ones.
All recipes can be found here
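For illustration, here is a hedged sketch of launching a recipe programmatically instead of through the CLI. It assumes the recipe .yaml files are available locally under recipes/ and that Trainer.train_from_config (the entry point used internally by train_from_recipe) is stable in your version:

import hydra
from omegaconf import DictConfig
from super_gradients import Trainer, init_trainer

@hydra.main(config_path="recipes", config_name="imagenet_regnetY")
def run(cfg: DictConfig) -> None:
    # Train from the composed recipe, as train_from_recipe does internally
    Trainer.train_from_config(cfg)

if __name__ == "__main__":
    init_trainer()
    run()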
from super_gradients import init_trainer
from super_gradients.common import MultiGPUMode
from super_gradients.training.utils.distributed_training_utils import setup_gpu_mode

# Initialize the environment
init_trainer()

# Launch DDP on a single node with 4 GPUs
setup_gpu_mode(gpu_mode=MultiGPUMode.DISTRIBUTED_DATA_PARALLEL, num_gpus=4)

# Define your objects (model, loaders, trainer) as usual from here on;
# the Trainer will run in DDP mode with nothing else to change
from super_gradients.training import models
# instantiate default pretrained resnet18
default_resnet18 = models.get(name="resnet18", num_classes=100, pretrained_weights="imagenet")
# instantiate pretrained resnet18, turning DropPath on with probability 0.5
droppath_resnet18 = models.get(name="resnet18", arch_params={"droppath_prob": 0.5}, num_classes=100, pretrained_weights="imagenet")
# instantiate pretrained resnet18, without classifier head. Output will be from the last stage before global pooling
backbone_resnet18 = models.get(name="resnet18", arch_params={"backbone_mode": True}, pretrained_weights="imagenet")
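With backbone_mode enabled, the model can be used directly as a feature extractor. A small sketch of that usage; the printed shape is an assumption for a 224x224 input to ResNet18 (512 channels at stride 32):

import torch

# Feed a dummy batch through the headless model defined above
features = backbone_resnet18(torch.randn(1, 3, 224, 224))
print(features.shape)  # expected: torch.Size([1, 512, 7, 7])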
from super_gradients import Trainer
from torch.optim.lr_scheduler import ReduceLROnPlateau
from super_gradients.training.utils.callbacks import Phase, LRSchedulerCallback
from super_gradients.training.metrics.classification_metrics import Accuracy
# define PyTorch train and validation loaders and optimizer
# define the scheduler that will be invoked by the callback
rop_lr_scheduler = ReduceLROnPlateau(optimizer, mode="max", patience=10, verbose=True)
# define phase callbacks; they will fire at the point in training defined by Phase
phase_callbacks = [LRSchedulerCallback(scheduler=rop_lr_scheduler,
phase=Phase.VALIDATION_EPOCH_END,
metric_name="Accuracy")]
# create a trainer object; see the class declaration for more parameters
trainer = Trainer("experiment_name")
# define phase_callbacks as part of the training parameters
train_params = {"phase_callbacks": phase_callbacks}
from super_gradients import Trainer

# create a trainer object; see the class declaration for more parameters
trainer = Trainer("experiment_name")

train_params = {
    # ... training parameters ...
    "sg_logger": "wandb_sg_logger",  # Weights&Biases logger, see the WandBSGLogger class for details
    "sg_logger_params": {  # parameters that will be passed to the logger's __init__
        "project_name": "project_name",  # W&B project name
        "save_checkpoints_remote": True,
        "save_tensorboard_remote": True,
        "save_logs_remote": True,
    },
}
pip install git+https://github.com/Deci-AI/super-gradients.git@stable
A detailed list can be found here
Check SuperGradients Docs for full documentation, user guide, and examples.
To learn about making a contribution to SuperGradients, please see our Contribution page.
Our awesome contributors:
Made with contrib.rocks.
If you are using the SuperGradients library or benchmarks in your research, please cite the SuperGradients deep learning training library.
If you want to be part of the growing SuperGradients community, hear about all the exciting news and updates, get help, request advanced features, or file a bug or issue report, we would love to welcome you aboard!
Slack is the place to ask questions about SuperGradients and get support. Click here to join our Slack
To report a bug, file an issue on GitHub.
Join the SG Newsletter to stay up to date with new features and models, important announcements, and upcoming events.
For a short meeting with us, use this link and choose your preferred time.
This project is released under the Apache 2.0 license.
Deci Platform is our end-to-end platform for building, optimizing, and deploying deep learning models to production.
Request a free trial to enjoy immediate improvements in throughput, latency, memory footprint, and model size.
Features:
Request a free trial here