This tutorial briefly covers the basic functionality of SuperGradients. Go over the following sections to learn how to train, test, and predict using SuperGradients, then check out our extended tutorials on its various features and our task-specific guides.
from super_gradients import init_trainer
from super_gradients.common.object_names import Models
from super_gradients.training import Trainer, models
from super_gradients.training.metrics.classification_metrics import Accuracy, Top5
from super_gradients.training.dataloaders.dataloaders import cifar10_train, cifar10_val
from super_gradients.training.utils.distributed_training_utils import setup_device
Call init_trainer() to initialize the super_gradients environment. This should be the first thing called by any code running super_gradients:
init_trainer()
To run on CPU, set the device explicitly:
setup_device("cpu")
In case multiple GPUs are available, it is also possible to specify the number of GPUs to launch multi-gpu DDP training:
setup_device(num_gpus=4)
It is also possible to launch training on whatever hardware is available (i.e., if there are 4 GPUs available, DDP training with four processes will be launched) by passing num_gpus=-1:
setup_device(num_gpus=-1)
Instantiate a Trainer; checkpoints and logs will be saved under ckpt_root_dir/experiment_name. Then instantiate a ResNet18 model with a 10-class head for CIFAR-10:
trainer = Trainer(experiment_name="my_cifar_experiment", ckpt_root_dir="/path/to/checkpoints_directory/")
model = models.get(Models.RESNET18, num_classes=10)
training_params = {
    "max_epochs": 20,
    "initial_lr": 0.1,
    "loss": "CrossEntropyLoss",
    "train_metrics_list": [Accuracy(), Top5()],
    "valid_metrics_list": [Accuracy(), Top5()],
    "metric_to_watch": "Accuracy",
    "greater_metric_to_watch_is_better": True,
}
Finally, define the CIFAR-10 dataloaders and launch training:
train_loader = cifar10_train()
valid_loader = cifar10_val()
trainer.train(model=model, training_params=training_params, train_loader=train_loader, valid_loader=valid_loader)
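Training writes checkpoints under ckpt_root_dir/experiment_name; the checkpoint that scores best on metric_to_watch is saved as ckpt_best.pth, which is the file loaded in the test section below.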
Next, evaluate the model we just trained. Start again with the imports:
from super_gradients import init_trainer
from super_gradients.common.object_names import Models
from super_gradients.training import Trainer, models
from super_gradients.training.metrics.classification_metrics import Accuracy, Top5
from super_gradients.training.dataloaders.dataloaders import cifar10_val
from super_gradients.training.utils.distributed_training_utils import setup_device
Call init_trainer() to initialize the super_gradients environment. This should be the first thing called by any code running super_gradients:
init_trainer()
To run on CPU, set the device explicitly:
setup_device("cpu")
In case multiple GPUs are available, it is also possible to specify the number of GPUs to launch a multi-gpu DDP test:
setup_device(num_gpus=4)
It is also possible to launch the test on whatever hardware is available (i.e., if there are 4 GPUs available, a DDP test with four processes will be launched) by passing num_gpus=-1:
setup_device(num_gpus=-1)
trainer = Trainer(experiment_name="test_my_cifar_experiment", ckpt_root_dir="/path/to/checkpoints_directory/")
model = models.get(Models.RESNET18, num_classes=10, checkpoint_path="/path/to/checkpoints_directory/my_cifar_experiment/ckpt_best.pth")
Define the test metrics and dataloader, then run the evaluation; trainer.test() returns a dictionary of the computed metrics:
test_metrics = [Accuracy(), Top5()]
test_data_loader = cifar10_val()
test_results = trainer.test(model=model, test_loader=test_data_loader, test_metrics_list=test_metrics)
print(f"Test results: Accuracy: {test_results['Accuracy']}, Top5: {test_results['Top5']}")
You can also start from a model pre-trained on ImageNet and finetune it. As before, begin with the imports:
from super_gradients import Trainer, init_trainer
from super_gradients.common.object_names import Models
from super_gradients.training import models
from super_gradients.training.metrics.classification_metrics import Accuracy, Top5
from super_gradients.training.dataloaders.dataloaders import cifar10_train, cifar10_val
from super_gradients.training.utils.distributed_training_utils import setup_device
Call init_trainer() to initialize the super_gradients environment. This should be the first thing called by any code running super_gradients:
init_trainer()
To run on CPU, set the device explicitly:
setup_device("cpu")
In case multiple GPUs are available, it is also possible to specify the number of GPUs to launch multi-gpu DDP finetuning/test:
setup_device(num_gpus=4)
It is also possible to launch the finetuning/test on whatever hardware is available (i.e., if there are 4 GPUs available, a DDP finetuning/test with four processes will be launched) by passing num_gpus=-1:
setup_device(num_gpus=-1)
Instantiate a model with pre-trained ImageNet weights:
model = models.get(Models.RESNET18, num_classes=10, pretrained_weights="imagenet")
Or use your local weights to instantiate a pre-trained model (checkpoint_num_classes is the number of classes the checkpoint was trained on, here ImageNet's 1000):
model = models.get(Models.RESNET18, num_classes=10, checkpoint_path="/path/to/imagenet_checkpoint.pth", checkpoint_num_classes=1000)
Finetune or test your pre-trained model as done in the previous sections; a minimal finetuning sketch follows below.
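For concreteness, here is a minimal finetuning sketch reusing the exact API from the training section above; the experiment name and hyperparameter values are illustrative placeholders, not recommended settings:
trainer = Trainer(experiment_name="finetune_my_cifar_experiment", ckpt_root_dir="/path/to/checkpoints_directory/")
training_params = {
    "max_epochs": 5,     # short finetuning schedule (illustrative)
    "initial_lr": 0.01,  # smaller LR than training from scratch (illustrative)
    "loss": "CrossEntropyLoss",
    "train_metrics_list": [Accuracy(), Top5()],
    "valid_metrics_list": [Accuracy(), Top5()],
    "metric_to_watch": "Accuracy",
    "greater_metric_to_watch_is_better": True,
}
trainer.train(model=model, training_params=training_params, train_loader=cifar10_train(), valid_loader=cifar10_val())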
Finally, let's run inference on a single image with the model we trained:
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import requests
import torch
import torchvision.transforms as T
from super_gradients import init_trainer
from super_gradients.common.object_names import Models
from super_gradients.training import models
from super_gradients.training.utils.distributed_training_utils import setup_device
Call init_trainer() to initialize the super_gradients environment. This should be the first thing called by any code running super_gradients:
init_trainer()
To run on CPU, set the device explicitly:
setup_device("cpu")
Load the best model that we trained and put it in eval mode:
best_model = models.get(Models.RESNET18, num_classes=10, checkpoint_path="/path/to/checkpoints_directory/my_cifar_experiment/ckpt_best.pth")
best_model.eval()
url = "https://www.aquariumofpacific.org/images/exhibits/Magnificent_Tree_Frog_900.jpg"
image = np.array(Image.open(requests.get(url, stream=True).raw))
transforms = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2023, 0.1994, 0.2010)),  # CIFAR-10 statistics
    T.Resize((32, 32)),
])
input_tensor = transforms(image).unsqueeze(0).to(next(best_model.parameters()).device)
with torch.no_grad():  # inference only, no gradients needed
    predictions = best_model(input_tensor)
# CIFAR-10 class names, in label order
classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
plt.xlabel(classes[torch.argmax(predictions)])
plt.imshow(image)
plt.show()
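If you also want class confidences rather than just the top prediction, you can apply a softmax to the logits (a minimal sketch using the torch import above):
probs = torch.nn.functional.softmax(predictions, dim=1)  # logits -> probabilities
top5_probs, top5_ids = probs.topk(5, dim=1)              # five most likely classes
for p, i in zip(top5_probs[0], top5_ids[0]):
    print(f"{classes[i]}: {p.item():.3f}")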
You can also install SuperGradients from source and train with one of the predefined recipes. Clone the SG repo:
git clone https://github.com/Deci-AI/super-gradients
Move to the root of the cloned project (where "requirements.txt" and "setup.py" are located) and install super-gradients:
pip install -e .
Append super-gradients to the Python path to avoid conflicts with any installed version of SG (replace "YOUR-LOCAL-PATH" with the path to the cloned repo):
export PYTHONPATH=$PYTHONPATH:<YOUR-LOCAL-PATH>/super-gradients/
Then launch training from one of the predefined recipes, e.g., ResNet18 on CIFAR-10:
python -m super_gradients.train_from_recipe --config-name=cifar10_resnet experiment_name=my_resnet18_cifar10_experiment
Learn how to launch, customize, and evaluate training recipes in detail in our training with configuration files tutorial.
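Because recipes are Hydra configuration files, individual fields can be overridden from the command line in the same way experiment_name is overridden above. The exact field names depend on the recipe; training_hyperparams.max_epochs below is an assumed example:
python -m super_gradients.train_from_recipe --config-name=cifar10_resnet experiment_name=my_resnet18_cifar10_experiment training_hyperparams.max_epochs=30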