---
title: Log and share YOLO-NAS experiments with DagsHub
description: Log YOLO-NAS experiments and trained models to DagsHub for diffing experiments, easy collaboration, versioning with DVC, and sharing.
---

# YOLO-NAS

YOLO-NAS is a new object detection model developed by the researchers at Deci, and was open-sourced as part of the SuperGradients Python package.

With the new integration between DagsHub and SuperGradients, you can now log experiments and trained models to your remote MLflow server hosted on DagsHub, diff experiments, and share them with your friends and colleagues. If you’re using DVC, this integration also lets you version the trained model with DVC and push it to your DagsHub remote storage.

All of this is encapsulated in the DagsHub Logger and requires adding only a few lines of code to your project to unlock a world of new capabilities.

Open in Colab

## How does YOLO-NAS work with DagsHub?

By setting DagsHub as the logger for your experiments, you authenticate with your DagsHub account and host your experiment results there. This process uses MLflow and the DagsHub Client to log metrics and parameters for each run. We use the built-in SuperGradients callbacks to log parameters and artifacts, such as data and trained models, using either MLflow or DVC.
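Conceptually, this is roughly what the logger does under the hood. Here is a minimal sketch, assuming the `dagshub` and `mlflow` packages are installed; the repository owner, repository name, and logged values below are placeholders:

```python
# Point MLflow at the repository's tracking server via the DagsHub client,
# then log a run. The sg_logger does the equivalent of this for you.
import dagshub
import mlflow

# Placeholder repo; dagshub.init() also handles authentication.
dagshub.init(repo_owner="your-user", repo_name="your-repo", mlflow=True)

with mlflow.start_run():
    mlflow.log_param("model", "yolo_nas_s")  # example parameter
    mlflow.log_metric("mAP50", 0.42)         # example metric
```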

## How to log experiments with YOLO-NAS and DagsHub?

The DagsHub logger is supported as a callback in SuperGradients and can be enabled by setting a few parameters in the `training_params` of the trainer module.

### Install DagsHub

- Install the DagsHub client by running the following command in your CLI:

    === "Mac-os, Linux, Windows" bash pip install dagshub

### Set DagsHub as the trainer's logger

- In your Python script, add the following parameter to the YOLO-NAS training configuration:

    ```python
    training_params["sg_logger"] = "dagshub_sg_logger"
    ```

    By setting the `sg_logger` parameter's value to `dagshub_sg_logger`, SuperGradients reroutes the experiment logs to DagsHub, which provides a remote MLflow tracking server for each repository. Additionally, by providing the repository as `<repo_owner>/<repo_name>`, DagsHub sets the specified repository as the official host of the experiments.

    If the repository does not exist, it will be created automatically on your behalf.

    On top of that, you can have the logger version the trained model with DVC by setting the `log_mlflow_only` parameter to `False`. If you’d like to run the logger as part of an automated pipeline, you can set the `dagshub_auth` parameter, which handles authentication without an interactive prompt. A complete configuration sketch follows the parameter table below.

    ??? Info "Additional parameters you can set up"

        | Parameter Name | Description |
        | --- | --- |
        | `dagshub_repository` | Format: `<dagshub_username>/<dagshub_reponame>`. Your DagsHub project, consisting of the owner name, followed by `/`, and the repo name. If left empty, you'll be prompted to fill it in manually during the run. When using `dagshub_sg_logger` in automated pipelines, specify `dagshub_repository` in `sg_logger_params` to prevent interruptions from prompts. If the repository does not exist, it will be created automatically on your behalf. |
        | `experiment_name` | Name used for logging and loading purposes. |
        | `storage_location` | If set to an S3 path (e.g. `s3://my-bucket`), saves the checkpoints in AWS S3; otherwise saves them locally. |
        | `resumed` | If `True`, old TensorBoard files will NOT be deleted when `tb_files_user_prompt=True`. |
        | `training_params` | The `training_params` for the experiment. |
        | `checkpoints_dir_path` | Local root directory path where all experiment logging directories will reside. |
        | `tb_files_user_prompt` | Whether to prompt the user before deleting old TensorBoard files. |
        | `launch_tensorboard` | Whether to launch a TensorBoard process. |
        | `tensorboard_port` | Specific port for TensorBoard to use when launched (when set to `None`, a free port is chosen). |
        | `save_checkpoints_remote` | Saves checkpoints in S3 and DagsHub. |
        | `save_tensorboard_remote` | Saves TensorBoard files in S3. |
        | `save_logs_remote` | Saves log files in S3 and DagsHub. |
        | `monitor_system` | Saves system statistics (GPU utilization, CPU, ...) in TensorBoard. |
        | `log_mlflow_only` | Skips logging to DVC and uses MLflow for all logged artifacts. |

        **Note**: you can find all the parameters in the [source code](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/common/sg_loggers/dagshub_sg_logger.py#L53)

Great job! The integration is now complete. SuperGradients will automatically detect that the DagsHub integration is active and include our hook in your pipeline, so every run will be logged to your DagsHub repository.

Here's an example notebook that can help you fine-tune YOLO-NAS on your own custom dataset with DagsHub.

## Additional Resources

### Known Issues, Limitations & Restrictions

The `super_gradients` dataloaders fail on Python versions newer than 3.9. This is because the abstract base classes (such as `Iterable`) were removed from the top-level `collections` module in Python 3.10 and are now only importable from `collections.abc`.
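If you are stuck on a newer interpreter, a common workaround is to restore the removed aliases before importing `super_gradients`. This is a hacky sketch, assuming the failure is caused only by the missing top-level names; pinning Python 3.9 is the safer option:

```python
import collections
import collections.abc

# Python 3.10 removed the ABC aliases from the top-level collections module;
# re-create them so older code importing e.g. collections.Iterable still works.
for _name in ("Iterable", "Mapping", "MutableMapping", "Sequence"):
    if not hasattr(collections, _name):
        setattr(collections, _name, getattr(collections.abc, _name))

import super_gradients  # import after patching
```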
