SuperGradients automatically logs multiple files locally that can help you explore your experiment results: one Tensorboard file and three .txt files.
For a deeper dive into checkpoints, visit our detailed guide.
To easily keep track of your experiments, SuperGradients saves your results in the events.out.tfevents format, which can be visualized with Tensorboard.
What does it include? The Tensorboard includes all of your training and validation metrics, as well as other information such as the learning rate, system metrics (CPU, GPU, ...), and more.
Where is it saved? <ckpt_root_dir>/<experiment_name>/<run_dir>/events.out.tfevents.<unique_id>
How to launch? tensorboard --logdir <ckpt_root_dir>/<experiment_name>/<run_dir>
In case you cannot launch a tensorboard instance, you can still find a summary of your experiment saved in a readable .txt format.
What does it include? The experiment configuration and training/validation metrics.
Where is it saved? <ckpt_root_dir>/<experiment_name>/<run_dir>/experiment_logs_<date>.txt
For better debugging and understanding of past runs, SuperGradients gathers all the print statements and logs into a local file, providing you the convenience to review console outputs of any experiment at any time.
What does it include? All the prints and logs that were displayed on the console, but not the filtered logs.
Where is it saved? By default, ~/sg_logs/console.log. Once you instantiate super_gradients.Trainer, all console outputs and logs are redirected to the experiment folder <ckpt_root_dir>/<experiment_name>/<run_dir>/console_<date>.txt
How to set log level? You can filter the logs displayed on the console by setting the environment variable CONSOLE_LOG_LEVEL=<LOG-LEVEL> # DEBUG/INFO/WARNING/ERROR
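The filtering behavior can be sketched with Python's standard logging module (a generic illustration, not SuperGradients internals): a handler set to WARNING drops every message below that level, just like CONSOLE_LOG_LEVEL=WARNING would for the console.

```python
import io
import logging

# Simulate CONSOLE_LOG_LEVEL=WARNING with a plain logging handler.
stream = io.StringIO()          # stand-in for the console
logger = logging.getLogger("console_demo")
logger.setLevel(logging.DEBUG)  # the logger itself lets everything through
handler = logging.StreamHandler(stream)
handler.setLevel(logging.WARNING)  # the handler filters what reaches the "console"
logger.addHandler(handler)

logger.info("filtered: below WARNING")
logger.warning("displayed: at or above WARNING")

print(stream.getvalue())
```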
Contrary to console logging, logger logging is restricted to logger messages (logger.log, logger.info, ...). This means that it includes any log emitted below the console logging level (logging.DEBUG, for instance), but not the prints.
What does it include? Anything logged with a logger (logger.log, logger.info, ...), including the filtered logs.
Where is it saved? <ckpt_root_dir>/<experiment_name>/<run_dir>/logs_<date>.txt
How to set log level? You can filter the logs saved in the file by setting the environment variable FILE_LOG_LEVEL=<LOG-LEVEL> # DEBUG/INFO/WARNING/ERROR
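The split between console output and logger logs can be sketched with the standard logging module (a generic illustration, not SuperGradients' actual handlers): a FileHandler captures logger calls, while print statements never reach it.

```python
import logging
import os
import tempfile

# Simulate FILE_LOG_LEVEL=DEBUG: the file handler keeps every logger message.
log_path = os.path.join(tempfile.mkdtemp(), "logs_demo.txt")
logger = logging.getLogger("file_demo")
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler(log_path)
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

print("a print: console only, never written to the log file")
logger.debug("a debug message: saved to the file even if filtered from the console")

file_handler.close()
with open(log_path) as f:
    saved = f.read()
```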
Only available when training from a Hydra recipe.
What does it include?
<ckpt_root_dir>/<experiment_name>/<run_dir>/
└─ .hydra
├─config.yaml # A single config file that regroups the config files used to run the experiment
├─hydra.yaml # Some Hydra metadata
└─overrides.yaml # Any override passed after --config-name=<config-name>
<ckpt_root_dir>/<experiment_name>/<run_dir>/
├─ ... (all the model checkpoints)
├─ events.out.tfevents.<unique_id> # Tensorboard artifact
├─ experiment_logs_<date>.txt # Config and metrics related to experiment
├─ console_<date>.txt            # Logs and prints that were displayed in the user's console
├─ logs_<date>.txt # Every log
└─ .hydra # (Additional) If experiment launched from a recipe:
├─config.yaml # A single config file that regroups the config files used to run the experiment
├─hydra.yaml # Some Hydra metadata
└─overrides.yaml # Any override passed after --config-name=<config-name>
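As a quick sanity check, the layout above can be scanned with a small helper (list_run_artifacts is a hypothetical name; the glob patterns follow the file names listed above):

```python
from pathlib import Path

def list_run_artifacts(run_dir: str) -> dict:
    """Report which logging artifacts exist in a run directory.

    Hypothetical helper; the file-name patterns match the layout above.
    """
    run = Path(run_dir)
    return {
        "tensorboard": sorted(p.name for p in run.glob("events.out.tfevents.*")),
        "experiment_logs": sorted(p.name for p in run.glob("experiment_logs_*.txt")),
        "console_logs": sorted(p.name for p in run.glob("console_*.txt")),
        "logger_logs": sorted(p.name for p in run.glob("logs_*.txt")),
        "hydra_config": (run / ".hydra" / "config.yaml").exists(),
    }
```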
SuperGradients automatically checks compatibility between the installed libraries and the required ones. It will log an error - but not stop the run - for each library installed with a version lower than required. For libraries with a version higher than required, this information is only logged at the DEBUG level.
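The decision logic can be sketched as follows (a simplified illustration using the standard importlib.metadata, not SuperGradients' actual code; a real implementation should compare versions with packaging.version rather than naive tuples):

```python
from importlib.metadata import version, PackageNotFoundError

def check_requirement(package: str, required: str) -> str:
    """Return the log level to use for a dependency version check.

    Hypothetical helper mirroring the behavior described above:
    lower than required -> 'ERROR', otherwise -> 'DEBUG', absent -> 'MISSING'.
    """
    try:
        installed = version(package)
    except PackageNotFoundError:
        return "MISSING"

    def to_tuple(v: str) -> tuple:
        # Naive numeric comparison; ignores pre-release suffixes like 'rc1'.
        return tuple(int(x) for x in v.split(".")[:3] if x.isdigit())

    if to_tuple(installed) < to_tuple(required):
        return "ERROR"   # installed version lower than required: log an error
    return "DEBUG"       # equal or higher: only logged at DEBUG level
```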
It can sometimes be very time-consuming to debug an exception when the error raised is not explicit. To help with this, SuperGradients implements a Crash Tip system that decorates errors raised by different libraries to help you fix the issue.
Example
The error raised by Hydra when you make an indentation error is hard to understand (see topmost RuntimeError).
Under the exception, SuperGradients prints a Crash Tip that explains what went wrong, and how to fix it.
The number of crash tips is limited to cases that were faced by the community, so if you encounter an exception that is hard to understand, feel free to share it with us!
How to disable? The Crash Tip can be disabled by setting the environment variable CRASH_HANDLER=FALSE