#869 Add DagsHub Logger to Super Gradients

Merged
Ghost merged 1 commit into Deci-AI:master from timho102003:dagshub_logger
# Variable setup for shortcuts and setting the hydra output directory.
# Any SG recipe should set this yaml file as a default, after _self_, i.e. at the top of your recipe file:
#
# defaults:
#   - training_hyperparams: my_train_params
#   - dataset_params: my_dataset_params
#   - arch_params: my_arch_params
#   - checkpoint_params: my_checkpoint_params
#   - _self_
#   - variable_setup
#
# Interpolates the shortcuts defined below with their aliases (see comments near each parameter).
# When any of the shortcuts are not set, they will be populated with the original values (for example,
# config.lr will be set from config.training_hyperparams.initial_lr) for clarity in logs.
#
# In other words, the following training launch commands are equivalent:
#
# python train_from_recipe --config-name=recipe lr=0.003
#
# python train_from_recipe --config-name=recipe config.training_hyperparams.initial_lr=0.003
#
# Note that interpolation is done by triggering RecipeShortcutsCallback, which is a Hydra callback
# (see http://hydra.cc/docs/experimental/callbacks/), so these interpolations won't be present in
# other yaml configuration files.

lr: # config.training_hyperparams.initial_lr
batch_size: # config.dataset_params.train_dataloader_params.batch_size
val_batch_size: # config.dataset_params.val_dataloader_params.batch_size
ema: # config.training_hyperparams.ema
epochs: # config.training_hyperparams.max_epochs
resume: # config.training_hyperparams.resume
num_workers: # config.dataset_params.train_dataloader_params.num_workers and config.dataset_params.val_dataloader_params.num_workers
ckpt_root_dir:

# THE FOLLOWING PARAMS ARE DIRECTLY USED BY HYDRA
hydra:
  callbacks:
    shortcuts_cb:
      _target_: super_gradients.common.environment.omegaconf_utils.RecipeShortcutsCallback
  run:
    # Set the output directory (i.e. where the .hydra folder that logs all the input params will be generated)
    dir: ${hydra_output_dir:${ckpt_root_dir}, ${experiment_name}}
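
The interpolation mechanism described in the comments above can be pictured as a Hydra callback that syncs each shortcut with the full config path it aliases. The following is a hypothetical sketch of that idea, not the actual RecipeShortcutsCallback implementation from super_gradients; the class name ShortcutsCallbackSketch and the (partial) SHORTCUTS map are made up for illustration:

    # Hypothetical sketch of a shortcut-interpolating Hydra callback.
    # NOT the real RecipeShortcutsCallback; it only illustrates the mechanism.
    from typing import Any

    from hydra.experimental.callback import Callback
    from omegaconf import DictConfig, OmegaConf


    class ShortcutsCallbackSketch(Callback):
        # Each shortcut key aliases a full config path (a subset, for illustration).
        SHORTCUTS = {
            "lr": "training_hyperparams.initial_lr",
            "batch_size": "dataset_params.train_dataloader_params.batch_size",
            "epochs": "training_hyperparams.max_epochs",
        }

        def on_run_start(self, config: DictConfig, **kwargs: Any) -> None:
            for shortcut, path in self.SHORTCUTS.items():
                if config.get(shortcut) is not None:
                    # Shortcut was set (e.g. on the command line): propagate it
                    # to the full config path it aliases.
                    OmegaConf.update(config, path, config[shortcut])
                else:
                    # Shortcut left empty: mirror the original value back into
                    # the shortcut so it shows up populated in the logs.
                    OmegaConf.update(config, shortcut, OmegaConf.select(config, path))

With this pattern, launching a recipe with lr=0.003 on the command line ends up identical to setting config.training_hyperparams.initial_lr=0.003, and a shortcut left empty is backfilled so it appears populated in the logged config.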