Weights and Biases

The W&B client is an open source library and CLI (wandb) for organizing and analyzing your machine learning experiments. Think of it as a framework-agnostic lightweight TensorBoard that persists additional information such as the state of your code, system metrics, and configuration parameters.

Features

  • Store config parameters used in a training run
  • Associate version control with your training runs
  • Search, compare, and visualize training runs
  • Analyze system usage metrics alongside runs
  • Collaborate with team members
  • Run parameter sweeps
  • Persist runs forever

Quickstart

pip install wandb

In your training script:

import wandb
# Your custom arguments defined here
args = ...

run = wandb.init(config=args)
run.config["more"] = "custom"

def training_loop():
    while True:
        # Do some machine learning
        epoch, loss, val_loss = ...
        # Framework agnostic / custom metrics
        wandb.log({"epoch": epoch, "loss": loss, "val_loss": val_loss})
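The snippet above uses placeholders for the arguments and metrics. As a minimal, self-contained sketch of the same logging pattern, here is a loop with a stand-in `log` function in place of `wandb.log` so it runs without an account; the metric names and decay factors are invented purely for illustration:

```python
# Stand-in for wandb.log: record one dict of metrics per step.
# The dict-of-metrics shape is the same as what wandb.log expects.
history = []

def log(metrics):
    history.append(dict(metrics))

def training_loop(epochs=3):
    loss, val_loss = 1.0, 1.2
    for epoch in range(epochs):
        # Pretend training: losses shrink each epoch
        loss *= 0.5
        val_loss *= 0.6
        log({"epoch": epoch, "loss": loss, "val_loss": val_loss})

training_loop()
```

Swapping `log` back for `wandb.log` (and removing the `history` list) recovers the real quickstart script.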

Running your script

From the directory of your training script, run wandb init to initialize a new project. If it's your first time using wandb on the machine, it will prompt you for an API key: create an account at wandb.com and you can find one on your profile page. You can check the wandb/settings file into version control to share your project with other users. You can also set the username and API key through environment variables if you don't have easy access to a shell.
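For the environment-variable route, a minimal sketch (the values are placeholders; `WANDB_USERNAME` and `WANDB_API_KEY` are assumed to be the variables the client reads in this era of the library — check `wandb --help` or the documentation for the authoritative names):

```shell
# Placeholder credentials for illustration only; substitute your own.
export WANDB_USERNAME="example-user"
export WANDB_API_KEY="example-api-key"

# With these set, the client can authenticate without an
# interactive prompt, e.g. in CI or on a remote machine.
```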

Run your script with python my_script.py and all metadata will be synced to the cloud. Data is staged locally in a directory named wandb relative to your script. If you want to test your script without syncing to the cloud, run wandb off.


Detailed Usage

Framework-specific and detailed usage can be found in our documentation.


About

🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
