Weights and Biases

Use W&B to organize and analyze machine learning experiments. It's framework-agnostic and lighter than TensorBoard. Each time you run a script instrumented with wandb, we save your hyperparameters and output metrics. Visualize models over the course of training, and compare versions of your models easily. We also automatically track the state of your code, system metrics, and configuration parameters.

Sign up for a free account →

Features

  • Store hyper-parameters used in a training run
  • Search, compare, and visualize training runs
  • Analyze system usage metrics alongside runs
  • Collaborate with team members
  • Replicate historic results
  • Run parameter sweeps
  • Keep records of experiments available forever

Documentation →

Quickstart

pip install wandb

In your training script:

import wandb
# Your custom arguments defined here
args = ...

wandb.init(config=args, project="my-project")
wandb.config["more"] = "custom"

def training_loop():
    while True:
        # Do some machine learning
        epoch, loss, val_loss = ...
        # Framework agnostic / custom metrics
        wandb.log({"epoch": epoch, "loss": loss, "val_loss": val_loss})

If you're already using TensorBoard or tensorboardX, you can integrate with one line:

wandb.init(sync_tensorboard=True)
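
As a rough sketch of what that looks like end to end (assuming tensorboardX is installed; the scalar values and project name are placeholders), anything written through the summary writer is then mirrored to W&B:

# Sketch: log scalars through tensorboardX and let wandb pick them up.
import wandb
from tensorboardX import SummaryWriter

wandb.init(sync_tensorboard=True, project="my-project")
writer = SummaryWriter()

for step in range(100):
    writer.add_scalar("loss", 1.0 / (step + 1), step)  # placeholder value

writer.close()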

Running your script

Run wandb login from your terminal to sign up or authenticate your machine (we store your API key in ~/.netrc). You can also set the WANDB_API_KEY environment variable with a key from your settings.

Run your script with python my_script.py and all metadata will be synced to the cloud. You will see a URL in your terminal logs when your script starts and finishes. Data is staged locally in a directory named wandb relative to your script. If you want to test your script without syncing to the cloud, you can set the environment variable WANDB_MODE=dryrun.
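
One way to do that from inside the script itself (a sketch; setting the variable in your shell before launching works just as well, and "my-project" is only an example name) is to set the environment variable before wandb is initialized:

# Sketch: disable cloud sync for a local test run. Equivalent to
# running `WANDB_MODE=dryrun python my_script.py` from the shell.
import os
os.environ["WANDB_MODE"] = "dryrun"

import wandb
wandb.init(project="my-project")
wandb.log({"smoke_test_metric": 1.0})  # staged locally, nothing uploaded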

If you are using Docker to run your code, we provide a wrapper command, wandb docker, that mounts your current directory, sets environment variables, and ensures the wandb library is installed. Training your models in Docker lets you restore the exact code and environment with the wandb restore command.

Web Interface

Sign up for a free account →

Introduction video →

Detailed Usage

Framework specific and detailed usage can be found in our documentation.

Testing

To run the tests, use pytest tests. If you want a simple mock of the wandb backend and cloud storage, you can use the mock_server fixture; see tests/test_cli.py for examples.
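
For a quick, network-free sanity check outside of the mock server setup, something like the following works (a sketch only: the test name, project name, and metric are made up, and the fixtures in tests/test_cli.py remain the better reference for real backend mocking):

# Sketch of a network-free smoke test: force dryrun mode so nothing
# is synced to the cloud. Names here are illustrative, not from the suite.
import wandb

def test_dryrun_logging(monkeypatch, tmp_path):
    monkeypatch.setenv("WANDB_MODE", "dryrun")  # stage locally, no cloud sync
    monkeypatch.chdir(tmp_path)                 # keep the wandb/ dir out of the repo
    run = wandb.init(project="smoke-test")
    wandb.log({"loss": 0.0})
    assert run is not None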

We use CircleCI and AppVeyor for CI.


About

🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
