Baby Yoda Segmentor

Instance segmentation model for detecting the character Baby Yoda (Grogu) from the Disney TV show The Mandalorian.

Grogu Baby Yoda segmentation

Dataset

This repository depends on the baby-yoda-segmentation-dataset repository, which is maintained as a living dataset.

How to train

Locally

  1. Install the project dependencies with Poetry by running poetry install
  2. Change params, code, or data as needed
  3. Run
   dvc repro auto-train

The model will be saved in models/model.pth

Using Google Colab

  1. Open the Colab notebook (also available in the repository at src/ColabNotebook.ipynb)
  2. Follow the steps in the notebook

Using the model

Get the model

With DVC

In a DVC repository run:

dvc import https://dagshub.com/simon/baby-yoda-segmentor models/model.pth
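
If you prefer fetching the file from Python rather than from the command line, DVC's Python API can stream it directly from the repository. A minimal sketch, reusing the repo URL from the command above (the variable name weights_bytes is only for illustration):

import dvc.api

# Stream models/model.pth from the repository with the DVC Python API
with dvc.api.open(
    'models/model.pth',
    repo='https://dagshub.com/simon/baby-yoda-segmentor',
    mode='rb',
) as f:
    weights_bytes = f.read()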

Simple download

curl -O https://dagshub.com/Simon/baby-yoda-segmentor/raw/master/models/model.pth

Load and use the model

import torch
from PIL import Image
from torchvision.transforms import ToTensor

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# Load the downloaded weights; this assumes model.pth stores the full serialized model.
# If it only contains a state dict, build the architecture first and call load_state_dict.
model = torch.load("models/model.pth", map_location=device)
model.to(device)
model.eval()

img = Image.open("image.png")
img_t = ToTensor()(img)
with torch.no_grad():
    prediction = model([img_t.to(device)])
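
Assuming the model follows the output convention of torchvision's detection and instance segmentation models, i.e. a list with one dict of boxes, labels, scores, and masks per input image (the exact architecture is not stated in this README, so treat this as a sketch), the prediction can then be filtered like this:

out = prediction[0]            # results for the single input image
keep = out['scores'] > 0.5     # keep only confident detections
masks = out['masks'][keep]     # one soft mask per detected Baby Yoda instance
boxes = out['boxes'][keep]     # matching bounding boxes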

How to contribute

  1. Fork the repository

    git clone <fork-url>
    dvc pull -r origin
    
  2. Make your changes

  3. Train the model

  4. Add a local remote to push your data

    dvc remote add --local fork <dagshub-remote-url.dvc>
    # Additional commands to set up credentials should appear on your fork's homepage
    
  5. Push your code and data

    dvc push -r fork
    git add .
    git commit -m "Changes to dataset"
    git push
    
  6. Open a PR