Savta Depth - Monocular Depth Estimation OSDS Project

Savta Depth is a collaborative Open Source Data Science project for monocular depth estimation.

Here you will find the code for the project, but also the data, models, pipelines and experiments. This means that the project is easily reproducible on any machine, but also that you can contribute data, models, and code to it.

Have a great idea for how to improve the model? Want to add data and metrics to make it more explainable/fair? We'd love to get your help.

Demo

Open In Colab

You can use this notebook to load a model from the project and run it on an image you upload to get its depth map. Once the depth map has been saved, you can download it and use it on platforms that support it (e.g. Facebook) to create 3D photos.
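
If you'd rather see the gist outside the notebook, here is a minimal sketch of that flow, assuming a plain PyTorch checkpoint under models/ and NYU-sized (640×480) inputs; the exact paths, model loading, and preprocessing live in the notebook and may differ.

    # Hedged sketch of the demo flow; the notebook is the source of truth.
    # The checkpoint path, input size, and preprocessing are assumptions.
    import numpy as np
    import torch
    import torchvision.transforms as T
    from PIL import Image

    model = torch.load("models/model.pth", map_location="cpu")  # assumed path
    model.eval()

    img = Image.open("my_photo.jpg").convert("RGB")
    x = T.Compose([T.Resize((480, 640)), T.ToTensor()])(img).unsqueeze(0)

    with torch.no_grad():
        depth = model(x).squeeze().cpu().numpy()  # H x W depth map

    # Scale to 0-255 and save as a grayscale image you can download.
    depth = 255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    Image.fromarray(depth.astype(np.uint8)).save("depth_map.png")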

Gradio Web App

SavtaDepth Gradio Web App

The web application was developed with the Gradio web framework, which comes with powerful features and a user-friendly interface. You can upload an image or use one of the examples, then submit to see the result. If you see wrong results, please flag them so we can improve the model.

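For orientation, a stripped-down version of such an app might look like the sketch below. This is hypothetical, not the project's actual app/app_savta.py; predict_depth is a stand-in for the real model call.

    # Hypothetical minimal Gradio app; app/app_savta.py is the real one.
    import gradio as gr

    def predict_depth(image):
        # Stand-in: the real function runs the depth model on `image`
        # and returns the predicted depth map.
        return image

    demo = gr.Interface(
        fn=predict_depth,
        inputs=gr.Image(type="pil"),
        outputs=gr.Image(type="pil"),
        title="SavtaDepth",
        allow_flagging="manual",  # adds a Flag button for wrong results
    )

    demo.launch()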

Deploy to Hugging Face Spaces

Before you deploy the app to Hugging Face Spaces, you need to follow a few steps.

  1. Create an app by clicking your profile picture on the top right and then selecting New Space.

  2. Add the name of the app, license information, and select the framework.

  3. Add the Hugging Face remote: git remote add gradio https://huggingface.co/spaces/kingabzpro/savtadepth

  4. Add metadata at the top of your README.md. The metadata contains the app_file path, license information, and customization options.

---
title: SavtaDepth
emoji: 🛋
colorFrom: gray
colorTo: yellow
sdk: gradio
app_file: app/app_savta.py
pinned: false
license: mit
---
  5. Rename gradio_requirements.txt to requirements.txt.

  6. The app uses Hugging Face's flagging function, which requires a Hugging Face token. Add the token as a repo secret under Settings; a sketch of how the app can read it follows these steps.

  7. The last part is pushing the code:
    $ git add .
    $ git commit -m "HF deployment"
    $ git push gradio main -f
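
As mentioned in step 6, the app needs the Hugging Face token at runtime. Below is a hedged sketch of how a Gradio app can pick up a Space secret and use it for flagging; the secret name HF_TOKEN and the dataset name are assumptions, not necessarily the app's actual values.

    # Hedged sketch: reading a Space secret and wiring Gradio's HF flagging.
    # HF_TOKEN is an assumed secret name; the dataset name is illustrative.
    import os
    import gradio as gr

    hf_token = os.environ["HF_TOKEN"]  # Spaces injects repo secrets as env vars
    flag_callback = gr.HuggingFaceDatasetSaver(hf_token, "savtadepth-flags")

    # Passed to the interface so flagged samples land in an HF dataset:
    # gr.Interface(..., flagging_callback=flag_callback)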

Contributing Guide

Here we'll list things we want to work on in the project as well as ways to start contributing. If you'd like to take part, please follow the guide.

Setting up your environment to contribute

  • To get started, fork the repository on DAGsHub
  • Now you have three ways to set up your environment: Google Colab, local, or Docker. If you're not sure which one to go with, we recommend using Colab.

Google Colab

Google Colab can be thought of as your web-connected, GPU-powered IDE. Below is a link to a well-documented Colab notebook that will load the code and data from this repository, enabling you to modify the code and re-run training. Note that you still need to modify the code within the src/code/ folder; adding cells should be used only for testing things out.

You can also use this notebook to load a model from the project and run it on an image you upload to get the depth map. Once it has been saved, you can download it to use on platforms that support it (e.g. Facebook) to create 3D photos.

In order to edit code files, you must save the notebook to your Google Drive. You can do this by typing Ctrl+S (or Cmd+S on Mac).

>> SavtaDepth Colab Environment <<

NOTE: The downside of this method (if you are not familiar with Colab) is that Google Colab limits how long an instance can stay live, so you might be limited in your ability to train models for longer periods of time.

This notebook is also part of this project, in the Notebooks folder, in case it needs modification. You should not commit your version unless your contribution is an improvement to the environment.

Local

  • Clone the repository you just forked by typing the following command in your terminal:

    $ git clone https://dagshub.com/<your-dagshub-username>/SavtaDepth.git
    
  • Create a virtual environment or Conda environment and activate it

    # Create the virtual environment
    $ make env
    
    # Activate the virtual environment
    # VENV
    $ source env/bin/activate
    
    # or Conda
    $ source activate savta_depth
    
  • Install the required libraries

    $ make load_requirements
    

    NOTE: Here I assume a setup without a GPU. Otherwise, you might need to modify the requirements, which is outside the scope of this README (feel free to contribute to this).

  • Pull the dvc files to your workspace by typing:

    $ dvc pull -r origin
    $ dvc checkout  # use this to get the data, models, etc.
    
  • After you have finished your modifications, make sure to do the following:

    • If you modified packages, make sure to update the requirements.txt file accordingly.

    • Push your code to DAGsHub, and your DVC-managed files to your DVC remote. For reference on the commands needed, please refer to the Google Colab notebook section – Committing Your Work and Pushing Back to DAGsHub.

Docker

  • Clone the repository you just forked by typing the following command in your terminal:

    $ git clone https://dagshub.com/<your-dagshub-username>/SavtaDepth.git
    
  • To get your environment up and running, Docker is the best way to go. We use an instance of MLWorkspace.

    • Just run the following commands to get it started:

      $ chmod +x run_dev_env.sh
      $ ./run_dev_env.sh
      
    • Open localhost:8080 to see the workspace you have created. You will be asked for a token – enter dagshub_savta

    • In the top right there is a menu called Open Tool. Click that button and choose Terminal (alternatively, open VSCode and open a terminal there), then type in the following commands to install a virtualenv and dependencies:

      $ make env
      $ source activate savta_depth
      

      Now that we have an environment, let's install all of the required libraries.

      Note: If you don't have a GPU, you will need to install PyTorch separately and then run make requirements. You can install PyTorch for computers without a GPU with the following command:

      $ conda install pytorch torchvision cpuonly -c pytorch
      

      To install the required libraries run the following command:

      $ make load_requirements
      
  • Pull the dvc files to your workspace by typing:

    $ dvc pull -r dvc-remote
    $ dvc checkout  # use this to get the data, models, etc.
    
  • After you have finished your modifications, make sure to do the following:

    • If you modified packages, make sure to update the requirements.txt file accordingly.

    • Push your code to DAGsHub, and your DVC-managed files to your DVC remote. For reference on the commands needed, please refer to the Google Colab notebook section – Committing Your Work and Pushing Back to DAGsHub.


After pushing code and data to DAGsHub

  • Create a Pull Request on DAGsHub!
    • Explain what changes you are making.
    • If your changes affect data or models, make sure they are pushed to your DAGsHub DVC remote, and are included in the PR.
    • We will review your contribution ASAP, and merge it or start a discussion if needed.
  • 🐶

TODO:

  • Web UI
  • Testing various datasets as a basis for training
  • Testing various models for the data
  • Adding qualitative tests for model performance (visually comparing 3D image outputs)