
# DeadTrees

Badges: PyTorch Lightning · Config: Hydra · FastAPI · Streamlit


## Description

Map dead trees from ortho photos. A U-Net (semantic segmentation model) is trained on an ortho photo collection of Luxembourg (year: 2019). This repository contains the preprocessing pipeline, training scripts, models, and a docker-based demo app (backend: FastAPI, frontend: Streamlit).

Fig 1: Streamlit UI for interactive prediction of dead trees in ortho photos.
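
Since the demo app is docker-based, a compose setup is the natural way to start the FastAPI backend and Streamlit frontend together. The sketch below assumes a `docker-compose.yml` at the repository root wiring up both services; the file name and port are assumptions, not verified against this repo:

```bash
# sketch only: start the demo app (FastAPI backend + Streamlit frontend),
# assuming the repository ships a docker-compose.yml for the two services
docker compose up --build
# the Streamlit UI is then usually reachable at http://localhost:8501 (Streamlit's default port)
```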

## How to run

```bash
# clone project
git clone https://github.com/cwerner/deadtrees
cd deadtrees

# [OPTIONAL] create a virtual environment (using venv, pyenv, etc.) and activate it.
# An easy way to get a base system configured is micromamba (a faster alternative
# to anaconda) together with the fastchan channel, which handles the notoriously
# finicky pytorch base dependencies and cuda setup.
wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba

# init shell (~/micromamba below is the location where envs are stored; it could be somewhere else)
./bin/micromamba shell init -s bash -p ~/micromamba
source ~/.bashrc

micromamba create -p deadtrees python=3.9 -c conda-forge
micromamba activate deadtrees
# install cuda-compiled foundational packages (force cuda builds for pytorch since the auto-detection often fails)
micromamba install "pytorch=*=*cuda*" torchvision albumentations -c fastchan -c conda-forge

# install basic requirements:
pip install -e .

# [OPTIONAL] install extra requirements for training:
pip install -e ".[train]"

# [OPTIONAL] install extra requirements to preprocess the raw data
# (instead of reading preprocessed data from S3):
pip install -e ".[preprocess]"

# [ALTERNATIVE] install all subpackages:
pip install -e ".[all]"
```

Download the dataset from S3 (the output of the `createdataset` dvc stage):

```bash
dvc pull createdataset
```
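
If you installed the `preprocess` extras, you could in principle rebuild this stage locally instead of pulling its output; `dvc repro` re-runs a pipeline stage from its definition. Whether `createdataset` works as a repro target and which raw inputs it needs depend on the repository's `dvc.yaml`, so treat this as a sketch:

```bash
# sketch: regenerate the stage output locally instead of pulling from S3
# (requires the raw data and the ".[preprocess]" extras)
dvc repro createdataset
```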

Specify the location of the training dataset on your system by creating a `.env` file with the following syntax:

```bash
export TRAIN_DATASET_PATH="/path_to_my_repos/deadtrees/data/dataset/train"
```
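
Since the file uses `export` statements, it can simply be sourced into the shell before training; whether the scripts additionally pick it up via python-dotenv is not documented here:

```bash
# load the variable into the current shell before training
source .env
echo "$TRAIN_DATASET_PATH"
```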

Train the model with the default configuration (you can adjust the training config on the command line or by editing the hydra yaml files in `configs`):

```bash
python run.py
```
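
Hydra accepts `key=value` overrides directly on the command line, so individual settings can be changed without editing the yaml files. The keys below (`trainer.max_epochs`, `datamodule.batch_size`) are illustrative guesses; check the files in `configs` for the actual names:

```bash
# illustrative only: override hydra config values on the command line
# (the keys are assumptions; see the yaml files in configs/ for the real ones)
python run.py trainer.max_epochs=10 datamodule.batch_size=8
```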
