Cookiecutter-MLOps

A cookiecutter template that applies MLOps best practices, so you can focus on building machine learning products.

Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make virtualenv` to create a Python virtual environment. Skip this step if you use conda or another environment manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
  4. Run `make requirements` to install the required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc add data/raw`.
  7. Edit the code files to your heart's desire.
  8. Process your data, then train and evaluate your model with `dvc repro` or `make reproduce`.
  9. To install the pre-commit hooks, run `make pre-commit-install`.
  10. To set up the data validation tests, run `make setup-data-validation`.
  11. To run the data validation tests, run `make run-data-validation`.
  12. When you're happy with the result, commit the files (including the `.dvc` files) to Git.
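
Taken together, a typical first run might look like the following sketch. It assumes you use the bundled virtualenv (not conda); `<your-repo-url>` is a placeholder for your clone URL, and the data path is illustrative:

```shell
# Clone the template repo and enter it
git clone <your-repo-url> my-mlops-project
cd my-mlops-project

# Create the directory structure and install dependencies
make dirs
make virtualenv
source env/bin/activate
make requirements

# Version the raw data with DVC, then run the pipeline
cp -r /path/to/your/data/* data/raw/
dvc add data/raw
dvc repro            # or: make reproduce

# Optional: pre-commit hooks and data validation
make pre-commit-install
make setup-data-validation
make run-data-validation

# Commit code and DVC pointer files together
git add .
git commit -m "First pipeline run"
```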

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures        <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt    <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt    <- Relevant metrics from training the model.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── great_expectations  <- Folder containing data integrity check files
│   │   ├── make_dataset.py
│   │   └── data_validation.py  <- Script to run data integrity checks
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
├── .pre-commit-config.yaml  <- pre-commit hooks file with selected hooks for the project.
├── dvc.lock           <- Records the state of the DVC pipeline (file hashes and dependencies).
└── dvc.yaml           <- Defines the stages of the DVC pipeline, e.g. processing data and training a model.
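
For reference, a minimal `dvc.yaml` matching this layout might look like the sketch below. The stage names, script arguments, and metric settings are assumptions for illustration, not the template's actual pipeline:

```yaml
stages:
  process_data:
    cmd: python src/data/make_dataset.py data/raw data/processed
    deps:
      - src/data/make_dataset.py
      - data/raw
    outs:
      - data/processed
  train:
    cmd: python src/models/train_model.py data/processed models
    deps:
      - src/models/train_model.py
      - data/processed
    outs:
      - models
    metrics:
      - reports/training_metrics.txt:
          cache: false
  evaluate:
    cmd: python src/models/predict_model.py models data/processed
    deps:
      - src/models/predict_model.py
      - models
    metrics:
      - reports/metrics.txt:
          cache: false
```

`dvc repro` reads this file, re-runs only the stages whose dependencies changed, and records the resulting file hashes in `dvc.lock`.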

Project based on the cookiecutter data science project template. #cookiecutterdatascience


To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.

Made with 🐶 by DAGsHub.
