# MLflow.Demo

Repo for the Udemy tutorial "MLflow in Action - Master the art of MLOps using the MLflow tool".

## Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make virtualenv` to create a Python virtual environment. Skip this if you use conda or another environment manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
  4. Run `make requirements` to install the required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc add data/raw`.
  7. Edit the code files to your heart's desire.
  8. Process your data, then train and evaluate your model with `dvc repro` or `make reproduce`.
  9. To install the pre-commit hooks, run `make pre-commit-install`.
  10. To set up the data validation tests, run `make setup-data-validation`.
  11. To run the data validation tests, run `make run-data-validation`.
  12. When you're happy with the result, commit your files (including the `.dvc` files) to Git. The full sequence is sketched in the shell session below.
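
Taken together, the steps above amount to a terminal session along the following lines. This is a sketch rather than a script: the clone URL and data path are placeholders, and it assumes the Makefile targets behave as described in the list above.

```bash
# Steps 1-4: get the code and set up the environment.
git clone https://dagshub.com/<user>/<repo>.git   # placeholder URL
cd <repo>
make dirs
make virtualenv                  # optional; skip if you use conda etc.
source env/bin/activate
make requirements

# Steps 5-6: drop in the raw data and let DVC track it.
cp -r /path/to/your/raw/data/* data/raw/          # placeholder source path
dvc add data/raw

# Step 8: process data, train, and evaluate.
dvc repro                        # or: make reproduce

# Steps 9-11: pre-commit hooks and data validation.
make pre-commit-install
make setup-data-validation
make run-data-validation

# Step 12: commit code and DVC pointer files to Git.
git add .
git commit -m "Reproduce pipeline on raw data"
```

Note that `dvc add data/raw` writes a small pointer file, `data/raw.dvc`, and stores the data itself in the DVC cache; the pointer file, not the raw data, is what gets committed to Git.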

## Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── processed      <- The final, canonical data sets for modeling.
│   ├── raw.dvc        <- DVC file that tracks the raw data
│   └── raw            <- The original, immutable data dump
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures        <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt    <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt    <- Relevant metrics from training the model.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so `src` can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── great_expectations  <- Folder containing data integrity check files
│   │   ├── make_dataset.py
│   │   └── data_validation.py  <- Script to run data integrity checks
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
├── .pre-commit-config.yaml  <- pre-commit hooks file with selected hooks for the project.
├── dvc.lock           <- The version definition of each dependency, stage, and output from the 
│                         data pipeline.
└── dvc.yaml           <- Defines the data pipeline stages, dependencies, and outputs (a sketch follows below).
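
For orientation, here is a minimal sketch of what a `dvc.yaml` for this layout could look like. The stage names, script arguments, and `cache: false` settings are illustrative assumptions; the authoritative definition is the `dvc.yaml` checked into this repo.

```yaml
stages:
  process_data:            # hypothetical stage name
    cmd: python src/data/make_dataset.py data/raw data/processed
    deps:
      - src/data/make_dataset.py
      - data/raw
    outs:
      - data/processed
  train:
    cmd: python src/models/train_model.py
    deps:
      - src/models/train_model.py
      - data/processed
    outs:
      - models
    metrics:
      - reports/training_metrics.txt:
          cache: false
  evaluate:
    cmd: python src/models/predict_model.py
    deps:
      - src/models/predict_model.py
      - models
    metrics:
      - reports/metrics.txt:
          cache: false
```

`dvc repro` walks these stages in dependency order and, using the hashes recorded in `dvc.lock`, re-runs only the stages whose dependencies have changed.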

Project based on the cookiecutter data science project template. #cookiecutterdatascience


To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.

Made with 🐶 by DAGsHub.
