kidney_diseases_classification

Workflows

  1. Update config.yaml
  2. Update secrets.yaml [Optional]
  3. Update params.yaml
  4. Update the entity
  5. Update the configuration manager in src config
  6. Update the components
  7. Update the pipeline
  8. Update the main.py
  9. Update the dvc.yaml
  10. Update the app.py
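The dvc.yaml from step 9 is what wires these pieces together. A minimal sketch of one stage is below; the stage name, script path, parameter keys, and output folder are assumptions for illustration, not the repository's actual values:

```yaml
stages:
  model_training:
    cmd: python main.py            # entry point updated in step 8
    deps:
      - main.py
      - config/config.yaml         # updated in step 1
    params:
      - EPOCHS                     # keys read from params.yaml (step 3)
      - BATCH_SIZE
    outs:
      - artifacts/training         # hypothetical output directory
```

With a stage defined like this, `dvc repro` re-runs it only when a dependency or parameter changes.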

How to run?

STEPS:

Clone the repository

git clone https://github.com/ashmitaroy17/kidney_diseases_classification.git

STEP 01- Create a conda environment after opening the repository

conda create -n cnncls python=3.11 -y

conda activate cnncls

STEP 02- Install the requirements

pip install -r requirements.txt

Finally, run the following command:

python app.py

Now, open up your local host and port.

MLflow

Documentation

cmd

  1. mlflow ui

dagshub

  2. Run your script with the tracking credentials set:

MLFLOW_TRACKING_URI= MLFLOW_TRACKING_USERNAME= MLFLOW_TRACKING_PASSWORD= python script.py

Run this to export them as environment variables:

export MLFLOW_TRACKING_URI=
export MLFLOW_TRACKING_USERNAME=
export MLFLOW_TRACKING_PASSWORD=
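Instead of exporting the variables in the shell, a training script can also set them itself before the MLflow client connects. A minimal sketch, where the DagsHub URI, username, and token placeholders are assumptions to be replaced with your own values:

```python
import os

# Hypothetical DagsHub credentials -- replace the placeholders
# with your repository's tracking URI, username, and token.
os.environ.setdefault("MLFLOW_TRACKING_URI",
                      "https://dagshub.com/<user>/<repo>.mlflow")
os.environ.setdefault("MLFLOW_TRACKING_USERNAME", "<username>")
os.environ.setdefault("MLFLOW_TRACKING_PASSWORD", "<token>")

# The MLflow client picks these up when it first connects, e.g.:
#   import mlflow
#   mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])
#   with mlflow.start_run():
#       mlflow.log_metric("accuracy", 0.93)

print(os.environ["MLFLOW_TRACKING_URI"])
```

Using `setdefault` means values already exported in the shell take precedence over the in-script placeholders.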

DVC cmd

  1. dvc init
  2. dvc repro
  3. dvc dag

About MLflow & DVC

MLflow

  • It's production grade
  • Traces all of your experiments
  • Logging & tagging your models

DVC

  • It's very lightweight, for POC only
  • A lightweight experiment tracker
  • It can perform orchestration (creating pipelines)

AWS-CICD-Deployment-with-Github-Actions

  1. Login to the AWS console.

  2. Create an IAM user for deployment #with specific access

  3. EC2 access: it is a virtual machine

  4. ECR: Elastic Container Registry, to save your Docker image in AWS

#Description: About the deployment

  1. Build the Docker image of the source code

  2. Push your Docker image to ECR

  3. Launch your EC2

  4. Pull your image from ECR in EC2

  5. Launch your Docker image in EC2
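The image built in step 1 might be defined as follows; this is a minimal Dockerfile sketch, where the base image, exposed port, and entrypoint are assumptions about this repository rather than its actual Dockerfile:

```dockerfile
# Hypothetical Dockerfile for the app
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# app.py serves the prediction UI; port 8080 is an assumption
EXPOSE 8080
CMD ["python", "app.py"]
```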

#Policy:

  1. AmazonEC2ContainerRegistryFullAccess

  2. AmazonEC2FullAccess

  3. Create ECR repo to store/save docker image

  • Save the URI:

  4. Create EC2 machine (Ubuntu)

  5. Open EC2 and install Docker in the EC2 machine:

#optional

sudo apt-get update -y

sudo apt-get upgrade

#required

curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

sudo usermod -aG docker ubuntu

newgrp docker

  6. Configure EC2 as a self-hosted runner:

setting>actions>runner>new self hosted runner> choose os> then run command one by one

  7. Setup github secrets:

AWS_ACCESS_KEY_ID=

AWS_SECRET_ACCESS_KEY=

AWS_REGION = us-east-1

AWS_ECR_LOGIN_URI = 566373416292.dkr.ecr.ap-south-1.amazonaws.com #demo

ECR_REPOSITORY_NAME = simple-app
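A workflow consuming these secrets could be sketched as follows; the file path, job layout, and build steps are assumptions for illustration, not the repository's actual workflow:

```yaml
# .github/workflows/main.yaml (hypothetical path)
name: CICD

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate with AWS using the repository secrets above
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - uses: aws-actions/amazon-ecr-login@v2

      # Build the image and push it to the ECR repo created earlier
      - name: Build and push image
        run: |
          docker build -t ${{ secrets.AWS_ECR_LOGIN_URI }}/${{ secrets.ECR_REPOSITORY_NAME }}:latest .
          docker push ${{ secrets.AWS_ECR_LOGIN_URI }}/${{ secrets.ECR_REPOSITORY_NAME }}:latest
```

A second job running on the self-hosted EC2 runner (step 6) would then pull and run this image, matching deployment steps 4 and 5.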
