
README.md


Facial Recognition Model

Project Overview

This project covers building a facial recognition model using the FER dataset. The model is built on top of a VGG19 model using transfer learning. The main branch utilizes the DagsHub Client to stream data directly from the remote repository. For a more conventional pipeline, clone the second branch, which uses the files as dependencies in the DVC pipeline.
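The streaming mentioned above can be sketched roughly as follows. This is a hedged illustration, not the repository's actual code: it assumes the DagsHub client is installed ('pip install dagshub') and that its streaming hooks are used as documented; the function name 'enable_streaming' is made up for this sketch.

```python
# Illustration only: after install_hooks, ordinary open()/os calls on paths
# inside the repo are served from the DagsHub remote instead of local disk.
try:
    from dagshub.streaming import install_hooks
except ImportError:
    install_hooks = None  # dagshub client not installed; sketch only

def enable_streaming(project_root: str = ".") -> bool:
    """Patch Python's file APIs so repo files stream from the remote."""
    if install_hooks is None:
        return False
    install_hooks(project_root)
    return True
```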

Installation

While you can clone and use the main branch of this project, it serves primarily as a reference for customizing how you run the project from the second branch.

Main Branch
  1. Clone the main repository
  2. Install the dependencies with 'pip install -r requirements.txt' (make sure Python 3.x is installed)
  3. Run the preprocess.py script if you want to generate the train and test files locally
  4. The files are already in the remote repository, so you can simply run the model stage and analyze stage to train your model and analyze its performance
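As context for step 3, the fer2013.csv file stores each 48x48 grayscale face as a space-separated string of 2304 pixel values. A minimal sketch of the kind of parsing preprocess.py performs (the function name and exact shapes here are assumptions, not the script's actual code):

```python
# Sketch: turn one fer2013 'pixels' field into a size x size grid of ints.
def parse_fer_row(pixels: str, size: int = 48):
    values = [int(v) for v in pixels.split()]
    if len(values) != size * size:
        raise ValueError("unexpected pixel count")
    # Slice the flat list into `size` rows of `size` pixels each.
    return [values[r * size:(r + 1) * size] for r in range(size)]

# Tiny synthetic 3x3 example to show the reshape:
grid = parse_fer_row("0 1 2 3 4 5 6 7 8", size=3)
# grid == [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```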
Second Branch
  1. Clone the second repository

  2. Install the dependencies with 'pip install -r requirements.txt' (make sure Python 3.x is installed)

    Sourcing Data

    Option 1: To get the fer2013.csv file, run 'dvc pull' to fetch the DVC-versioned files. You can also navigate to https://www.kaggle.com/datasets/ashishpatel26/facial-expression-recognitionferchallenge and download the data file directly; make sure to place the file in the 'data/raw' subdirectory. After the data is pulled, simply run 'dvc repro'.

    Option 2: Alternatively, you can alter the dvc.yaml file to remove fer2013.csv as a dependency of the "process_data" stage and instead utilize the DDA Python Hooks to stream the data from the remote repository. This implementation is shown in the main branch version of preprocess.py. After making these changes, run 'dvc repro' to run each stage.
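The Option 2 edit to dvc.yaml might look like the following sketch. Only the stage name "process_data" comes from the text above; the command and file paths are assumptions for illustration:

```
stages:
  process_data:
    cmd: python preprocess.py
    deps:
      - preprocess.py
      # data/raw/fer2013.csv removed as a dependency; preprocess.py now
      # streams it via the DDA Python Hooks instead
    outs:
      - data/processed
```

With the CSV no longer listed under deps, DVC will not require the local file before running the stage.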

Model Tuning

To tune your model's hyperparameters, you can use the tune_model.ipynb notebook, which uses Keras Tuner to find the best parameters and tracks performance with MLflow. The tracking URI is set to this repository, so if you plan to run experiments, refer to the following guide for more information: https://dagshub.com/docs/integration_guide/mlflow_tracking/
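Setting the tracking URI typically amounts to a single call before the tuning run starts. A minimal sketch, assuming MLflow is installed; the repository URL below is a placeholder, not this project's actual URI:

```python
# Sketch: point MLflow at a remote DagsHub tracking server before tuning.
try:
    import mlflow
except ImportError:
    mlflow = None  # mlflow not installed; sketch only

TRACKING_URI = "https://dagshub.com/<user>/<repo>.mlflow"  # placeholder

def configure_tracking(uri: str = TRACKING_URI) -> bool:
    """Route subsequent mlflow.log_* calls to the remote server."""
    if mlflow is None:
        return False
    mlflow.set_tracking_uri(uri)
    return True
```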

