
Disaster Response Pipeline Project


Table of Contents

  1. Installation
     - Instructions
  2. Project Motivation
  3. File Descriptions
     - File Structure
  4. Results
  5. Licensing, Authors, Acknowledgements

Installation

Clone the repository:

git clone https://github.com/linnforsman/disaster-response-pipeline.git

Instructions:

  1. Run the following commands in the project's root directory to set up the database and model.
  • To run the ETL pipeline, which cleans the data and stores it in the database (see the sketch after this list): python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db

  • To run the ML pipeline, which trains the classifier and saves it: python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl

  2. Run the following command in the app's directory to start the web app: python run.py

  3. Go to http://0.0.0.0:3001/
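
For orientation, here is a minimal sketch of what the ETL step in data/process_data.py does, assuming the standard project template: load the two CSVs, merge them on id, expand the combined category string into binary columns, and write the result to SQLite. Function and table names below are illustrative and may not match the script exactly.

```python
import sys

import pandas as pd
from sqlalchemy import create_engine


def run_etl(messages_path, categories_path, database_path):
    # Load the raw CSVs and merge them on the shared `id` column.
    messages = pd.read_csv(messages_path)
    categories = pd.read_csv(categories_path)
    df = messages.merge(categories, on="id")

    # Expand the combined category string ("related-1;request-0;...")
    # into one binary column per category.
    cats = df["categories"].str.split(";", expand=True)
    cats.columns = cats.iloc[0].str.rsplit("-", n=1).str[0]
    for col in cats.columns:
        cats[col] = cats[col].str.rsplit("-", n=1).str[1].astype(int)

    df = pd.concat([df.drop(columns="categories"), cats], axis=1)
    df = df.drop_duplicates()

    # Save the cleaned data to SQLite for the ML pipeline to consume.
    engine = create_engine(f"sqlite:///{database_path}")
    df.to_sql("DisasterResponse", engine, index=False, if_exists="replace")


if __name__ == "__main__":
    run_etl(*sys.argv[1:4])
```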

Project Motivation

This project is part of the Data Scientist Nanodegree by Udacity, in collaboration with Figure Eight. The dataset contains pre-labelled tweets and messages from real-life disaster events. The aim of the project is to build a Natural Language Processing (NLP) model that categorizes messages in real time.

File Descriptions

  1. data/process_data.py: Contains the ETL pipeline that processes the raw data and stores it in the database.
  2. models/train_classifier.py: Contains the ML pipeline that trains the classifier and saves it as a pickle file (a rough sketch follows this list).
  3. app/templates/*.html: The HTML templates for the web app.
  4. app/run.py: Contains the Flask app that serves the web app (sketched after the file structure below).
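
To make the training step concrete, the following is a rough sketch of the kind of pipeline train_classifier.py builds: TF-IDF features feeding a multi-output classifier, evaluated on a held-out split and pickled. The actual script may use a custom tokenizer, a different estimator, or grid search; the table name and column layout are assumed from the ETL sketch above.

```python
import pickle
import sys

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sqlalchemy import create_engine


def train(database_path, model_path):
    # Load the cleaned data written by the ETL step.
    engine = create_engine(f"sqlite:///{database_path}")
    df = pd.read_sql_table("DisasterResponse", engine)
    X = df["message"]
    Y = df.iloc[:, 4:]  # assumes id, message, original, genre come first

    # TF-IDF features feeding one classifier per category column.
    model = Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english")),
        ("clf", MultiOutputClassifier(RandomForestClassifier())),
    ])

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=42)
    model.fit(X_train, Y_train)
    print(f"Exact-match accuracy: {model.score(X_test, Y_test):.3f}")

    # Persist the fitted pipeline so the web app can load it.
    with open(model_path, "wb") as f:
        pickle.dump(model, f)


if __name__ == "__main__":
    train(*sys.argv[1:3])
```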

File Structure

app
|- templates
|  |- master.html  # main page of web app
|  |- go.html  # classification result page of web app
|- run.py  # Flask file that runs the app
data
|- disaster_categories.csv  # data to process
|- disaster_messages.csv  # data to process
|- process_data.py  # ETL pipeline script
|- DisasterResponse.db  # database the cleaned data is saved to
models
|- train_classifier.py  # ML pipeline script
|- classifier.pkl  # saved model
notebooks
|- ETL Pipeline Preparation.ipynb  # Jupyter notebook
|- ML Pipeline Preparation.ipynb  # Jupyter notebook
README.md
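
A stripped-down sketch of what app/run.py plausibly looks like, given the structure above: load the pickled model, serve master.html, and classify the query submitted to /go. The routes and template context here are assumptions, not the script's confirmed contents.

```python
import pickle

from flask import Flask, render_template, request

app = Flask(__name__)

# Load the trained pipeline once at startup (path relative to the app/ folder).
with open("../models/classifier.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/")
def index():
    # Main page with the message input form.
    return render_template("master.html")


@app.route("/go")
def go():
    # Classify the submitted message and render the result page.
    query = request.args.get("query", "")
    labels = model.predict([query])[0]
    return render_template("go.html", query=query, classification=labels)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3001, debug=True)
```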

Results

  1. Type an example message into the text box to test the model's performance. (Screenshot: Disaster Response Pipeline)
  2. After clicking Classify Message, the categories the message belongs to are highlighted in green (a standalone usage snippet follows this list). (Screenshot: Disaster Response Pipeline)
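
The same model can be exercised outside the web app. The snippet below is hypothetical, reusing the paths and table name assumed in the sketches above: load classifier.pkl, recover the category names from the database, and print the categories predicted for a sample message.

```python
import pickle

import pandas as pd
from sqlalchemy import create_engine

# Load the fitted pipeline saved by the training step.
with open("models/classifier.pkl", "rb") as f:
    model = pickle.load(f)

# Category names are the label columns of the cleaned table.
engine = create_engine("sqlite:///data/DisasterResponse.db")
category_names = pd.read_sql_table("DisasterResponse", engine).columns[4:]

message = "We are trapped by the flood and need water and medical supplies"
prediction = model.predict([message])[0]
print([cat for cat, flag in zip(category_names, prediction) if flag == 1])
```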

Licensing, Authors, Acknowledgements

The data was provided by Figure Eight in collaboration with Udacity.
