mirror lendingclub repo from github


Data Pipeline

(DVC pipeline diagram; legend: DVC Managed File, Git Managed File, Metric, Stage File, External File)

A temporary Readme

lendingclub

For data-driven loan selection on lendingclub. Important packages are sklearn, pandas, numpy, pytorch, and fastai.

1) The current model is RF (sklearn) + NN (pytorch). Performance was compared against picking loans entirely at random and picking at random within the historically best-performing loan grade.
2) Investigative models are trained on old done loans and validated on the newest of the old done loans.
3) Models used in the invest scripts are trained on all available training data.
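The validation scheme above (train on older done loans, validate on the newest of them) can be sketched as a simple time-ordered split. This is a minimal illustration, not the repo's actual code; the column name `issue_d` and the 20% validation fraction are assumptions.

```python
import pandas as pd

def time_split(done_loans: pd.DataFrame, date_col: str = "issue_d",
               valid_frac: float = 0.2):
    """Split done loans chronologically: oldest for training,
    newest slice for validation (column name is an assumption)."""
    ordered = done_loans.sort_values(date_col)
    cut = int(len(ordered) * (1 - valid_frac))
    return ordered.iloc[:cut], ordered.iloc[cut:]
```

Sorting before cutting guarantees the validation set is strictly newer than the training set, which avoids look-ahead leakage when comparing against the random-picking baselines.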

Notes about the csvs/data

1) Even though LC only issues loans A1-D5, they still internally have A1 - G3/5 in the loan info. I checked the interest rates and grades against the information at https://www.lendingclub.com/foliofn/rateDetail.action

Strange loans are separated out after all cleaning steps
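Separating out strange loans after cleaning might look like the sketch below. The actual criteria live in the pipeline scripts; the column names (`funded_amnt`, `int_rate`) and thresholds here are purely illustrative assumptions.

```python
import pandas as pd

def separate_strange(loans: pd.DataFrame):
    """Split loans into (normal, strange) using hypothetical sanity
    checks; real cleaning rules are defined in the pipeline."""
    strange_mask = (loans["funded_amnt"] <= 0) | (loans["int_rate"] <= 0)
    return loans[~strange_mask].copy(), loans[strange_mask].copy()
```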

Git Tags

Various git tags for navigating between datasets and dev/full datasets. Proposed scheme: datav0.0.0 <- model/scorer.dataprocessingtype.raw_data_csvs? On each redownload of new data, increment the rightmost number?

DVC Stuff

1) When you want new raw_csvs: python lendingclub/csv_dl_archiving/01_download_LC_csvs.py, but beware: https://dvc.org/doc/user-guide/update-tracked-file

Usage:

After cloning, it's advisable to set up an environment. In the root dir (lendingclub) with setup.py, run pip install -e . Then properly set up account_info.py in user_creds (see example).

Run order (all scripts in the lendingclub subdir):
1) python lendingclub/csv_dl_archiving/01_download_and_check_csvs.py
2) python lendingclub/csv_prepartion/02_unzip_csvs.py
3) Before running clean_loan_info, cd to lendingclub/csv_preparation and run python setup.py build_ext. This will make a build dir in the current directory; copy the .so (unix) or .pyd (windows) into the current directory.

Notes to self:

j_utils is imported and used in several scripts. See the repo https://github.com/jmhsi/j_utils

To fix permission troubles, I ended up adding jenkins and justin to each other's groups (sudo usermod -a -G groupName userName) and doing chmod 664 (774) on .fth files, dataframes, and other files as necessary.

The current jenkins setup runs in a conda environment (based on https://mdyzma.github.io/2017/10/14/python-app-and-jenkins/). Considering moving to docker containers once I build the Dockerfiles?

Made a symlink (ln -s /home/justin/projects /var/lib/jenkins/projects) so jenkins can run scripts as if from the actual projects directory.

.fth files to work with after initial data and eval prep:

'eval_loan_info.fth', 'scaled_pmt_hist.fth', 'base_loan_info.fth', 'str_loan_info.fth'