
Interpretability implementations + demos

Code for implementations of interpretable machine learning models and demos of how to use various interpretability techniques (with accompanying slides here).

Code implementations

Provides scikit-learn-style wrappers/implementations of different interpretable models (see the readmes in the individual folders within imodels for details).
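To make "scikit-learn-style" concrete, the wrapper pattern such models follow can be sketched with a toy estimator (the class and its threshold rule below are hypothetical illustrations, not part of the package):

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class ThresholdRuleClassifier(BaseEstimator, ClassifierMixin):
    """Toy interpretable model: predicts 1 when feature 0 exceeds a learned threshold.
    Hypothetical example of the sklearn estimator interface (fit/predict/score)."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # simple midpoint rule: threshold halfway between the class means of feature 0
        self.threshold_ = (X[y == 1, 0].mean() + X[y == 0, 0].mean()) / 2
        return self

    def predict(self, X):
        return (np.asarray(X)[:, 0] > self.threshold_).astype(int)

X = np.array([[0.0], [1.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 1])
clf = ThresholdRuleClassifier().fit(X, y)
print(clf.predict([[0.5], [4.5]]))  # -> [0 1]
print(clf.score(X, y))              # -> 1.0 (accuracy, via ClassifierMixin)
```

Because the wrappers follow this interface, they drop into the usual scikit-learn workflow (pipelines, cross-validation, grid search) like any other estimator.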

Demo notebooks

The demos are contained in three main notebooks, summarized in cheat_sheet.pdf:

  1. model_based.ipynb - how to use different interpretable models
  2. posthoc.ipynb - simple post-hoc analyses for interpreting a trained model
  3. uncertainty.ipynb - code for obtaining uncertainty estimates from a model
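As one example of the kind of post-hoc analysis covered in posthoc.ipynb, permutation importance can be computed on any fitted model with scikit-learn (a sketch on synthetic data, not the notebook's exact code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 determines the label

model = RandomForestClassifier(random_state=0).fit(X, y)

# shuffle each feature in turn and measure the drop in score;
# a large drop means the model relies heavily on that feature
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 0 should dominate
```

Permutation importance is model-agnostic, which is what makes it useful as a post-hoc technique: it needs only the fitted model and a scoring function, not access to the model's internals.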

Installation / quickstart

The interpretable models within the imodels folder can be easily installed and used.

pip install git+https://github.com/csinva/interpretability-implementations-demos

from imodels.bayesian_rule_lists import RuleListClassifier

model = RuleListClassifier()   # scikit-learn-style interpretable model
model.fit(X_train, y_train)    # learn a rule list from the training data
model.score(X_test, y_test)    # accuracy on held-out data
preds = model.predict(X_test)  # class predictions
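The quickstart above assumes the package is installed. As a runnable stand-in, the same sklearn-style workflow can be sketched with a shallow decision tree (another interpretable baseline) on synthetic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# synthetic stand-in data: label depends on the sum of two features
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# a shallow tree plays the role of the interpretable model
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # held-out accuracy
preds = model.predict(X_test)

# unlike a black box, the fitted model can be printed as human-readable rules
print(export_text(model, feature_names=["x0", "x1"]))
```

Any of the models in imodels should slot into this same fit/score/predict pattern, since they follow the scikit-learn estimator interface.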

References / further reading

About

Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
