
Interpretability demos + implementations

Demos showing how to use various interpretability techniques (accompanying slides here), along with code implementing interpretable machine-learning models.

Implementations of interpretable models

Provides scikit-learn-style wrappers/implementations of different interpretable models (see the readmes in the individual folders within imodels for details).

The interpretable models within the imodels folder can be easily installed and used.

pip install git+https://github.com/csinva/interpretability-implementations-demos

from imodels import RuleListClassifier, RuleFit, GreedyRuleList, SLIM
model = RuleListClassifier()     # Bayesian rule list
model.fit(X_train, y_train)      # fit on training data
model.score(X_test, y_test)      # accuracy on held-out data
preds = model.predict(X_test)    # predictions for new data
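Because the wrappers follow the scikit-learn estimator API, they can be dropped into standard sklearn tooling such as cross-validation. A minimal sketch of the idea (using sklearn's own DecisionTreeClassifier as a stand-in estimator, since any model with fit/predict/score works the same way):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# any sklearn-compatible estimator can go here,
# including the imodels wrappers imported above
scores = cross_val_score(DecisionTreeClassifier(max_depth=2, random_state=0),
                         X, y, cv=5)  # 5-fold cross-validated accuracy
```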

Demo notebooks

The demos are contained in 3 main notebooks, following this cheat sheet:

  1. model_based.ipynb - how to use different interpretable models
  2. posthoc.ipynb - different simple analyses to interpret a trained model
  3. uncertainty.ipynb - code to get uncertainty estimates for a model
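As one illustration of the kind of posthoc analysis covered in the second notebook (a hedged sketch, not code taken from the notebook itself), permutation importance scores a trained model by shuffling one feature at a time and measuring how much held-out accuracy drops:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# train any black-box model, then interpret it after the fact
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# shuffle each feature on the test set; a large accuracy drop
# means the model relies heavily on that feature
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]  # indices of top-5 features
```

This is model-agnostic: it only needs predictions, so it applies equally to an interpretable model or a black box.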

References / further reading
