Interpretable machine learning models (imodels) 🔍

Python package for concise, transparent, and accurate predictive modeling. All sklearn-compatible and easily understandable. Pull requests very welcome!

Docs | Popular imodels | Custom imodels | Demo notebooks

Implementations of different interpretable models, all compatible with scikit-learn. The interpretable models are easy to install and use:

from imodels import BayesianRuleListClassifier, GreedyRuleListClassifier, SkopeRulesClassifier
from imodels import SLIMRegressor, RuleFitRegressor

model = BayesianRuleListClassifier()  # initialize a model
model.fit(X_train, y_train)   # fit model
preds = model.predict(X_test) # discrete predictions: shape is (n_test,)
preds_proba = model.predict_proba(X_test) # predicted probabilities: shape is (n_test, n_classes)
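
The regressors follow the same sklearn-style API. A minimal sketch (the diabetes dataset and train/test split below are illustrative assumptions, not part of the package):

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from imodels import RuleFitRegressor

X, y = load_diabetes(return_X_y=True)  # any numeric regression dataset works here
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = RuleFitRegressor()      # sparse linear model over rules extracted from trees
reg.fit(X_train, y_train)     # fit like any sklearn regressor
y_pred = reg.predict(X_test)  # continuous predictions: shape is (n_test,)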

Install with pip install imodels (see here for help). The package contains the following models:

| Model | Reference | Description |
| --- | --- | --- |
| Rulefit rule set | 🗂️, 🔗, 📄 | Extracts rules from a decision tree, then builds a sparse linear model with them |
| Skope rule set | 🗂️, 🔗 | Extracts rules from gradient-boosted trees, deduplicates them, then forms a linear combination of them based on their OOB precision |
| Boosted rule set | 🗂️, 🔗, 📄 | Uses AdaBoost to sequentially learn a set of rules |
| Bayesian rule list | 🗂️, 🔗, 📄 | Learns a compact rule list by sampling rule lists (rather than using a greedy heuristic) |
| Greedy rule list | 🗂️, 🔗 | Uses CART to learn a list (only a single path), rather than a decision tree |
| OneR rule list | 🗂️, 📄 | Learns a rule list restricted to only one feature |
| Optimal rule tree | 🗂️, 🔗, 📄 | (In progress) Learns succinct trees using global optimization rather than greedy heuristics |
| Iterative random forest | 🗂️, 🔗, 📄 | (In progress) Repeatedly fits a random forest, giving features with high importance a higher chance of being selected |
| Sparse integer linear model | 🗂️, 📄 | Forces coefficients to be integers |
| Rule sets | | (Coming soon) Many popular rule sets, including SLIPPER, Lightweight Rule Induction, and MLRules |

Docs 🗂️, Reference code implementation 🔗, Research paper 📄

Custom interpretable models

Each of the models above ultimately takes one of the following forms, which aim to be simultaneously simple to understand and highly predictive:

Rule set | Rule list | Rule tree | Algebraic models

Different models and algorithms vary not only in their final form but also in the choices made during modeling. In particular, many models differ in the three steps listed below. For example, RuleFit and SkopeRules differ only in the way they prune rules: RuleFit uses a linear model whereas SkopeRules heuristically deduplicates rules that share overlap. As another example, Bayesian rule lists and greedy rule lists differ in how they select rules: Bayesian rule lists perform a global optimization over possible rule lists while greedy rule lists pick splits sequentially to maximize a given criterion. See the docs for individual models for further descriptions.

Rule candidate generation | Rule selection | Rule pruning / combination
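
As a rough illustration that these models share a common interface even though they make different choices in the steps above, the sketch below fits two of the rule-based classifiers on the same data and compares their held-out accuracy (the breast-cancer dataset and the accuracy metric are illustrative assumptions; exact rules and scores will vary):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imodels import GreedyRuleListClassifier, SkopeRulesClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# both classifiers expose the same fit/predict interface,
# even though they generate, select, and prune rules differently
for model in [GreedyRuleListClassifier(), SkopeRulesClassifier()]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(type(model).__name__, round(acc, 3))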

The code here contains many useful and readable functions for rule-based learning in the util folder. This includes functions and classes for rule deduplication, rule screening, and converting between trees, rule sets, and neural networks.

Demo notebooks

Demos are contained in the notebooks folder.

  • imodels_demo.ipynb demos the imodels package, showing how to fit, predict, and visualize with different interpretable models
  • this notebook shows an example of using imodels to derive a clinical decision rule
  • we also include some demos of posthoc analysis, which occurs after fitting models (a generic sketch follows after this list):
    • posthoc.ipynb - shows different simple analyses for interpreting a trained model
    • uncertainty.ipynb - basic code for getting uncertainty estimates for a model
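
Because the models are sklearn-compatible, generic post-hoc tooling also works with them. For example, a permutation-importance check (a sketch using scikit-learn's permutation_importance, not code taken from the notebooks; the dataset and model choice are assumptions):

from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from imodels import SkopeRulesClassifier

X, y = load_breast_cancer(return_X_y=True)
model = SkopeRulesClassifier()
model.fit(X, y)

# rank features by how much shuffling each one hurts accuracy
result = permutation_importance(model, X, y, scoring="accuracy", n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(i, round(float(result.importances_mean[i]), 4))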

References

  • Readings
    • Good quick overview of interpretable ML: Murdoch et al. 2019, pdf
    • Interpretable ML book: Molnar 2019, pdf
    • Case for interpretable models rather than post-hoc explanation: Rudin 2019, pdf
    • Review on evaluating interpretability: Doshi-Velez & Kim 2017, pdf
  • Reference implementations (also linked above): the code here heavily derives from the wonderful work of previous projects. We seek to extract out, unify, and maintain key parts of these projects.
  • Compatible packages
  • Related packages
    • gplearn for symbolic regression/classification
    • pygam for generalized additive models

For updates, star the repo, see this related repo, or follow @csinva_. Please make sure to give authors of original methods / base implementations appropriate credit!
