Permutation Feature Importance

Description:

Permutation feature importance is a method for evaluating how much each feature contributes to a machine learning model. The values of one feature are shuffled (or the feature is removed) while all other features are kept fixed, and the drop in the model's performance is recorded. This process is repeated for each feature; the larger the performance degradation, the more important the feature. See the Experiments tab in the DagsHub repository or the MLflow UI for the logged runs.

  • Features 1-3: numerical values
  • Features 4-8: time-related data
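
The baseline reported in the next section can be measured with a small evaluation helper. Below is a minimal sketch, assuming a trained PyTorch regression MLP `model` and a validation `DataLoader` `val_loader` (placeholder names, not the repository's actual identifiers):

```python
import torch
import torch.nn as nn

def evaluate(model, val_loader, loss_fn=nn.MSELoss()):
    """Return the average per-sample validation loss of `model` over `val_loader`."""
    model.eval()
    total_loss, n = 0.0, 0
    with torch.no_grad():
        for X, y in val_loader:
            total_loss += loss_fn(model(X), y).item() * len(X)
            n += len(X)
    return total_loss / n

# base_loss = evaluate(model, val_loader)  # the "Full model" row in the tables below
```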

Performance with all features:

[Figure: base_performance]

Feature Deletion for Permutation Importance:

Feature deletion removes a particular feature from the input data and measures the effect on the model's performance. It is not commonly used with MLPs: deleting a column changes the input dimension, so the network must be rebuilt and retrained, and the comparison mixes the effect of the missing feature with the effect of retraining. Moreover, every hidden unit in an MLP layer is connected to all input features, so removing one feature can affect the model in complex ways that are hard to attribute to that feature alone.

[Figure: drop_fea]
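
That said, the table below does report deletion-based results, which requires retraining a fresh model for each dropped column. A minimal sketch of one such run, assuming tensors `X_train`, `y_train`, `X_val`, `y_val` and a hypothetical `build_and_train` helper that constructs and fits an MLP for the reduced input width:

```python
import torch
import torch.nn as nn

def drop_feature_val_loss(X_train, y_train, X_val, y_val, idx, build_and_train):
    """Delete feature column `idx`, retrain from scratch, and return the validation loss."""
    keep = [i for i in range(X_train.shape[1]) if i != idx]
    # Deleting a column shrinks the input layer, so the MLP must be rebuilt and
    # retrained for each feature -- the main cost of the deletion approach.
    model = build_and_train(X_train[:, keep], y_train)
    model.eval()
    with torch.no_grad():
        return nn.MSELoss()(model(X_val[:, keep]), y_val).item()
```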

| Dropped feature | Val loss |
| --- | --- |
| Full model | 4.044 |
| Feature 01 | 4.078 |
| Feature 02 | 4.082 |
| Feature 03 | 4.114 |
| Feature 04 | 8.039 |
| Feature 05 | 4.075 |
| Feature 06 | 4.034 |
| Feature 07 | 4.075 |
| Feature 08 | 4.402 |

[Figure: drop_fea_bar_chart]

Feature Shuffling for Permutation Importance:

In multi-layer perceptron (MLP) models, feature shuffling is the usual way to compute permutation feature importance. The values of one feature are randomly permuted in the input data while all other features are kept fixed, and the effect on the model's performance is measured. No retraining is required, since the input dimension is unchanged.

[Figure: permutation_importance]
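
A minimal sketch of one shuffle pass, again with placeholder names (`model`, `X_val`, `y_val`); in practice the permutation is often repeated several times and the losses averaged to reduce variance:

```python
import torch
import torch.nn as nn

def shuffled_val_loss(model, X_val, y_val, idx, loss_fn=nn.MSELoss()):
    """Validation loss after randomly permuting feature column `idx`; no retraining."""
    X_perm = X_val.clone()
    X_perm[:, idx] = X_val[torch.randperm(len(X_val)), idx]  # shuffle one column only
    model.eval()
    with torch.no_grad():
        return loss_fn(model(X_perm), y_val).item()

# importance of feature idx = shuffled_val_loss(...) - validation loss of the full model
```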

| Shuffled feature | Val loss |
| --- | --- |
| Full model | 4.044 |
| Feature 01 | 4.074 |
| Feature 02 | 4.127 |
| Feature 03 | 4.182 |
| Feature 04 | 8.474 |
| Feature 05 | 8.470 |
| Feature 06 | 8.484 |
| Feature 07 | 8.669 |
| Feature 08 | 11.696 |

[Figure: shuffle_fea_bar_chart]
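
Reading the table as importance scores: a feature's permutation importance is the increase in validation loss over the full model. Computing the deltas from the numbers above:

```python
base_loss = 4.044
shuffled = {1: 4.074, 2: 4.127, 3: 4.182, 4: 8.474,
            5: 8.470, 6: 8.484, 7: 8.669, 8: 11.696}

# Importance = shuffled validation loss minus the full-model baseline.
importance = {f: round(loss - base_loss, 3) for f, loss in shuffled.items()}
print(sorted(importance.items(), key=lambda kv: kv[1], reverse=True))
# Feature 8 dominates (+7.652); Features 4-7 each add ~4.43-4.63;
# Features 1-3 contribute little (< 0.14).
```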