
XAI: Feature Importance and Prediction Path

Feature importance

Feature importance is one of the standard ways to explain a model. It provides a high-level overview of how the model uses its features to arrive at a prediction, and it also helps debug issues in the model.

AryaXAI provides feature importance on two levels:

- Global Feature importance: Refers to the assessment of the significance of each feature across an entire dataset or project. It provides an overarching understanding of how different features contribute to the model's predictions or outcomes on a broader scale.

- Local explanations (at case level): Focuses on evaluating the significance of features for individual predictions or instances within the dataset. It provides insight into how specific features contribute to the model's decision-making for each case. This analysis is more granular: it identifies the relative importance of features for particular instances, showing how the model uses different features to arrive at predictions on a case-by-case basis.
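
To make the global/local distinction concrete, here is a minimal, self-contained sketch. This is not AryaXAI's implementation; it assumes a simple linear model purely for illustration. A local explanation for one case is each feature's contribution (coefficient times value), and a global importance score aggregates those contributions across the dataset.

```python
# Illustrative only: a hand-rolled linear model, not the AryaXAI XAI model.

def local_contributions(coefs, case):
    """Per-feature contributions for a single case (local explanation)."""
    return {f: coefs[f] * v for f, v in case.items()}

def global_importance(coefs, dataset):
    """Mean absolute contribution of each feature across all cases (global view)."""
    totals = {f: 0.0 for f in coefs}
    for case in dataset:
        for f, c in local_contributions(coefs, case).items():
            totals[f] += abs(c)
    return {f: t / len(dataset) for f, t in totals.items()}

# Hypothetical coefficients and cases for demonstration.
coefs = {"income": 0.5, "age": -0.1}
dataset = [{"income": 2.0, "age": 30.0}, {"income": 4.0, "age": 50.0}]

print(local_contributions(coefs, dataset[0]))  # case-level view
print(global_importance(coefs, dataset))       # dataset-level view
```

Note that a feature can matter little for one case (local) yet rank highly overall (global), which is why both views are offered.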

AryaXAI uses the features defined in the data settings and creates an XAI model to derive feature importance. In the developer version of the product, the first file uploaded is used as the training data to build the XAI model. In the AryaXAI enterprise version, the model itself can be used directly to build the XAI model.




Data settings:

When defining the data settings, ensure that the features match those actually used in your model, for higher explainability accuracy. These final features are mandatory in any new file uploaded to that project.

- Model Type: Select the model type: classification or regression.

- UID: Define the variable to be used as the UID. This is used to identify duplicate cases.

- True value: Select the true value variable in the data.

- Predicted value: Select the predicted value variable in the data. If the predicted value is missing, AryaXAI will use the true value to build the XAI model.

- Features to exclude: Exclude all features that were not used in your modelling or are irrelevant to your model.

- Exclude other UIDs: If there are any other UIDs, you can exclude them by checking this option.
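
As a summary of the fields above, the settings might be collected roughly as follows. This is a hypothetical illustration only; the column names are invented, and the actual AryaXAI GUI/SDK field names may differ.

```python
# Hypothetical data-settings summary; field and column names are illustrative.
data_settings = {
    "model_type": "classification",    # classification or regression
    "uid": "customer_id",              # used to identify duplicate cases
    "true_value": "default_flag",      # ground-truth column
    "predicted_value": "model_score",  # model output column
    "features_to_exclude": ["notes", "other_id"],  # unused or irrelevant columns
}
```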

AryaXAI then runs AutoML and builds an XAI model based on these settings, which is used to derive feature importance.
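
The idea behind such a surrogate XAI model can be sketched in a few lines. This is a conceptual illustration, not AryaXAI's AutoML: an interpretable model is fitted to the original model's predictions so that its structure can be inspected for explanations.

```python
# Conceptual surrogate-model sketch: fit a simple, interpretable model
# (here, one-feature least squares) to a black-box model's predictions.

def fit_linear_surrogate(xs, preds):
    """Least-squares fit of y = a*x + b to the black-box predictions."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(preds) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, preds)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Pretend black-box model: some opaque scoring function (hypothetical).
black_box = lambda x: 2.0 * x + 1.0
xs = [0.0, 1.0, 2.0, 3.0]
preds = [black_box(x) for x in xs]

a, b = fit_linear_surrogate(xs, preds)
print(a, b)  # the surrogate recovers the black box's slope and intercept
```

The surrogate's parameters are then readable, which is what makes explanations possible even when the original model is opaque.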

Here, you can view the Global explainability, Observations, and Case views.

AryaXAI - ML Explainability dashboard
Note: The data settings defined here become the base for training the explainable model. The features selected here should align with the final features used in your model.

Global Feature importance

Through GUI

The global feature importance dashboard displays the aggregation of features and feature importance across all the baseline data. 

AryaXAI: Global feature importance dashboard

Through SDK

Note: By default, the active model is 'XGBoost_default', which is the AryaXAI surrogate model.

To view all available models and set a different model as active, use the commands mentioned below.


project.models() # list all available models

project.activate_model('model_name')  # make any model active

To get the global feature importance of the current active model via the SDK:


modelinfo.feature_importance()

A few additional commands:


# Get Information of active model
modelinfo = project.model_summary()
modelinfo.info()

# The tree prediction path the model used for the prediction (only for tree-based models)
modelinfo.prediction_path()

Local explanations: Case-wise

The ‘View cases’ tab displays all your data points. You can also filter among the data points using a Unique identifier, the data upload dates or the data tag. For in-depth insights into a particular case, click on ‘View’ under the ‘Options’ column in the cases summary table. This will lead you to the case view dashboard.

Through SDK

The command below displays a list of cases. The list allows you to apply filters using tags and to search for a particular case by its unique identifier.


project.cases(tag='Training') 

Use the command below to fetch explainability for a case. This uses the current 'active' model.


case_info = project.case_info('unique_identifier','tag')

# Case Decision
case_info.explainability_decision() 

Note: If you change the active model, the prediction and explainability will change as well.

Feature Importance

Through GUI

Selecting the ‘View’ option for a particular case provides a complete overview of the parameters AryaXAI is using for explainability. You can view the local features and the feature importance plot.

The feature importance plot displays the top 20 features, and you can select the ‘Show more’ tab to view all the features in your data that positively and negatively impact the prediction.  
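
The ranking and positive/negative split described above can be sketched as follows. This is an illustrative computation over hypothetical scores, not the format actually returned by the AryaXAI SDK.

```python
# Illustrative only: rank per-case importance scores and split them by sign.

def top_features(importances, n=20):
    """Top-n features by absolute importance (as in the dashboard plot)."""
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:n]

def split_by_sign(importances):
    """Separate features that push the prediction up from those pushing it down."""
    positive = {f: v for f, v in importances.items() if v > 0}
    negative = {f: v for f, v in importances.items() if v < 0}
    return positive, negative

# Hypothetical case-level importance scores.
scores = {"income": 0.42, "age": -0.17, "tenure": 0.05, "balance": -0.31}

print(top_features(scores, n=2))  # the two strongest drivers for this case
print(split_by_sign(scores))
```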

Note: The surrogate XAI model (parallel model) uses the features (or variables) configured in data settings to build the explainability model.

Through SDK

For case-wise feature importance via SDK:


# feature importance plot
case_info.explainability_feature_importance()

You can filter the list of cases by utilizing tags and search for specific cases using their unique identifiers.


project.cases(tag='Training')

Additional commands:


# Case Decision
case_info.explainability_decision()

#Case Feature Importance
case_info.explainability_feature_importance()

Raw data

The ‘Raw data’ tab displays the details of the data uploaded, where you can verify if the data upload was done correctly.

AryaXAI - Raw data dashboard

Through SDK

To fetch the raw data of all features for a particular case via the SDK, use the following command:


# raw data
case_info.explainability_raw_data()

Prediction Path

The Prediction path tab displays the path followed by tree-based models like XGBoost or LightGBM for a particular prediction. It represents the route taken through the decision trees, showing which features were evaluated and the decision made at each node until the sample reaches a leaf and a prediction is generated.
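
The traversal described above can be sketched with a toy tree. This is a hypothetical single tree for illustration; real ensembles like XGBoost aggregate many such paths, one per tree.

```python
# Minimal sketch of a prediction path through one decision tree.
# Internal node: (feature, threshold, left_child, right_child); a leaf is a label.
tree = ("income", 3.0,
        ("age", 40.0, "low_risk", "medium_risk"),
        "high_risk")

def prediction_path(node, case):
    """Return the decisions taken from the root down to a leaf."""
    path = []
    while isinstance(node, tuple):
        feature, threshold, left, right = node
        went_left = case[feature] <= threshold
        path.append(f"{feature} {'<=' if went_left else '>'} {threshold}")
        node = left if went_left else right
    path.append(f"leaf: {node}")
    return path

print(prediction_path(tree, {"income": 2.0, "age": 55.0}))
# income <= 3.0, then age > 40.0, reaching the medium_risk leaf
```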

AryaXAI - Prediction path

To get the prediction path for a particular case via the SDK:


case_info.explainability_prediction_path()

Retraining the XAI model

To retrain the explainability model, simply modify the data settings by selecting the 'Update config' option under 'Data settings'. Whenever the settings are modified, the explainability model is retrained. The XAI model can be retrained as many times as needed to achieve the best correlation between the model's predictions and its functioning.
