Global Explanations

Global feature importance aggregates feature importance scores across the entire baseline dataset. This explanation provides a comprehensive understanding of how each feature contributes to the model's predictions on a broader scale.

To get the global feature importance of the currently active model:


modelinfo.feature_importance()
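
The return type isn't documented here; as a minimal sketch, assuming the call returns a dict-like mapping of feature names to importance scores, you could list the top contributors like this:

# Hypothetical: assumes feature_importance() returns {feature_name: score}
importance = modelinfo.feature_importance()
top_features = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, score in top_features:
    print(f"{name}: {score:.4f}")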

SHAP is used by default. Other XAI options such as LIME and CEM, as well as our proprietary XAI method 'Backtrace', will be available soon.

SHAP exposes 'Data Sampling size' as a customizable parameter. It can be set while training a new model in AryaXAI AutoML.
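
For intuition, here is a minimal sketch of how global SHAP importance is typically computed with the open-source shap library. This is illustrative only, not AryaXAI's internal implementation; `model` and `X_baseline` are assumptions standing in for a fitted tree-based model (binary or regression) and the baseline DataFrame:

import numpy as np
import shap

# Sample the baseline data; this mirrors the 'Data Sampling size' setting.
X_sampled = shap.sample(X_baseline, 100)

# TreeExplainer suits tree-based models; other models need other explainers.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sampled)  # shape: (n_rows, n_features)

# Global importance = mean absolute SHAP value per feature across all rows.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X_sampled.columns, global_importance),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.4f}")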

The function below visualizes the prediction path taken by the model. If the graph does not appear, retrain on a larger compute instance; the call may fail when there is not enough compute.


modelinfo.prediction_path()
# The tree prediction path the model used for prediction (only for tree-based models)
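
For intuition on what a prediction path is, here is a minimal sketch using scikit-learn's decision_path on a plain decision tree. This is illustrative only and separate from the prediction_path() call above:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# decision_path returns, per sample, the tree nodes traversed to reach a leaf.
node_indicator = tree.decision_path(X[:1])
print("Nodes visited by the first sample:", node_indicator.indices)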