
ML Explainability

Introduction

AryaXAI's ML Explainability toolkit allows you to easily explain your models using multiple methods, such as highlighting feature impact, observations, and similar cases.

The sophistication and complexity of AI systems have evolved to the extent that they are difficult for humans to comprehend. Understanding the model's decision-making process and identifying any biases is crucial from both a regulatory and model-building perspective. Business users require a clear understanding of the model's operations and validation before using it in production.

Ensure your models work as you truly intend with AryaXAI.

AryaXAI offers multiple methods for XAI:

- Feature importance using 'Backtrace': for deep learning models (Global & Local)

- Feature importance using 'SHAP' (Global & Local)

- Decision path visualization for tree-based models (Global & Local)

- Observations as explanations (Local)

- Similar cases (Local): a minimal retrieval sketch follows this list
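
The 'similar cases' method surfaces the training observations that most closely resemble the case being explained. The exact retrieval logic is not documented here, so the following is only a minimal sketch of the general idea, assuming a scikit-learn nearest-neighbour index over standardized tabular features; the dataset, feature values, and neighbour count are illustrative.

```python
# Minimal sketch of "similar cases" retrieval.
# The data and k=3 are illustrative placeholders, not AryaXAI's implementation.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Training observations (rows = cases, columns = model features)
X_train = np.array([
    [25, 40_000, 1],
    [47, 82_000, 0],
    [33, 61_000, 1],
    [52, 95_000, 0],
])

# Standardize so distances are not dominated by large-valued features
scaler = StandardScaler().fit(X_train)
index = NearestNeighbors(n_neighbors=3).fit(scaler.transform(X_train))

# Case to explain: return its closest historical observations
case = np.array([[30, 58_000, 1]])
distances, row_ids = index.kneighbors(scaler.transform(case))
print("Most similar training cases:", row_ids[0], "distances:", distances[0])
```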

Feature importance

Feature importance is one of the standard methods in machine learning (ML) explainability for understanding the contribution of each input feature to a model's predictions. It gives a high-level overview of how the model uses its features to arrive at the prediction output and helps debug issues in the model.

By examining feature importance, users can identify which variables are most influential in the model's decision-making process. This insight aids in understanding the model's behavior, identifying potential biases, and improving overall interpretability and trustworthiness.

AryaXAI offers feature importance analysis at two levels:

- Global Feature importance: This assessment evaluates the significance of each feature across an entire dataset or project. It provides a comprehensive understanding of how various features contribute to the model's predictions or outcomes on a broader scale.

- Local explanations (at case level): This evaluates the significance of features for individual predictions or instances within the dataset, showing how specific features contribute to the model's decision for each case or prediction outcome. The analysis is more granular, revealing how the model uses different features to arrive at its prediction on a case-by-case basis. The sketch after this list illustrates the distinction.
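
To make the global/local distinction concrete, the sketch below uses the open-source `shap` package on a small tree model: the SHAP values for one row form a local explanation, and averaging their absolute values over the whole dataset gives a global ranking. The model and data are placeholders, and AryaXAI's own SHAP integration may differ in detail.

```python
# Sketch of local vs. global feature importance using SHAP values.
# The model, data, and feature count are placeholders, not AryaXAI internals.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Local explanation: signed contribution of each feature for a single case
print("Local importance, case 0:", shap_values[0])

# Global importance: mean absolute contribution across the dataset
print("Global importance:", np.abs(shap_values).mean(axis=0))
```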

AryaXAI uses the features defined in the data settings and creates an XAI model to derive feature importance. In the developer version, the first uploaded file serves as the training data for building the XAI model; in the AryaXAI enterprise version, the model itself can be used directly to construct the XAI model.
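
As a rough illustration of the developer-version workflow described above, the sketch below treats an uploaded file as training data, fits a stand-in XAI model on the configured features, and reads a global feature ranking from it. The file name, feature and target columns, and model choice are hypothetical placeholders, not AryaXAI's actual pipeline.

```python
# Illustrative sketch only: build a stand-in XAI model from an uploaded file.
# "training_data.csv", the column names, and the model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("training_data.csv")            # first uploaded file
features = ["age", "income", "tenure"]           # features defined in data settings
target = "churn"                                 # prediction target

xai_model = RandomForestClassifier(n_estimators=100, random_state=0)
xai_model.fit(df[features], df[target])

# Global feature importance derived from the XAI model
for name, score in zip(features, xai_model.feature_importances_):
    print(f"{name}: {score:.3f}")
```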
