
The 'View Cases' tab in ML Explainability displays a comprehensive list of all cases within the uploaded data. Users can filter cases by criteria such as Unique Identifier, start and end date, and tag.
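The filtering described above can be sketched in pandas; the column names (`unique_id`, `created_at`, `tag`) and sample values here are illustrative, not AryaXAI's internal schema.

```python
import pandas as pd

# Hypothetical case list with the filterable fields described above.
cases = pd.DataFrame({
    "unique_id": ["C-101", "C-102", "C-103"],
    "created_at": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-03-15"]),
    "tag": ["training", "production", "production"],
})

# Filter by tag and by a start/end date window.
start, end = pd.Timestamp("2024-02-01"), pd.Timestamp("2024-12-31")
mask = (cases["tag"] == "production") & cases["created_at"].between(start, end)
filtered = cases[mask]
print(filtered["unique_id"].tolist())  # ['C-102', 'C-103']
```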

For detailed analysis of each case, users can select the 'View' option, which provides a complete overview of the parameters AryaXAI uses for explainability.

Feature Importance

The 'Explainability' tab in Case view showcases local features and the feature importance plot. Users can see all features in the data that positively or negatively influence the prediction, and can adjust the number of features displayed in the plot using the bar on the left side of the plot.
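Adjusting the feature count amounts to ranking features by the magnitude of their local importance and keeping the top N. A minimal sketch, with made-up feature names and importance values (positive values push the prediction up, negative values push it down):

```python
# Hypothetical local feature-importance values for a single case.
importances = {
    "income": 0.42,
    "age": -0.15,
    "tenure": 0.08,
    "num_late_payments": -0.31,
    "credit_utilization": 0.22,
}

def top_features(importances, n):
    """Return the n features with the largest absolute importance,
    mimicking the plot's feature-count control."""
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:n]

print(top_features(importances, 3))
# [('income', 0.42), ('num_late_payments', -0.31), ('credit_utilization', 0.22)]
```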

Raw Data and Engineered Data

AryaXAI provides a slide button to switch between raw data and engineered data. This makes it easy to move between viewing the original raw data and the processed engineered data, since your model is trained on the engineered data.

Observations

Below the feature importance graph, the 'Observation' section displays a list of all observations that hold true for the particular case. Selecting the 'Advanced view' option provides additional details on the observations: the 'Success' column shows whether the observation ran successfully, and 'Triggered' shows whether the observation is relevant to the current case.

Observation score

The observation score is the sum of the feature importances of the features linked to the observation.
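The score definition above can be written out directly; the feature names and importance values here are illustrative.

```python
# Hypothetical local feature importances for one case.
feature_importance = {"income": 0.42, "age": -0.15, "tenure": 0.08}

def observation_score(linked_features, feature_importance):
    """Sum the importances of the features linked to an observation;
    features without an importance value contribute zero."""
    return sum(feature_importance.get(f, 0.0) for f in linked_features)

# An observation linked to 'income' and 'age' scores 0.42 + (-0.15).
print(observation_score(["income", "age"], feature_importance))
```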

Similar cases as explanations

'Similar cases,' also known as reference explanations, parallels the concept of citing references for a prediction. This method retrieves the training-data cases most similar to the 'prediction case.' The similarity algorithm employed depends on the plan: the AryaXAI Developer version uses the 'prediction probability' similarity method, while the AryaXAI Enterprise version offers additional methods such as 'Feature Importance Similarity' and 'Data Similarity.'
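A minimal sketch of the 'prediction probability' similarity idea: rank training cases by how close their predicted probability is to that of the prediction case. The case names and probabilities are made up for illustration.

```python
# Hypothetical predicted probabilities for training cases.
train_probs = {"case_a": 0.91, "case_b": 0.35, "case_c": 0.88, "case_d": 0.10}

def most_similar(pred_prob, train_probs, k=2):
    """Return the k training cases whose predicted probability is
    closest to the prediction case's probability."""
    ranked = sorted(train_probs.items(), key=lambda kv: abs(kv[1] - pred_prob))
    return [case for case, _ in ranked[:k]]

print(most_similar(0.90, train_probs))  # ['case_a', 'case_c']
```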

This tab showcases similar cases from past data where the prediction was either similar or nearly identical. The features are visualized in a graph, allowing for filtering based on data labels.

Below the graph, all similar cases are listed, and filtering options based on feature name are available. Users can also view details of any listed similar case through the 'view' option.

Prediction path

The Prediction path tab displays the path followed by tree-based models such as XGBoost or LightGBM for a particular prediction. It represents the route taken through the decision trees, showcasing which features were evaluated and the decisions made at each node until the sample reaches a leaf and a prediction is generated.
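The node-by-node traversal can be sketched with a toy decision tree; the node layout, features, and thresholds below are illustrative, not AryaXAI's or any library's internal format.

```python
# A toy decision tree: internal nodes split on a feature/threshold,
# leaves carry the prediction.
tree = {
    "feature": "income", "threshold": 50_000,
    "left": {
        "feature": "age", "threshold": 30,
        "left": {"leaf": "reject"},
        "right": {"leaf": "approve"},
    },
    "right": {"leaf": "approve"},
}

def prediction_path(node, sample, path=None):
    """Walk the tree for one sample, recording the decision at each node
    until a leaf is reached."""
    path = [] if path is None else path
    if "leaf" in node:
        return path, node["leaf"]
    value = sample[node["feature"]]
    go_left = value <= node["threshold"]
    op = "<=" if go_left else ">"
    path.append(f"{node['feature']}={value} {op} {node['threshold']}")
    return prediction_path(node["left" if go_left else "right"], sample, path)

path, leaf = prediction_path(tree, {"income": 42_000, "age": 35})
print(path, "->", leaf)
# ['income=42000 <= 50000', 'age=35 > 30'] -> approve
```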