ML Explainability

Introduction:

ML Explainability lets you explain your models in multiple ways: feature importance, observations, and similar cases. To learn more about these techniques, please go through our 'Resources' section.

Explainable AI has been receiving a lot of attention as AI adoption increases. The sophistication and complexity of AI systems have evolved to the point where they are difficult for humans to comprehend. Model complexity and explanation complexity go hand in hand: the more complex a model is, the harder it is to explain.

From a regulatory standpoint, it is imperative to understand how the model reached a particular decision, and whether or not that decision was the result of any bias. It also matters from a fundamental model-building standpoint, especially for the ML team: if one can understand how the model functions behind the scenes, it becomes easier to find ways to improve it.

From a business user standpoint, confidence in the system needs to be built by providing a clear understanding of how the model works and what its scope boundaries are. The product also needs to be validated before it can be used in production.

AryaXAI offers multiple methods for XAI:

  • Feature importance using 'Backtrace': for deep learning models (Global & Local)
  • Feature importance using 'SHAP' (Global & Local); see the sketch after this list
  • Decision path visualization: for tree-based models (Global & Local)
  • Observations as explanations (Local)
  • Similar cases (Local)
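
To make the distinction between global and local feature importance concrete, here is a minimal sketch using the open-source `shap` library on a public dataset. It is illustrative only: the model, dataset, and library calls are assumptions for the example and do not show AryaXAI's own interface.

```python
# Illustrative sketch of SHAP-based feature importance (not the AryaXAI API).
import numpy as np
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Assumption: a small tree-based model on a public tabular dataset.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature across the dataset.
global_importance = np.abs(shap_values).mean(axis=0)

# Local explanation: per-feature contributions for a single prediction (row 0).
local_explanation = shap_values[0]
```

In this sketch, the global view ranks features by their average impact across all predictions, while the local view shows how each feature pushed one specific prediction up or down.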
