AryaXAI in ML Lifecycle
In addition to well-tested open-source XAI algorithms, AryaXAI uses proprietary XAI algorithms to provide true-to-model explainability for complex techniques like deep learning.
Provides multiple types of explanations
Different use cases require different types of explanations for different users. AryaXAI can also provide other kinds of explanations, such as similar cases and what-if scenarios.
Plug-and-play techniques that work
You can deploy any XAI method on AryaXAI by simply uploading your model and sending the engineered features. The framework knows how to handle various datasets.
Near real-time explainability through API
Depending on the complexity of your model, AryaXAI can provide near real-time explanations through its API. You can integrate them into the downstream systems of your choice.
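As a rough illustration of such an integration, the sketch below assembles an explanation request for a single inference sample. The endpoint URL, payload fields, and model identifier are all assumptions for illustration, not AryaXAI's documented API.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only, not AryaXAI's documented API.
ENDPOINT = "https://api.example.com/v1/explain"

def build_explain_request(model_id, features):
    """Assemble an explanation request for one inference sample.

    The payload shape (model_id + engineered feature values) is an
    assumption about what a near real-time explanation call might carry.
    """
    payload = {"model_id": model_id, "features": features}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_explain_request("credit-risk-v3", {"income": 52000, "tenure": 4})
print(req.get_full_url())
print(json.loads(req.data)["model_id"])
```

A downstream system would send this request at inference time and surface the returned explanation alongside the prediction.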
Decode, Debug & Describe your models
AI solutions are evaluated not only on performance but also on their ability to be transparent and explainable to all key stakeholders: Product/Business Owners, Data Scientists, IT, Risk Owners, Regulators, and Customers. This is fundamental for mission-critical AI use cases and can be the deciding factor for adoption.
AryaXAI's explainable AI methods go beyond open-source approaches to ensure models are explained accurately and consistently. For complex techniques like deep learning, AryaXAI uses a patent-pending algorithm called 'backtrace' to provide multiple explanations.
Similar cases: References as explanations
Training data is the basis of model learning. Identifying similar cases from the model's point of view can explain why and how the model arrived at a prediction.
AryaXAI uses model explainability to identify training samples similar to the inference sample and provides a list of such cases. In-depth analysis lets users map data similarities, feature importance, and prediction similarities. References as explanations can be a powerful source of evidence for interpreting model behaviour.
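The core idea can be sketched as ranking training rows by a distance to the inference sample that is weighted by feature importance, so that features the model relies on count more toward "similarity". The data, weights, and distance choice below are invented for illustration and are not AryaXAI's algorithm.

```python
import math

# Toy training set: each row is (age, income_k, utilization).
# Values are invented; a real deployment would use the model's own training data.
training = [
    (34, 52, 0.41),
    (29, 48, 0.38),
    (55, 120, 0.10),
    (41, 60, 0.55),
]

# Hypothetical feature-importance weights (e.g. from an explainer).
weights = (0.2, 0.5, 0.3)

def weighted_distance(a, b, w):
    """Euclidean distance where each squared difference is scaled by importance."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def similar_cases(sample, rows, w, k=2):
    """Return indices of the k training rows closest to the inference sample."""
    ranked = sorted(range(len(rows)),
                    key=lambda i: weighted_distance(sample, rows[i], w))
    return ranked[:k]

print(similar_cases((31, 50, 0.40), training, weights))  # → [1, 0]
```

The returned indices point back into the training data, giving the user concrete reference cases to inspect alongside the prediction.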
What-if: Run hypothetical scenarios right from GUI
Validate and build trust in the model by simulating input scenarios. Understand what features need to be changed to obtain desired predictions.
AryaXAI allows users to run multiple 'What-if' scenarios directly from the GUI. The framework queries the model in real time and shows the predictions to users. This is not limited to advanced users; business users can run scenarios without any technical training or skills. Such 'What-if' scenarios provide quick insights into the model.
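A what-if scenario boils down to re-scoring the same input with one feature changed and comparing the predictions. The toy logistic model below stands in for the deployed model the GUI would query; its coefficients and features are invented for illustration.

```python
import math

def approval_score(income, debt_ratio):
    """Toy logistic scoring model -- a stand-in for the deployed model;
    the coefficients are invented for illustration."""
    z = 0.00005 * income - 4.0 * debt_ratio
    return 1.0 / (1.0 + math.exp(-z))

# Baseline applicant
baseline = approval_score(income=40000, debt_ratio=0.6)

# What-if: same applicant with debt ratio reduced to 0.3
what_if = approval_score(income=40000, debt_ratio=0.3)

print(f"baseline={baseline:.3f}  what-if={what_if:.3f}")
```

Comparing the two scores shows the user how much the changed feature moves the prediction, which is the insight a what-if run delivers.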
Develop safeguard controls
Perform in-depth audits around ML artefacts
Understand the value of your ML models, uncover business risks and develop safeguard controls to avoid risks. Integrate fairness, explainability, privacy, and security assessments across workflows.
Perform in-depth audits around ML artefacts
For mission-critical use cases, ML auditing ensures the systematic execution of processes to identify associated risks and develop safeguard controls to avoid them. AryaXAI methodically captures auditable artefacts during training and production.
Organizations can define audit cadences and have AryaXAI execute them automatically. Users can review these reports anytime, anywhere. Key observations can be shared between teams to inform all stakeholders about gaps and drive the necessary corrections. And when critical information must be shared with regulators or compliance teams, AryaXAI helps aggregate it in a jiffy.
- Which data is selected?
- The rationale behind the choice
- Selection authority
- Details about data removal
- Bias mitigation
- Trueness representation
- Selection of retraining data
- Analysis of errors
- Case-wise true labels
- Technique selection
- Validating explainability
- Review of global explanations
- Sufficiency of explanations
- Model challenger definition
- Records of challenger performance
- Sufficiency of test scenarios
- Failure analysis
- Usage risk estimation
- Business risk
- Defining success criteria
- Simulated usage
- Benefits estimation
- Sign-off authorization
- Predictions record
- Events of data drift
- Events of model drift
- Records of biased predictions
- Records of model degradation
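To make the artifact list above concrete, the sketch below shows one way such auditable events could be captured as structured, exportable records. The class name, field names, and metric values are all assumptions for illustration, not AryaXAI's schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditArtifact:
    """Hypothetical audit record -- field names are illustrative only.
    Each monitored event (drift, bias, degradation) becomes one entry."""
    category: str   # e.g. "data-drift", "model-drift", "bias"
    model_id: str
    detail: dict
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log = []
log.append(AuditArtifact("data-drift", "credit-risk-v3",
                         {"feature": "income", "psi": 0.27}))
log.append(AuditArtifact("model-drift", "credit-risk-v3",
                         {"auc_drop": 0.04}))

# Serialize the captured artefacts for a regulator/compliance export.
report = json.dumps([asdict(a) for a in log], indent=2)
print(report)
```

Keeping each event as a timestamped record is what makes scheduled audit reports and regulator exports possible later.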
Simplify management and standardize governance
Preserve business and ethical interests by enforcing policies on AI
Enhance the applicability of the ML model with contextual policy implementation across multiple teams and stakeholders
Define responsible requirements across a variety of policies
Version controls to measure and manage AI governance, risk, and compliance
Streamline monitoring and management of policy changes with version controls. Track and retrace policy changes across roles.