Introduction

Given the prevalence of bias and vulnerabilities in ML models, understanding how a model behaves is crucial before deploying it to production. Generally, a model is said to demonstrate bias if its decisions unfairly impact a protected group without justifiable reason.

AryaXAI's bias monitoring functionality detects potential bias in a model's output. The platform also provides analytical and reporting capabilities that can help determine whether the bias is justified.

Basic concepts (see the illustrative sketch after this list):

  • Baseline: Users can define the baseline based on a ‘Tag’ or on a segment of data selected by ‘date’.
  • Frequency: Users can define how frequently the monitoring metrics are calculated.
  • Alerts frequency: Users can configure how frequently they want to be notified about alerts.
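
To make these settings concrete, here is a minimal sketch of how such a configuration could be expressed as a plain Python dictionary. The field names and values (baseline, monitoring_frequency, alert_frequency, "training_data") are illustrative assumptions, not the AryaXAI API or SDK schema.

```python
# Minimal sketch of a bias-monitoring configuration.
# All field names and values are illustrative assumptions,
# not the actual AryaXAI schema.
bias_monitoring_config = {
    # Baseline: a reference slice of data, defined either by a tag
    # or by a date-based segment.
    "baseline": {"type": "tag", "value": "training_data"},
    # Frequency: how often the monitoring metrics are recalculated.
    "monitoring_frequency": "daily",
    # Alerts frequency: how often alert notifications are sent.
    "alert_frequency": "weekly",
}
```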

To monitor your model for bias with AryaXAI (a hypothetical configuration sketch follows these steps):

  • Select the Baseline tag 
  • Select the Baseline true and predicted labels
  • Select the model type and date feature name 
  • Select the feature to use from the dropdown
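
As a rough illustration of how these selections fit together, the sketch below collects them into a single configuration and submits it through a stand-in helper. The helper name create_bias_dashboard, the field names, and the example column and feature names are assumptions made for illustration; they do not reflect the actual AryaXAI interface or SDK.

```python
# Hypothetical sketch only: field names, example values, and the
# create_bias_dashboard helper are illustrative assumptions.
dashboard_config = {
    "baseline_tag": "training_data",            # Baseline tag
    "baseline_true_label": "loan_approved",     # True label in the baseline data
    "baseline_predicted_label": "prediction",   # Predicted label in the baseline data
    "model_type": "binary_classification",      # Model type
    "date_feature": "application_date",         # Date feature name
    "features": ["age", "gender", "income"],    # Features selected from the dropdown
}


def create_bias_dashboard(config: dict) -> None:
    """Stand-in for submitting the configuration to the platform."""
    required = {
        "baseline_tag", "baseline_true_label", "baseline_predicted_label",
        "model_type", "date_feature", "features",
    }
    missing = required - config.keys()
    if missing:
        raise ValueError(f"Missing fields: {sorted(missing)}")
    print("Submitting bias-monitoring dashboard:", config)


create_bias_dashboard(dashboard_config)
```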

Create dashboard: Dashboard Logs

Any new dashboard created for bias monitoring is listed in the Dashboard Logs, where you can view details such as the baseline and its associated tags, the dashboard's name and creation date, the owner, and so on. In the Actions column, you can expand or collapse a dashboard to show or hide its details, configure alerts based on that dashboard's configuration, or delete the dashboard from the logs.

NOTE: The detailed dashboard view in the Actions column is available only when the status shows ‘Completed’. If the status shows ‘Failed’, the reason is specified in the ‘Error’ column.

For every log listed here, users can configure automatic alerts based on that dashboard's configuration. This option is available in the 'Alerts' column.
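
As a sketch of what such an alert configuration might contain, the snippet below uses hypothetical field names (dashboard_name, alert_frequency, notify) and example values; none of these are taken from the AryaXAI product.

```python
# Hypothetical sketch of an automatic alert attached to a dashboard log.
# Field names and values are illustrative assumptions only.
alert_config = {
    "dashboard_name": "bias_dashboard_q1",     # the dashboard log the alert applies to
    "alert_frequency": "weekly",               # how often notifications are sent
    "notify": ["mlops-team@example.com"],      # recipients of the alert
}
```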