Bias Monitoring

Given the prevalence of bias and vulnerabilities in ML models, it is crucial to understand how a model behaves before deploying it to production. Generally, a model is said to demonstrate bias if its decisions unfairly impact a protected group without justifiable reasons.
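One common way to quantify such bias is the demographic parity difference: the gap in positive-prediction rates between groups. The following is a minimal, self-contained sketch of that metric, independent of the AryaXAI API; the group labels and predictions are hypothetical.

```python
# Illustrative sketch (not part of the AryaXAI package): demographic parity
# difference, a common bias metric for binary classifiers.

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        # Positive-prediction rate for members of group g
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical binary decisions and protected-group labels
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A value near 0 suggests the model issues positive decisions at similar rates across groups; a large gap is a signal worth investigating, though it does not by itself establish that the bias is unjustified.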

AryaXAI's bias monitoring functionality detects potential bias in a model's output. The platform also provides analytical and reporting capabilities that can help determine whether the bias is justified.

Monitor bias in your models through the AryaXAI Python package:


# Bias monitoring dashboard
project.get_bias_monitoring_dashboard({
    "base_line_tag": ["XGBoost_default_testdata"],    # data tag(s) to evaluate
    "baseline_true_label": "charges",                 # ground-truth column
    "baseline_pred_label": "Predicted_value_AutoML",  # model prediction column
    "model_type": "classification"
})

Use Python's built-in help function to view the parameters accepted by the bias monitoring dashboard method:


help(project.get_bias_monitoring_dashboard)