
Model performance monitoring

Basic concepts: 

  • Baseline: Users can define the baseline based on a ‘Tag’ or on a segment of data selected by ‘date’
  • Frequency: Users can define how frequently they want to calculate the monitoring metrics
  • Alerts frequency: Users can configure how frequently they want to be notified about the alerts
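The three concepts above can be pictured as a single monitor configuration. The sketch below is illustrative only; the key names (`baseline`, `frequency`, `alert_frequency`) are assumptions for explanation, not the actual AryaXAI payload schema:

```python
# Illustrative sketch of a monitor configuration combining the three
# basic concepts. All key names here are hypothetical, not the real
# AryaXAI payload schema.
monitor_config = {
    # Baseline: either a tag or a date-bounded segment of the data
    "baseline": {"tag": "training", "date_range": None},
    # Frequency: how often the monitoring metrics are recalculated
    "frequency": "daily",
    # Alerts frequency: how often you are notified about triggered alerts
    "alert_frequency": "weekly",
}

assert monitor_config["frequency"] in {"daily", "weekly", "monthly", "quarterly", "yearly"}
```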

Model performance dashboard

The model performance dashboard lets you analyze your model's performance over time or between model versions. This analysis is displayed across various parameters for the predicted and actual performance.

Through GUI

The model performance report displays various metrics, such as accuracy, precision and recall, along with quality metrics.

AryaXAI - Model performance monitoring

Alerts and Monitors

From here, you can create and view customized alerts for data drift, target drift and model performance through the alerts dashboard. Select ‘Create alerts' in the 'Monitors' tab, define the baseline and current data parameters as above, and set the frequency of alerts, which can be daily, weekly, monthly, quarterly or yearly.

To create new alerts, go to:

 ML Monitoring (Main menu on left) > select ‘Monitoring’ (from the sub-tabs) > click ‘Create Alerts’

All newly created and existing alerts are displayed on this dashboard, along with details such as the trigger creator, name, type and options.

Through SDK

To access the Model performance dashboard through SDK:


project.get_model_performance_dashboard()

You can use the help function to get all parameters and payloads for the Model performance dashboard:


help(project.get_model_performance_dashboard)

To get the Model Performance of the 'Active' model through the AryaXAI SDK:


project.get_model_performance()

Model Performance Monitor

Through GUI

Be informed about your model performance proactively using 'monitors'.

AryaXAI - Setting Model performance monitors
  • Select model type: Classification/Regression.
  • Select model performance metrics: You can define any of the following performance metrics - accuracy, F1, AUC-ROC, precision and recall.
  • Select the baseline and current: Use the tags to define the baseline and current. 'Current' is your production data if you are tracking drift in your production data.
  • Select predicted & true label: Map the appropriate feature for 'Baseline predicted/true label' & 'Current predicted/true label'.
  • Segmenting the baseline or current: You can use date features to further segment your baseline. You can also use 'Time period in days' to dynamically select the most recent 'n' days as the current data. If 'Time period in days' is set, the drift calculation covers that many days, with the day the drift is calculated as the end date.
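The dynamic window selected by 'Time period in days' amounts to simple date arithmetic: with 'n' days configured, the current data spans the n days ending on the day the drift is calculated. The function below is a sketch of that idea, not AryaXAI's internal implementation:

```python
from datetime import date, timedelta

def current_window(time_period_in_days: int, run_date: date) -> tuple:
    """Return the (start, end) dates of the dynamic 'current' segment.

    The end date is the day the drift is calculated; the window covers
    the most recent n days. Sketch only, not AryaXAI's internal logic.
    """
    end = run_date
    start = end - timedelta(days=time_period_in_days - 1)
    return start, end

# A 7-day window ending on 2024-03-10 starts on 2024-03-04
start, end = current_window(7, date(2024, 3, 10))
```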

Tip: If you have deployed multiple model versions, you can append each model's predictions to the same dataset as new features instead of creating duplicate copies of the dataset, and use these features to track model performance.

Alert Report

The ‘Alert’ tab (beside the Monitoring sub-tab) displays the list of alerts that have been triggered. Clicking ‘View trigger info’ displays the Trigger details, such as the current data size, data drift triggered, drift percentage, etc.

Notifications

If a drift is identified, you'll receive an alert in both the web app and by email at the specified frequency.

Web app alerts: Any triggered alert is displayed as a notification in the top right corner. You can view all notifications from the tab and clear them.

AryaXAI: Notifications

Email Alerts: The admin of the workspace will receive an email if a drift is identified.

Through SDK

You can also use the SDK to create and manage monitoring triggers with the following functions:

To list all monitoring triggers created:


# list monitoring triggers
project.monitoring_triggers()

You can also use the help function to see how to create monitoring triggers for Data Drift, Target Drift, and Model Performance using a payload:


help(project.create_monitoring_trigger)
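As a rough illustration, a model-performance trigger payload would gather the same fields you set in the GUI. The key names below are assumptions for illustration only; run the help() call above to see the actual payload expected by create_monitoring_trigger:

```python
# Hypothetical payload mirroring the GUI fields; the key names are
# assumptions — consult help(project.create_monitoring_trigger)
# for the real schema.
trigger_payload = {
    "trigger_name": "perf_monitor_v1",          # display name for the trigger
    "trigger_type": "Model Performance",        # Data Drift / Target Drift / Model Performance
    "model_type": "classification",             # classification or regression
    "model_performance_metric": "accuracy",     # accuracy, F1, AUC-ROC, precision or recall
    "base_line_tag": ["training"],              # tag defining the baseline data
    "current_tag": ["production"],              # tag defining the current data
    "frequency": "daily",                       # how often the trigger is evaluated
}

# project.create_monitoring_trigger(trigger_payload)  # uncomment with a live project
```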

Additional functions:


# delete monitoring trigger
project.delete_monitoring_trigger('test trigger 5')

# Fetch details of executed Triggers
project.alerts()

Bias Monitoring

Bias monitoring plays a critical role in addressing and mitigating biases, ensuring that the system makes equitable and fair decisions across diverse groups of users or subjects.

Through GUI

To monitor your model for bias:

  • Select the Baseline tag, and the Baseline true and predicted labels
  • Select the feature to use from the dropdown
  • Select the Model type - Classification or Regression

Through SDK

You can also monitor bias in your models through the AryaXAI Python package:


# bias monitoring dashboard
project.get_bias_monitoring_dashboard({
    "base_line_tag": ["eda"],
    "baseline_true_label": "charges",
    "baseline_pred_label": "charges",
    "model_type": "classification"
})

Help function to get Bias monitoring dashboard:


help(project.get_bias_monitoring_dashboard)
