If you don't have your own model, you can create a new one through AutoML.
Whenever a user creates a new project, a default AutoML model, 'XGBoost_default', is trained for default prediction and explainability. If users want to use their own model for explainability, they can do so by:
Uploading their own model, OR
Training a model using the built-in modelling techniques available in AryaXAI (tree-based, probabilistic, and linear), namely:
- XGBoost
- LGBoost
- CatBoost
- RandomForest
- SGD
- Logistic Regression
- GaussianNaiveBayes
You can fine-tune these models and their hyperparameters.
Through GUI
Model upload
When uploading their own models, users need to:
Access the 'Settings' section and proceed to the 'Model Upload' tab
Define the model name: Provide a name for your model.
Specify Model Architecture: Indicate whether the model is based on machine learning or deep learning (deep learning support is coming soon).
Specify Training and Testing Data: Provide the datasets to be used for training and testing the model.
Note: You can only upload a new model within an existing project where the initial data iteration has been uploaded. Ensure that all features used in your model are present in the data you have uploaded, i.e., the data points in the model and the uploaded file need to be the same.
Note: If the training and testing data are not provided, AryaXAI will automatically select a random sub-sample from the training data to use as testing data. This allows AryaXAI to benchmark your uploaded model against a subset of the training data.
To activate the new model for use, you must manually select it under 'Options' within the 'Model Versions' tab.
Model Version
Once your model is successfully trained, a comprehensive list of all versions is accessible and listed in the 'Model Versions' tab.
The model version dashboard provides users with a comprehensive list of all models, offering an overview and management capabilities. This list includes:
- Model name and status
- Creation and update details
- Accuracy, precision, recall, and F score metrics
- Actions column, which allows users to activate or delete models.
Model Performance
The performance of the trained model is tracked under the ‘Model Performance’ tab.
Train Model
To train a model:
Select the desired modelling technique and click ‘Train’
Set the Data Configuration to match the settings used during initial data upload. You will need to select the Training tags and Testing tags from the dropdown.
Select 'Save initial configuration' and 'Save Feature Encoding' for consistency and accuracy in the model training process
Customize the model parameters to tailor the training process according to specific requirements
Set the Explainability parameters. Select the Explainer Shape and set the data sample percentage
Select the server to run your remote environment on
After configuring data and model parameters, select 'Train model' to start the training process
Once the model is successfully trained, a comprehensive list of all versions is accessible and listed in the 'Model Versions' tab.
Note: A maximum of 10 models can be trained within a workspace. Within a project, only 2 models can be trained. (Considering workspace limitations, a maximum of 5 projects can be created.)
You need to activate the new model manually under ‘Options’ in the ‘Model Versions’ tab.
Upon activating a model, detailed information becomes available within the 'Model Info' section, providing a comprehensive overview of the model, which includes:
- Model name
- Model Type
- Model Params
- Data tags
- Modelling info, which shows the details used for training the model
Inferencing
To derive inferences for any tag using a model you have trained and activated, AryaXAI offers the ‘Inferencing’ section. In this section, you can run predictions using the activated model on specific files or tags.
Post-inference execution, the results are stored as tags, which are listed in the 'Inferencing Files' section. The list displays essential details such as the model name, the creation date of the inference, and performance metrics like accuracy, recall, and precision, among others. You also have the option to download this data if needed.
The same tags generated from the inference process are accessible within the ML Monitoring section, providing a unified view of the inferences made by the model.
Through SDK
Additionally, you can use the AryaXAI Python package for tasks like training models, activating models for a project, model inferencing, or retrieving case info on projects with just a few lines of code.
Upload a Model:
project.upload_model()
Help function to upload a model:
help(project.upload_model)
Train Model:
To train a model and retrieve a list of all trained models within the project:
# train model
project.train_model(model_type='RandomForest')
# all trained models
project.models()
# train another model version with the current config (if no config is passed)
project.train_model()
# available models to train
project.available_models()
Help function to train a model:
# Help on method train_model
help(project.train_model)
Delete a trained model:
project.remove_model()
Inferencing:
The function below performs predictions on testing data using the default XGBoost model. You can pass any model you prefer or leave it blank to use the default model.
# model inference (model_name is optional and defaults to the active model for the project)
testresults = project.model_inference(tag="Training", model_name="XGBoost_default")
# the inferencing results are also stored as a tag, e.g. 'Testing_XGBoost_v1_Inference'
project.all_tags()
# quick overview of the inference results; the data includes additional columns:
# Predicted_value_AutoML, Prediction_category_AutoML, pred_proba_AutoML
testresults.head()
Help function on model inferencing:
# Help on method model_inference
help(project.model_inference)
Additional functions:
# get Active model details
modelinfo = project.model_summary()
# set model active for project
project.activate_model('model_name')
# Model Performance of Active Model
project.get_model_performance()
# get current data config details. If you don't modify the data settings, any future fine-tuning will use the same data settings
modelinfo.data_config()
# remove model
project.remove_model('model_name')
# project Cases
project.cases() # last 20 cases
project.cases(unique_identifier='A11')
# project Case Info
case=project.case_info(unique_identifier='A11',tag='training')
Following the model's training phase, before synthetic data generation, AryaXAI offers a feature called 'Prompting' that allows you to establish specific conditions for the data generation process.
Through GUI
To create a new prompt, navigate to the 'Prompting' tab within Synthetic AI and click the 'Create Prompt' button located on the right.
Fill in the Prompt name and specify features while setting conditional operators.
Add the Feature value as required, then save the prompt.
The created prompt will appear in the Prompting tab, displaying its name, creation and update details, and status. Here, you can deactivate or delete the prompt. An 'Active' status indicates that the conditions specified in the prompt will be applied during the generation of new synthetic data.
Through SDK
List existing prompts
project.get_synthetic_prompts()
Create Synthetic Prompts
project.create_synthetic_prompt(
name='Grade A synths',
expression='(grade = A)'
)
Following model training, the 'Synthetic Models' tab displays the trained models along with their respective status. This comprehensive list provides key information such as the model's Name, creator, creation date, overall quality score, Column shapes, and Column pair trends.
In the 'Options' column within the list, selecting 'Show' unveils additional model details, including:
- Synthetic Data Quality
- Training: Detailed training logs and associated data tags. If the model training fails, the log will provide reasons for the failure.
- Synthetic Data generation
- Anonymity test
Clicking on any of these sections reveals further details. Using the saved model, you can generate additional synthetic data.
Through SDK
To generate data and analyze the synthetic model quality via SDK:
# select the synthetic model you want (e.g. 'CTGAN_v14' or 'CTGAN_v1')
model = project.synthetic_model(model_name='CTGAN_v1')
# fetch its data quality
model.get_data_quality()
Synthetic Data
Through GUI
In the 'Synthetic Data' tab, you'll find the initial data generated post-model training. This list showcases the data's creation date and time, along with the following details:
- Overall quality score: Represents the mean of Column Shapes and Column pair trends, providing an overview of data quality (see the small example after this list).
- Column Shapes: Indicates the similarity between uploaded and synthetic data for individual columns. A higher score implies closer resemblance: a score around 0.1 signifies significant divergence, while a score between 0.5-0.7 suggests considerable similarity.
- Column pair trends: Reflects similarity between uploaded and synthetic data for pairs of columns.
- PSI plot: Visualizes data distribution congruence, followed by the count of rows and features used in generating the synthetic data.
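As a minimal illustration of how the overall score combines its two components (the numbers below are made-up example values):
# overall quality score = mean of Column Shapes and Column pair trends
column_shapes = 0.82        # example value
column_pair_trends = 0.76   # example value
overall_quality_score = (column_shapes + column_pair_trends) / 2  # -> 0.79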
Synthetic data quality
This section displays gauge charts for the Overall quality score, Column shapes, and Column pair trends, along with the data stability graph.
Training
This section displays detailed training logs and associated data tags. If the model training fails, the log will provide reasons for the failure.
Synthetic Data generation
To generate additional synthetic data rows:
Visit the 'Synthetic Models' tab and choose 'Show' for your preferred model.
Scroll down to locate 'Synthetic Data generation' under 'Training.'
Specify the number of 'Synthetic Rows' required and click 'Generate.'
AryaXAI will store the newly generated data in the 'Synthetic Data' tab, identified under the same naming convention with '_1.'
Anonymity test
The Anonymeter is a sophisticated statistical system designed to evaluate privacy risks in synthetic tabular datasets. It includes evaluators that assess the probability of identifying individuals, linking data, and making inferences. These evaluations are crucial for identifying potential risks to data donors after publishing a synthetic dataset.
To perform an Anonymity test on your data, select the Auxiliary columns in the 'Aux Columns' dropdown to compare data values. Choose tags from the 'Control tags' dropdown that were not utilized during training and click 'Generate Anonymity score.'
Upon successful execution, the screen displays the metric values associated with Privacy Evaluation. AryaXAI measures this on four metrics:
Univariate: Looks at individual variables in isolation
Multivariate: Considers the combined effect or correlation among various attributes
Linkability: Focuses on assessing the risk of connecting or linking sensitive information across different datasets or sources
Inference: Involves deducing or predicting sensitive details by analyzing patterns, correlations, or statistical relationships present within the data
Through SDK
To generate Synthetic Data via SDK:
model.generate_synthetic_datapoints(1000)
To get the Population Stability Index (PSI) plot for synthetic model data via SDK:
model.plot_psi()
To fetch existing anonymity scores for model synthetic data:
To generate synthetic data in AryaXAI, the initial step involves training a 'Synthetic model.' This model creates initial data based on the uploaded training data. After assessing and approving the quality of this generated data, users can proceed to produce additional synthetic data.
Through GUI
To begin, the model needs training. Follow these steps:
Navigate to the 'Synthetic AI' tab in the Main menu (on the left).
Switch to the 'Train Model' tab at the top.
Select your desired model and click 'Train' to commence the training process. You will be redirected to 'Data Configuration' for customization, where you can set the 'Initial Configuration' and 'Model Parameters.'
In the 'Data Configuration' section:
Under 'Initial Configuration,' select the relevant data tag from the dropdown for creating synthetic data. Exclude specific features if needed and click 'Save initial configuration.'
Proceed to 'Model Parameters' and input details such as Batch size, Early stopping patience, Early stopping threshold, Epochs, Model type, Random state, and Tabular config
Select the custom server: Choose a machine to run your remote environment on, and select 'Train Model'
After completing the above steps, await the 'Model Training complete' notification.
Through SDK
Help function method train_synthetic_model:
help(project.train_synthetic_model)
Define parameters for your synthetic model:
feature_include = ['feature_1', 'feature_2']  # example list of features used for training/generating synthetic data
data_config = {
    "tags": ["Training"],
    "feature_include": feature_include
}
hyper_params = {
    "epochs": 2,        # number of passes of the data through the model (more is better, but slower); max 100 supported
    "test_ratio": 0.2   # fraction of the data kept aside for testing
}
project.train_synthetic_model(
    model_name='CTGAN',  # CTGAN / GPT2 models are available
    data_config=data_config,
    hyper_params=hyper_params
)
To fetch trained models:
project.synthetic_models()
Retrieve the available synthetic custom servers provided by the AryaXAI library:
Synthetic data refers to computer-generated information used to enhance or substitute real data, serving to refine AI models, safeguard sensitive information, and address bias concerns.
In today's data-driven landscape, synthetic data has become essential for testing and training AI models. It offers cost-effective production, automatic labeling, and circumvents logistical, ethical, and privacy challenges associated with using real-world data for training deep learning models.
While synthetic data serves as a valuable technique for model alignment, its effectiveness depends on the quality of the generated datasets. AryaXAI provides advanced 'Synthetic AI' techniques such as GPT-2 and CTGAN, enabling the creation of high-quality synthetic datasets.
To know more about synthetic AI functionality in AryaXAI, refer:
After conducting stress testing, users may uncover various scenarios where models fail, posing significant business continuity risks. Additionally, each business typically has specific guidelines they wish to enforce on models. In AryaXAI, these guidelines are defined as 'Policies'.
Policies serve as rules or guidelines that can override model predictions. The framework implements these policies on the models, adhering to the instructions provided by the user.
Creating policies: GUI
To create a new policy in AryaXAI:
Navigate to 'Policies' in the main menu on the left and select 'Create New Policy'.
Define the policy and specify the feature (data point) to which the policy applies. Choose the conditional operators (e.g., not equal to, equal to, greater than, less than) and set the current expression.
Add the policy statement, select the decision input, specify the desired decision value for the final prediction, and click 'Save'.
All created policies are showcased on the Policy dashboard, where you can easily Activate/Deactivate, edit, or delete them.
When viewing cases, access the 'Policies' tab (ML Explainability > View cases > 'view' under the Options column) to review the policy details for the particular case.
Here, ‘Model Prediction’ is the original model prediction and ‘Final prediction’ is the overridden prediction based on the custom rules defined.
Creating policies: SDK
project.create_policy()
Help function to create a new policy:
help(project.create_policy)
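For orientation, here is a minimal sketch of what a create_policy call might look like. The keyword names and values are assumptions modelled on the GUI fields above (policy name, expression, decision input, decision value); run help(project.create_policy) to confirm the actual signature.
# hypothetical keyword names and values -- confirm with help(project.create_policy)
project.create_policy(
    policy_name='Reject grade G applications',  # hypothetical policy name
    expression='(grade = G)',                   # hypothetical conditional expression
    decision='prediction',                      # hypothetical decision input
    decision_value='Rejected'                   # hypothetical value that overrides the final prediction
)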
Additional functions:
#View policies for project
project.policies()
#Delete Policy
project.delete_policy()
Policy Trail
The ‘Policy Trail’ operates similarly to the ‘Observations Trail’ but is tailored specifically for policies. It logs events such as policy creation, modifications, updates, and other relevant actions within the policy management system.
This feature facilitates tracking the evolution of policies, comprehending the sequence of modifications, and identifying the individuals responsible for specific changes and their timing. It proves valuable for compliance, auditing, troubleshooting, and ensuring transparency and accountability within the system.
This section displays the global feature importance, which is the aggregation of feature importance across all the baseline data.
Through SDK
To get the global feature importance of the currently active model:
The "Observations" section serves as a powerful tool for assessing the correlation between industry knowledge and model performance. It enables subject matter experts to contribute to the explainability process by providing clear and understandable explanations to all stakeholders.
In this section, users can explore the rationale behind each prediction. By defining specific conditions or causes as "observations," users can establish a correlation between these factors and the model's predictions. This functionality facilitates a deeper understanding of causation correlations within the model's decision-making process.
Creating/ Editing Observations:
Through GUI
Go to ML Explainability > Observations.
To create a new observation, select the ‘Create observation’ button on the right.
Assign a name to the observation and utilize the drag-and-drop feature to add the expression node. Next, specify the feature (data point) for which you intend to create the observation. Choose from conditional operators such as 'not equal to,' 'equal to,' 'greater than,' or 'less than,' and input the desired feature value.
Once the operation is defined, select the linked features from the dropdown menu on the left. You can select multiple features here and write an observation statement.
View observations:
Once saved, all observations are listed in the observations tab with creation and update details. The advanced view option under the ‘Options’ column in the list provides additional details, such as who updated the observation, the observation statement, linked features, and the expression.
If any of the observations hold true for a case, it will be displayed below the case. This can be viewed at ML explainability > View cases > ‘View’ under the ‘Options’ column in the summary table.
Selecting the ‘Advanced view’ option provides additional details on the observations. The ‘Success’ column displays whether the particular observation ran successfully on the case. ‘Triggered’ will show whether the observation is relevant to the current case.
Observations score:
The observation score is the sum of the feature importance of the linked features.
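A simple illustration of that calculation (the feature importance values below are made-up examples):
# observation score = sum of the feature importance of the linked features
linked_feature_importance = {'loan_to_income': 0.18, 'credit_utilisation': 0.09}  # example values
observation_score = sum(linked_feature_importance.values())  # -> 0.27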
Observation trail
In the Observations section, all changes made to observations are systematically logged and can be accessed through the Observations Trail.
This section presents a structured table containing essential details such as the initial creation date, subsequent updates, their corresponding dates and times, and the current status of each observation.
Furthermore, the 'Options' feature within the table provides a 'Show' functionality, allowing users to access both the Current and Old Configuration data. This feature offers detailed insights into the modifications, including the user responsible for the update, the specific statement that underwent changes, the features impacted by the modification, and the exact alterations made. This comprehensive display facilitates a thorough examination of the modification history, ensuring transparency and accountability in the observation tracking process.
Through SDK:
To create an observation
project.create_observation()
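As a rough sketch only: the keyword names below are assumptions patterned on the GUI fields described above (observation name, expression, linked features, statement); check help(project.create_observation) for the actual signature.
# hypothetical keyword names -- confirm with help(project.create_observation)
project.create_observation(
    observation_name='High utilisation accounts',   # hypothetical observation name
    expression='(credit_utilisation > 0.8)',        # hypothetical conditional expression
    linked_features=['credit_utilisation'],         # hypothetical linked features
    statement='Utilisation above 80% drives risk'   # hypothetical observation statement
)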
To view observations executed for a case:
case_info.explainability_observations()
To make an observation active or inactive, or to change its parameters:
This section displays a comprehensive list of all cases within the uploaded data. Users have the ability to filter cases based on various criteria such as Unique Identifier, start and end date, and training or testing tags.
For detailed analysis of each case, users can utilize the 'View' option, which provides a complete overview of the parameters utilized by AryaXAI for explainability.
The case view displays the following tabs:
- Explainability
- Prediction path
- Raw data
- Similar cases
- Policies
Note: When defining data features, specifically the data settings, it should be noted that these become the base for explainable model training. The feature selection that is done here should align with the final features that have been used in the model.
Explainability
This tab showcases local features and the feature importance plot. Users can observe all features in the data that positively and negatively influence the prediction. Additionally, users can adjust the number of features displayed in the plot using the provided bar on the left side of the plot.
Prediction Path
The Prediction path tab displays the path followed by the Tree-based models like XGBoost or LGBoost for a particular prediction. It represents the route taken through the decision trees, showcasing which features were evaluated and the decisions made at each node until the sample reaches a leaf and a prediction is generated.
Raw and Engineered Data
In AryaXAI, there is a convenient feature that allows users to segregate raw data and engineered data using a slide button. This feature provides an easy way to switch between viewing and working with the original raw data and the processed engineered data, since your model is trained on the engineered data.
Similar cases
'Similar cases,' also known as reference explanations, parallels the concept of citing references for a prediction. This method extracts the most similar cases from the training data compared to the 'prediction case.'
This tab showcases similar cases from past data where the prediction was either similar or nearly identical. The features are visualized in a graph, allowing for filtering based on data features. From the 'Features' dropdown, select the features you want to plot in the chart.
Below the graph, all similar cases are listed, and filtering options based on feature name are available. Users can also view details of any listed similar case through the 'view' option.
Policies
After conducting stress testing, users may uncover various scenarios where models fail, posing significant business continuity risks. Additionally, each business typically has specific guidelines they wish to enforce on models. In AryaXAI, these guidelines are defined as 'Policies'.
Policies serve as rules or guidelines that can override model predictions. The framework implements these policies on the models, adhering to the instructions provided by the user.
To create a new policy in AryaXAI:
Navigate to 'Policies' in the main menu on the left and select 'Create New Policy'.
Define the policy and specify the feature (data point) to which the policy applies. Choose the conditional operators (e.g., not equal to, equal to, greater than, less than) and set the current expression.
Add the policy statement, select the decision input, specify the desired decision value for the final prediction, and click 'Save'.
All created policies are showcased on the Policy dashboard, where you can easily Activate/Deactivate, edit, or delete them.
When viewing cases, access the 'Policies' tab (ML Explainability > View cases > 'view' under the Options column) to review the policy details for the particular case.
Here, ‘Model Prediction’ is the original model prediction and ‘Final prediction’ is the overridden prediction based on the custom rules defined.
Through SDK
The default active model is 'XGBoost_default', which is the AryaXAI surrogate model. To view all available models and set a different model as active, use the commands mentioned below.
project.models() # list all available models
project.activate_model('model_name') # make any model active
Case explainability
To list the cases for a tag:
project.cases(tag='Training')
Get all cases that have already been viewed:
project.case_logs(page=1)
# get viewed case
case = project.get_viewed_case(case_id="")
To fetch explainability for a case (this uses the current 'active' model):
case_info = project.case_info('unique_identifier','tag')
# Case Decision
case_info.explainability_decision()
Help function on method case info:
help(project.case_info)
Note: If you change the active model, then the prediction and explainability will change as well.
Prediction Path
To fetch Case Prediction Path:
case_info.explainability_prediction_path()
Raw and Engineered Data
To fetch the raw data of all features for a particular case via SDK, use the following command:
case.explainability_raw_data()
Similar cases
To list all similar cases with respect to a particular case and get their data via SDK:
# List of Similar Cases wrt to a Case
case_info.similar_cases()
# Data of Similar Cases
case_info.explainability_similar_cases()
Policies
To create a new policy:
project.create_policy()
You can also use the following Help function to create a new policy:
help(project.create_policy)
View or delete policies:
# Policies which are executed for Case
case_info.explainability_policies()
#View policies for project
project.policies()
#Delete Policy
project.delete_policy()
Retraining the XAI model
To retrain the explainability model, simply modify the data settings by selecting the ‘Update config’ option in ‘Data settings’. Whenever the settings are modified, the explainability model is retrained. The XAI model can be retrained as many times as needed to achieve the best correlation between the model prediction and the model's functioning.
AryaXAI's ML Explainability toolkit allows you to easily explain your models using multiple methods, such as highlighting feature impact, observations, and similar cases.
The sophistication and complexity of AI systems have evolved to the extent that they are difficult for humans to comprehend. Understanding the model's decision-making process and identifying any biases is crucial from both a regulatory and model-building perspective. Business users require a clear understanding of the model's operations and validation before using it in production.
Ensure your models work as you truly intend with AryaXAI.
AryaXAI offers multiple methods for XAI:
- Feature importance using 'Backtrace': For Deep Learning (Local & Global)
- Feature importance using 'SHAP' (Global & Local)
- Decision path visualization (for tree-based models) (Global & Local)
- Observations as explanations (Local)
- Similar cases (Local)
Feature importance
Feature importance is one of the standard ways used in machine learning (ML) explainability to understand the contribution of each input feature in making predictions. It provides a high-level overview of how the features are used by the model to arrive at the prediction output and also helps debug issues in the model.
By examining feature importance, users can identify which variables are most influential in the model's decision-making process. This insight aids in understanding the model's behavior, identifying potential biases, and improving overall interpretability and trustworthiness.
AryaXAI offers feature importance analysis at two levels:
- Global Feature importance: This assessment evaluates the significance of each feature across an entire dataset or project. It provides a comprehensive understanding of how various features contribute to the model's predictions or outcomes on a broader scale.
- Local explanations (At case-level): Focuses on evaluating the significance of features for individual predictions or instances within the dataset. It provides insights into how specific features contribute to the model's decision-making for each case or prediction outcome. This analysis is more granular, identifying the relative importance of features for particular instances, allowing for the understanding of how the model utilizes different features to arrive at predictions on a case-by-case basis.
AryaXAI uses the features defined in the data settings and creates an XAI model to derive feature importance. In the developer version, the first uploaded file serves as the training data for building the XAI model. However, in the AryaXAI enterprise version, the model can be directly utilized to construct the XAI model.
The Monitors tab lets you easily set custom alerts to track your model's health and performance over time, ensuring your models stay on track. Once configured, AryaXAI can notify users when it detects drift in the data, enabling proactive intervention to maintain model accuracy and reliability. Users can also choose the custom compute required for the alert.
Through GUI
You can set monitors to detect data and target drift or model performance degradation. To create new monitors:
ML Monitoring (Main menu on the left) > select ‘Monitors’ (from the sub-tabs) > click ‘Create Monitor’
Assign a name for the monitor and choose the desired monitor type. Select the email list to which you want the alerts to be sent.
Specify the subsequent details, such as the baseline and current true label, tags, features, etc. Utilize tags to define the baseline data.
Utilize the date feature to further segment your baseline and set the monitoring frequency.
Note: The email list will only show the email addresses of the users who have been added to the organization.
All created monitors are displayed in a comprehensive list featuring details such as the monitor owner, creation date and time, monitor name and type, and options to manage the monitor.
The ‘View alert config’ option further displays detailed monitor info.
Note: The respective sections in ML Monitoring provide detailed steps for creating monitors for target drift, data drift, and model performance.
Alerts
The 'Alert' tab, located beside the Monitoring sub-tab, provides a comprehensive list of triggered alerts. Monitors that have been previously set up will appear as alerts once triggered. These alerts are displayed both within the web application, appearing as notifications in the top-right corner, and via email at the specified frequency.
AryaXAI alerts enable users to get detailed root cause analysis of triggered alerts and pinpoint factors contributing to model degradation. Users can set up alerts to detect data and target drift, performance degradation, anomalies, etc.
Through SDK
Create and manage monitoring triggers through the AryaXAI Python SDK. You can use the following functions:
List all monitoring Triggers created:
# list monitoring triggers
project.monitoring_triggers()
You can also use the help function to learn how to create monitoring triggers for Data Drift, Target Drift, and Model Performance using a payload:
help(project.create_monitoring_trigger)
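As a rough sketch of the payload-based call: the keys below are assumptions based on the monitor fields described in the GUI section (monitor name and type, email list, tags, frequency); use help(project.create_monitoring_trigger) to confirm the schema your SDK version expects.
# hypothetical payload keys -- confirm with help(project.create_monitoring_trigger)
payload = {
    'trigger_name': 'weekly_data_drift_check',  # hypothetical monitor name
    'trigger_type': 'Data Drift',               # Data Drift / Target Drift / Model Performance
    'mail_list': ['mlops-team@example.com'],    # hypothetical alert recipients
    'frequency': 'weekly',                      # how often the monitor runs
    'base_line_tag': ['Training'],              # hypothetical baseline data tag
    'current_tag': ['Production'],              # hypothetical current data tag
}
project.create_monitoring_trigger(payload)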
To delete the monitoring trigger or fetch details of executed triggers:
The model performance dashboard enables you to analyze your model's performance either over time or between different model versions. This analysis provides insights across various parameters, comparing predicted and actual performance.
Basic concepts:
Baseline: Users can define the baseline based on a ‘Tag’ or a segment of data based on ‘date’.
Frequency: Users can define how frequently they want to calculate the monitoring metrics
Alerts frequency: Users can configure how frequently they want to be notified about the alerts
Through GUI
Model performance dashboard
The model performance dashboard lets you analyze your model's performance over time or between model versions. This analysis is displayed across various parameters for the predicted and actual performance.
The model performance report displays metrics such as accuracy, precision, and recall, along with quality metrics.
Dashboard Logs
Any new dashboard created for the model performance monitor will be listed in the Dashboard Logs, where you can view details such as the baseline and associated tags, the creation date and name of the dashboard, the owner, etc. In the Actions column, you have options to expand or collapse the dashboard to view or hide detailed information, configure alerts based on the specific dashboard configuration, or delete the dashboard from the logs.
Note: The detailed dashboard in the Actions column will only be visible if the status shows as ‘Completed’. If the status shows ‘Failed’, the reason will be specified in the ‘Error’ column.
For all the logs listed here, users can configure automatic alerts based on the dashboard log. This option is available in the 'Alerts' column.
Model Performance Monitor
With AryaXAI, proactively identify issues with the performance of your models post-deployment by using 'Monitors'.
To create a model performance monitor:
Navigate to the 'Monitors’ tab in ML Monitoring and select 'Create Monitor’.
Assign a name for the monitor and choose ‘Model performance' as the monitor type. Select the email list to which you want the alerts to be sent.
Choose the model type and performance metric, and set the model performance threshold. For the performance metric, you can choose any of the following: accuracy, F1, AUC-ROC, precision, and recall.
From the dropdown, select the baseline true and predicted label. Utilize tags to define the baseline data
Utilize the date feature to further segment your baseline and set the monitoring frequency.
Tip: When deploying multiple model versions, consider appending the model predictions directly into the same dataset as new features, rather than creating duplicate copies of the dataset. This approach enables efficient tracking of model performance over time.
Alerts and Monitors
From here you can easily create and view customized alerts for Data drift, target drift and model performance through the alerts dashboard. For this, select ‘Create alerts' in the 'Monitors' tab and define the baseline and current data parameters like we did above and set the frequency of alerts, which can be daily, weekly, monthly, quarterly or yearly.
To create new alerts, go to:
ML Monitoring (Main menu on left) > select ‘Monitoring’ (from the sub-tabs) > click ‘Create Alerts’
All the newly created and existing alerts are displayed on this dashboard, along with details of trigger creator, name, type and options.
Through SDK
To access the Model performance dashboard through SDK:
project.get_model_performance_dashboard()
You can use the help function to get all parameters and payloads for the Model performance dashboard:
help(project.get_model_performance_dashboard)
To get the model performance of the 'Active' model through the AryaXAI SDK:
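# model performance of the active model (same call listed under 'Additional functions' in the AutoML SDK section)
project.get_model_performance()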
Given the prevalence of bias and vulnerabilities in ML models, understanding a model's operations is crucial before its deployment to production. Generally, a model is said to demonstrate bias if its decisions unfairly impact a protected group without justifiable reasons.
AryaXAI's bias monitoring functionality detects potential bias in a model's output. The platform also provides analytical and reporting capabilities that can help determine whether the bias is justified.
Through GUI
To monitor your model for bias with AryaXAI:
Select the Baseline tag
Select the Baseline true and predicted labels
Select the model type and date feature name
Select the feature to use from the dropdown
Dashboard Logs
Any new dashboard created for Bias monitoring will be listed in the Dashboard Logs, where you can view details such as the baseline and associated tags, the creation date and name of the dashboard, the owner, etc. In the Actions column, you have options to expand or collapse the dashboard to view or hide detailed information, configure alerts based on the specific dashboard configuration, or delete the dashboard from the logs.
Note: The detailed dashboard in the Actions column will only be visible if the status shows as ‘Completed’. If the status shows ‘Failed’, the reason will be specified in the ‘Error’ column.
For all the logs listed here, users can configure automatic alerts based on the dashboard log. This option is available in the 'Alerts' column.
Through SDK
Monitor bias in your models through the AryaXAI Python package:
Target drift refers to changes in the distribution of the target variable over time, which can affect the performance of machine learning models.
Basic concepts:
Baseline: Users can define the baseline based on a ‘Tag’ or a segment of data based on ‘date’.
Frequency: Users can define how frequently they want to calculate the monitoring metrics
Alerts frequency: Users can configure how frequently they want to be notified about the alerts
Through GUI
Target Drift dashboard creation shares similar inputs with 'Data Drift'. However, unlike 'Data Drift', in addition to the Baseline and Current data parameters you must also specify the True label, the Predicted label, and the model type. You also need to select the compute option based on the specific server requirements for the target drift monitor.
Drift Metrics: The following statistical tests are available to analyze drift: the Chi-square test, Jensen-Shannon distance, Kolmogorov-Smirnov (K-S) test, Population Stability Index (PSI), and Z-test.
You can learn more about these tests in our wiki section.
Selecting dates: If you select dates, the entire data under that tag will be used for calculating the drift. When selecting the date variable, ensure that there is data within these dates.
Mixing multiple tags: If you want to merge data from different tags, you can simply select multiple tags in the segment (baseline/current).
Alerts and Monitors
From here you can easily create and view customized alerts for Data drift, target drift and model performance through the alerts dashboard. For this, select ‘Create alerts' in the 'Monitors' tab and define the baseline and current data parameters like we did above and set the frequency of alerts, which can be daily, weekly, monthly, quarterly or yearly.
To create new alerts, go to:
ML Monitoring (Main menu on left) > select ‘Monitoring’ (from the sub-tabs) > click ‘Create Alerts’
All the newly created and existing alerts are displayed on this dashboard, along with details of trigger creator, name, type and options.
Target Drift Monitor
Within AryaXAI, you can easily track target drifts and receive notifications upon detecting any identified drift in your target data. To create a target drift monitor:
Navigate to the 'Monitors’ tab in ML Monitoring and select 'Create Monitor’.
Assign a name for the monitor and choose 'Target drift' as the monitor type. Select the email list to which you want the alerts to be sent.
You can select the compute option based on the specific server requirements for the target drift task
Choose the statistical test and set thresholds
From the dropdown, select the baseline and current true label. Utilize tags to define the baseline and current data. 'Current' typically represents production data for tracking drift in your production environment.
Utilize the date features to further segment your baseline and set the monitoring frequency.
In the event of identified drift, alerts are generated and delivered through both the web application and email based on the specified frequency.
Dashboard Logs
Any new dashboard created for target drift analysis will be listed in the Dashboard Logs, where you can view details such as the baseline and associated tags, the creation date and name of the dashboard, the owner, baseline and true label, model type and the statistical test used for detecting target drift.
In the Actions column, you have options to expand or collapse the dashboard to view or hide detailed information, configure alerts based on the specific dashboard configuration, or delete the dashboard from the logs.
Note: The detailed dashboard in the Actions column will only be visible if the status shows as ‘Completed’. If the status shows ‘Failed’, the reason will be specified in the ‘Error’ column.
For all the logs listed here, users can configure automatic alerts based on the dashboard log. This option is available in the 'Alerts' column.
Alerts
Any triggered alert is promptly displayed as a notification in the top-right corner of the web application interface. All notifications can be accessed and cleared from the dedicated tab.
Navigate to the 'Alert' tab adjacent to the 'Monitoring' sub-tab to access a list of triggered alerts. Clicking 'View trigger info' provides detailed insights into the trigger, including current data size, triggered data drift, drift percentage, and more.
Notifications
If there is an identified drift, you'll get the alert for the same in both the web app and email at the specified frequency.
Web app alerts: Any alert triggered will be displayed as a notification on the top right corner. You can view all notifications from the tab and clear them.
Email Alerts: The admin of the workspace will get an email if there is an identified drift.
Through SDK
To fetch the default target drift dashboard, use the following command:
project.get_target_drift_dashboard()
If you need to create a new dashboard:
project.get_target_drift_dashboard(payload)
You can use the help function to get all parameters and payloads:
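help(project.get_target_drift_dashboard)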
Data drift occurs when the characteristics of the data encountered by a model in production deviate significantly from those of the dataset on which the model was trained.
Basic concepts:
- Baseline: Users can define the baseline based on a ‘Tag’ or a segment of data based on ‘date’.
- Frequency: Users can define how frequently they want to calculate the monitoring metrics
- Alerts frequency: Users can configure how frequently they want to be notified about the alerts
Through GUI
To set up a data drift dashboard in AryaXAI, define the baseline and current tags, date feature name, baseline and current dates, the feature to be analyzed, statistical method for drift calculation, and customize thresholds if necessary. You can select the compute option based on the specific server requirements for the data drift task.
Your dashboard will then be generated, and you can create or modify it as many times as needed.
Drift Metrics:
AryaXAI offers various statistical tests to analyze data drift, including the Chi-square test, Jensen-Shannon distance, Kolmogorov-Smirnov (K-S) test, Population Stability Index (PSI), and Z-test. You can learn more about these tests in our wiki section.
Selecting Dates: When choosing dates, the entire dataset under the specified tag will be utilized for drift calculation. Ensure that relevant data is available within the selected date range.
Mixing Multiple Tags: To merge data from different tags, you can easily select multiple tags within the segment (baseline/current).
Tip: To focus on drift analysis for a specific feature, simply select that feature under 'Features to select' and proceed to calculate the drift. This allows for targeted analysis and insights into individual feature behavior over time.
Data Drift Monitors:
Within AryaXAI, you have the capability to track data drifts and receive notifications upon detecting any identified drift in your data. To create a Data Drift monitor:
Navigate to the 'Monitors’ tab in ML Monitoring and select 'Create Monitor’.
Assign a name for the monitor and choose 'Data drift' as the monitor type. Select the email list to which you want the alerts to be sent.
Select the compute option based on the specific server requirements for the data drift monitor
Choose the statistical test and set thresholds for data drift, dataset drift, and data drift feature percentage.
Utilize tags to define the baseline and current data. 'Current' typically represents production data for tracking drift in your production environment.
Select features to monitor. You can either create one monitor to track all features in your data or specify individual features for drift tracking.
Utilize the date features to further segment your baseline and set the monitoring frequency.
You can also specify the frequency for calculating drift: hourly, daily, weekly, monthly, or even quarterly.
Dashboard Logs
Any new dashboard created for data drift analysis will be listed in the Dashboard Logs, where you can view details such as the baseline and associated tags, the creation date and name of the dashboard, the owner, and the statistical test used for detecting data drift. In the Actions column, you have options to expand or collapse the dashboard to view or hide detailed information, configure alerts based on the specific dashboard configuration, or delete the dashboard from the logs.
Note: The detailed dashboard in the Actions column will only be visible if the status shows as ‘Completed’. If the status shows ‘Failed’, the reason will be specified in the ‘Error’ column.
For all the logs listed here, users can configure automatic alerts based on the dashboard log. This option is available in the 'Alerts' column.
Alerts
In the event of identified drift, alerts are generated and delivered through both the web application and email based on the specified frequency.
Any triggered alert is promptly displayed as a notification in the top-right corner of the web application interface. All notifications can be accessed and cleared from the dedicated tab.
Navigate to the 'Alert' tab adjacent to the 'Monitoring' sub-tab to access a list of triggered alerts. Clicking 'View trigger info' provides detailed insights into the trigger, including current data size, triggered data drift, drift percentage, and more.
Notifications
If there is an identified drift, you'll get the alert for the same in both the web app and email at the specified frequency.
Web app alerts: Any alert triggered will be displayed as a notification on the top right corner. You can view all notifications from the tab and clear them.
Email Alerts: The admin of the workspace will get an email if there is an identified drift.
Through SDK
You can also set up Data drift monitoring and diagnosis through the AryaXAI Python SDK. To fetch the default dashboard, you don't need to pass any payload, but to create a new one, you need to pass the following parameters:
In Machine Learning, it's not enough to test our models thoroughly after training them. When these models start working in real-world situations, they deal with new data that they weren't trained on, and their performance inevitably deteriorates over time.
The significance of ML monitoring lies in ensuring the accuracy and consistency necessary for successful machine learning implementation. Model monitoring serves to identify issues such as data drift, negative feedback loops, and model inaccuracy, among others.
Monitoring is a way to track the performance of the model in production. This makes each version of your machine learning model more precise than the previous version, thus delivering the best results.
With AryaXAI, you can continuously evaluate your models to maintain their accuracy and prevent errors in data processing.
For model creation and management, you have two options in AryaXAI:
1. Build models through AutoML feature in AryaXAI, or
2. Upload your own models
Uploading custom models
This can be done through: Projects > Data settings > Model upload.
When uploading their own models, users need to:
Define the model name: Provide a name for your model.
Specify Model Architecture: Indicate whether the model is based on machine learning or deep learning (deep learning support is coming soon).
Specify Training and Testing Data: Provide the datasets to be used for training and testing the model.
Note: Ensure that all features used in your model are present in the data you have uploaded, i.e., the data points in the model and the uploaded file need to be the same.
Note: If the training and testing data are not provided, AryaXAI will automatically select a random sub-sample from the training data to use as testing data. This allows AryaXAI to benchmark your uploaded model against a subset of the training data.
Build models
For detailed instructions, refer to the AutoML section in the documentation.
Upon accessing the project dashboard, your initial task is to upload pertinent data sets. These may encompass data utilized for training, testing, validation, production, or any other data integral to your project's scope and requirements.
Note: During the initial data upload process, whether through the dashboard interface or API integration, it is imperative to begin by uploading at least one sample dataset from the dashboard. Subsequently, users can proceed to define the requisite data settings. Additional datasets may also be uploaded seamlessly via the API following this initial setup.
To initiate the data upload process, begin by selecting the upload type from the dropdown menu, which offers the options 'Data', 'Data Description', and 'Feature mapping'. In Data Description, users can add descriptions for the data columns.
For data uploads, you need to specify the 'Upload Tag' from the dropdown, where you can specify the data type (Training, Testing, or Validation) or add a custom tag.
Note: You can only upload one file at a time, and the file can only be in CSV format.
Users have the flexibility to upload files either by dragging and dropping them or by selecting the CSV file for uploading directly. After adding the file, proceed by selecting the 'Upload File' option to initiate the upload process.
Note: CSV files are limited to 200 MB on the default server and 1 GB on a custom server for the workspace/project.
Tip: When uploading data, if you receive an error message stating that the file already exists, navigate to the 'File Info' section and delete the existing file if its processing has not completed.
Once the upload is complete, you will be directed to ‘Data Config’ to configure the details.
Data Config.
Data Configuration serves as the foundational framework for the project, capturing the high-level details required for all subsequent operations, and cannot be changed once set.
Begin by specifying the project type, which may involve either classification or regression tasks
Define the ‘Unique identifier’ - Assign a unique identifier to each data point within the dataset. This identifier distinguishes individual data entries and aids in data management and analysis.
Select the true label - Identify the true label, which represents the target variable to be predicted. For instance, in a real estate dataset, the true label could denote the 'Sale Price' of a property.
If applicable, choose the predicted label from the provided dropdown menu. This step is important when evaluating predictions generated by an existing model.
Feature Exclusion: Select the features (data points) to be excluded. There might be multiple features within your project; you can exclude those that are not relevant from the ‘Features exclude’ option. All included and excluded features are shown on the right.
Note: Your data can have some duplicate unique identifiers, which can be dropped by selecting the checkbox.
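For reference, the same Data Configuration can later be expressed programmatically through the SDK. Below is a minimal sketch of the equivalent config dictionary, using illustrative values from the house-price example shown in the SDK section of this documentation.
# Data Configuration expressed as the SDK config dictionary (illustrative values)
config = {
    "project_type": "classification",   # classification or regression
    "unique_identifier": "Id",          # column that uniquely identifies each data point
    "true_label": "SaleCondition",      # target variable to be predicted
    "pred_label": "",                   # predicted label, if your data contains model predictions
    "feature_exclude": [],              # features to exclude from the explainability model
}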
True and Predicted label
The predicted label is essential if you intend for the XAI model to explain the predictions generated by your model. If the predicted label is not explicitly defined, AryaXAI will automatically select the true label to construct the XAI model.
Note: When defining data features, specifically the data settings, it should be noted that these settings serve as the foundation for training the explainable model. The feature selection conducted during this stage should closely align with the final set of features utilized in your model. This alignment ensures consistency and accuracy in the interpretability analysis provided by AryaXAI.
The ‘Features’ section displays the data type. The platform starts analyzing the data and creates an explainability model for you.
Note: Until the XAI model is trained, the explanations (Feature importance) will show ‘nan’ values. You can still upload new files or open the case view pages; once the model is trained, the XAI model results become visible on all these pages.
Once you submit the Data Config, you will be directed to the 'Project Summary' page, which is your project homepage. This page displays 3 tabs - Summary, Data Diagnostics and Model Diagnostics.
Summary
The Summary tab displays:
Total data volume and Unique features
Overview of data uploaded, which you can filter based on tag
Volume graph, offering a comprehensive overview of data upload activity over time. Users can delve into various parameters to plot data activity conveniently, including user tag, feature name, date feature name, range, and plot type.
Model Info
Data diagnostics
Once you upload data, AryaXAI automatically performs a comprehensive analysis of the added datasets for your initial file. This analysis includes data profiling, data modeling, and explainability. You can easily perform these tasks manually at a later stage if needed.
Selecting the 'Refresh Data' tab located at the top right corner of the interface invokes the data profiling task. This will display a dropdown menu where you can select the desired compute server. After choosing the appropriate compute server, click to refresh your data.
Within the Data Diagnostics tab, the Data Summary table offers an overview of the total data volume and unique features.
The Data Warning section highlights any inconsistencies detected in the uploaded analytical data. These warnings encompass various issues, including missing data, high feature correlation, high cardinality, and more.
Model Performance
This section displays a benchmarking for the different models you have created.
The ‘Model stability’ table in the Model Performance tab lists the same model details as the AutoML section and shows the performance of all models that are currently in production or staged for production.
In the 'Data Stability' section, users can assess data drift between two models for a comprehensive overview. After uploading your initial training data, if you upload a second file containing test data, a data drift report will be automatically generated. This report provides insights into the differences and stability between the training and test datasets.
By selecting the Baseline and Current tags, users can conduct a detailed comparison, which includes features, detected drift, method, feature type, drift score, and more.
Data Settings
In the data settings section, there are four tabs - Upload data, Model upload, File info, and Data settings. Here, users can upload, delete and manage the uploaded data files efficiently.
Data upload
Users can upload data from this section as well. When defining data features, specifically the data settings, it should be noted that these become the base for explainable model training. Therefore, the feature selection process should align with the final features employed in the model.
Model upload
When uploading their own models, users need to:
Define the model name: Provide a name for your model.
Specify Model Architecture: Indicate whether the model is based on machine learning or deep learning (deep learning support is coming soon).
Specify Training and Testing Data: Provide the datasets to be used for training and testing the model.
Note: If the training and testing data are not provided, AryaXAI will automatically select a random sub-sample from the training data to use as testing data. This allows AryaXAI to benchmark your uploaded model against a subset of the training data.
Note: Ensure that all features used in your model are present in the data you upload, i.e., the data points in the model and in the uploaded file need to be the same.
For details on Uploading and creating models, refer to Modeling section.
File Info
The File Info tab presents a comprehensive list of uploaded files, including additional upload details such as the user responsible for the upload, data file type, tag, creation date and time. Users can also delete uploaded files directly from this tab.
Data settings
The Data settings tab showcases the data configuration details set while uploading the data. Here, users can see details like base model being used, features used and excluded in the project, etc.
To modify the data settings, select the ‘Update config’ option in ‘Data settings’. Whenever the settings are modified, the explainability model is retrained.
Data addition from API
First, get the API token for the project. It is accessible at Workspace > Projects > Documentation.
The project token (and Client Id) is accessible only to the Admins of the particular project. You can refresh your API token through the ‘Refresh token’ option provided beside the Client Id.
Below this, the project URL for uploading the data is displayed, along with a Python script that can be used directly in your environment.
The header XAI token needs to be defined, whereas the Client Id and project name are automatically defined.
Next, prepare the data in dictionary format (multiple data points can be uploaded as a list of dictionaries).
Define the unique identifier for the data under the "unique_identifier" key.
For a single data point, the unique identifier can be passed as a string; if multiple data points are uploaded, you need to pass a list of unique identifiers.
Similarly, a single data point (with one unique identifier and 3-4 columns) can be passed directly through the API, whereas for multiple data points a list of unique identifiers and columns needs to be created.
For every POST request, success responses and acknowledgements are returned, so you are updated on the status.
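A minimal sketch of such a request is shown below. The exact project URL, header name and payload schema should be copied from the Python script on your project's Documentation page; the names used here are placeholders, not the actual API schema.
# Illustrative sketch of the data-upload POST request (placeholder URL, header and fields)
import requests

project_url = "https://<project-upload-url>"        # copy the real URL from the Documentation page
headers = {"x-xai-token": "<your XAI token>"}       # header name is an assumption

payload = {
    "unique_identifier": ["1461", "1462"],          # a string for one data point, a list for many
    "data": [                                       # list of dictionaries, one per data point
        {"Id": "1461", "LotArea": 11622, "SaleCondition": "Normal"},
        {"Id": "1462", "LotArea": 14267, "SaleCondition": "Normal"},
    ],
}

response = requests.post(project_url, headers=headers, json=payload)
print(response.status_code, response.json())        # an acknowledgement is returned for every request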
Data addition from SDK
To upload data, we need to pass the file path and Tag.
Note: If you are uploading data for the first time, you need to pass Config as well.
Data can be uploaded to the project either directly with a file or by passing the Pandas DataFrame.
To configure the details in ‘Project config’ and upload data through our SDK, you can use the following commands:
config = {
"project_type": "classification", # The Prediction Type of your project (classification / regression)
"unique_identifier": "Id", # unique identifier for your project
"true_label": "SaleCondition", # Target label
"pred_label": "", # Predicted value in case you have it
"feature_exclude": [], # feature you don't want Arya Xai surrogate model to use for modelling
}
Tag = 'Training' # Data is differentiated using Tag
# To upload the data into the project. This will also build the initial ML model.
project.upload_data('file_path', Tag, config)
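As noted above, a Pandas DataFrame can be passed in place of the file path. A minimal sketch, assuming upload_data accepts the DataFrame directly:
# Uploading a Pandas DataFrame instead of a file path (signature assumed from the note above)
import pandas as pd

df = pd.read_csv('train.csv')            # any DataFrame already in memory
project.upload_data(df, Tag, config)     # config is only required for the first upload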
Once the data is uploaded, you can also view the files, and file info through SDK.
#Check the files that are uploaded in the project.
project.files()
Additional functions:
#You can get the summary for the specific file: Missing values, Max/Min, Data type.
project.file_summary()
#To know all the settings: Data, Data Encoding & Model params
project.config()
project.all_tags()
Additionally, you can also delete the uploaded file:
#project.delete_file('file_name')
To fetch all tags that the user has uploaded:
project.tags()
SDK operations: Data Summary and Diagnostics
Accessing the Data Summary table:
To view the summary of data features and the data types through SDK:
# data summary
project.data_observations('Training') # Can pass any Tag for getting Summary
# data diagnosis
project.data_warnings('Training')
A default data drift report, generated after uploading the train and test data, can also be fetched through the SDK.
Different projects require diverse computational requirements. AryaXAI enables users to tailor server resources to match specific project demands. This customization enhances performance, scalability, and resource utilization.
AryaXAI offers both Serverless ML and custom server options based on your subscription plan.
Custom server options include small, medium, and large servers, enabling users to choose the server size that best fits their project requirements. This allows for performance optimization by selecting appropriate server sizes for different tasks. Additionally, users can scale up or down as project or task needs change, ensuring optimal resource utilization.
These features allow AryaXAI users to efficiently manage computational resources, ensuring that projects run smoothly and effectively, without unnecessary delays or resource wastage.
Once configured, the custom server becomes the default for all subsequent operations. Servers can be managed by workspace Admins and Managers via the 'workspace settings' option.
Automating Server Boot and Shutdown
When using a custom server, users have the option to automate the server boot and shutdown at both the workspace and project levels. To enable this feature, select the settings icon for the desired project or workspace. Then, navigate to the 'Workspace server settings' section and set the server start and stop times.
Additionally, users can choose to auto-shut down the server in case of inactivity. Specify the number of hours of inactivity after which the server will automatically shut down.
These functionalities are also available in the AryaXAI SDK; for instance, custom servers can be started and stopped programmatically, as shown below.
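A minimal sketch using the start/stop calls listed in the Workspace and Project SDK sections of this documentation:
# Starting and stopping custom servers from the SDK
workspace.start_server()   # start the custom server of a workspace
workspace.stop_server()    # stop the custom server of a workspace
project.start_server()     # start the custom server of a project
project.stop_server()      # stop the custom server of a project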
When a user joins AryaXAI, they start with their personal organization by default.
In this personal organization, users cannot add new users to the organization itself. However, they can invite others to collaborate within a specific workspace or project.
In a shared organization, you have the ability to add other users and share workspaces or projects.
However, to add a user to a project or workspace, they must first have access to the organization containing that workspace or project.
You can create up to five organizations in AryaXAI, each capable of hosting multiple workspaces. Overall usage is tracked at the organization level based on your subscription plan.
To create a new organization or switch between existing ones, use the 'Switch organization' option in the dropdown menu of your profile in the top right corner.
Manage and invite Team Members in organization
To add team members to your shared organization, follow these steps:
Select the ‘Organization settings’ option from the main menu on the left
Navigate to the ‘Team’ tab
Enter the email address of the team member and select ‘Send Invite’
This page also provides users with a dashboard view of new and existing team members within your organization. From this page, you can configure usage control, manage access to custom and batch servers, hand over admin controls, or also remove users from the organization.
Note: There can be only one owner for an Organization.
Note: Organization access overrides the workspace and project access. For instance, if a user is granted 'Admin' access at the organization level and 'Member' access at the workspace level, the 'Admin' access prevails over the 'Member' access.
Workspace
On accessing the platform, you can either set up a new workspace or access an existing workspace.
Creating a workspace
To create a new workspace, navigate to the ‘Create workspace’ option on the dashboard. Provide a name for the workspace and select the machine you want to run your remote environment on. Once submitted, the newly created workspace will be displayed on the dashboard, featuring details such as the workspace owner, creation date and time, and the people who have access to the workspace.
The three dots on the top right corner of the specific workspace provide additional options to invite and manage team members, modify workspace settings, and delete the workspace.
Workspace Settings
The workspace settings section provides the ‘Profile settings’ tab, where you can rename or delete the workspace, and the ‘Workspace Server settings’ tab, where you can change the machine you are running your remote environment on.
Members tab displays the list of all the members within the particular workspace. This list shows all the users you have invited to the workspace and the status of their invitations.
You can also invite/add users to the workspace through the ‘Invite Member’ option. Once added to a workspace, the user will have access to all projects within the workspace. You can, however, control the level of access they have by defining their role for the workspace. The role can be Owner, User or Manager; each role has specific access criteria as defined below. You can also modify the user role or revoke access through the workspace settings.
User role wise accessibility criteria:
The ‘Usage’ tab within workspace settings provides a comprehensive overview of the workspace usage capacity, with details such as number of projects, users, data points, observations etc.
Accessing an existing workspace
To access an already created workspace, the workspace owner has to send an invite through the ‘Add User’ option. The invitation to the workspace is sent to the invitee's email address.
Project
Upon accessing a workspace, a list of all the projects within the workspace will be displayed. A project represents the ML use case you want to solve, e.g., loan status prediction or house sale prediction.
Creating a project
To create a new project, select the ‘Create new project’ option on the top right corner within the workspace. Define a name for the project and select the machine you want to run your remote environment on. The project details will be visible on the dashboard with details of the project owner, creation date and time.
Note: If you want a user to have access to a particular project and not the entire workspace, you can select the ‘Add user’ option under ‘Actions’.
You can define the roles of new users as an Owner/Admin, User or Manager. You can also modify the user role or revoke access through the workspace settings.
User role wise accessibility criteria:
Note: Workspace access overrides the project access. For instance, if a user is granted 'Owner' access at the Workspace level and 'User' access at the project level, the 'Owner' access prevails over the 'User' access. Similarly, Organization access overrides both workspace and project access; the user must, however, be a part of the organization.
SDK operations
Organization
To create an organization
# aryaxai.create_organization("sdktest")
List all organizations
aryaxai.organizations()
Get details of organization members
organization.member_details()
Add or remove user from organization
# Add user to organization
organization.add_user_to_organization('email')
# Remove user from organization
organization.remove_user_from_organization()
Access personal organization
# Personal Space
organization = aryaxai.organization("personal")
Workspace
The XAI object instance has all the necessary methods to interact with your workspaces. You can use the following functions to create a new workspace, list all existing workspaces and select a particular workspace.
# create a new workspace
workspace = aryaxai.create_workspace('workspace name')
# list all workspaces available
aryaxai.workspaces()
# select an existing workspace by name
workspace = aryaxai.workspace('workspace name')
Create a new workspace with custom server configurations
# create a new workspace with custom server configurations
# for available custom server configurations, check aryaxai.available_custom_servers()
# workspace = aryaxai.create_workspace('workspace name', "t3.medium")
The AryaXAI Python SDK also provides users with functionalities to edit, manage or delete workspaces:
# rename workspace name
workspace.rename_workspace('new_workspace_name')
# Deleting Workspace
#workspace.delete_workspace()
# add user to workspace
workspace.add_user_to_workspace('email', 'role')
# remove user from workspace
workspace.remove_user_from_workspace('email')
# update user access for workspace
workspace.update_user_access_for_workspace('email','role')
Start, stop or update custom server of workspace
# start custom server of workspace
# workspace.start_server()
# stop custom server of workspace
# workspace.stop_server()
# update custom server of workspace
# workspace.update_server('t3.medium')
Project
With the AryaXAI Python SDK, users can efficiently handle project management tasks and carry out essential functions within those projects.
# create a project in the workspace
project = workspace.create_project('project_name')
# list of projects
projects = workspace.projects()
# select a project name
project = workspace.project('project_name')
# delete project
project.delete_project()
# rename project
project.rename_project('new_project_name')
Create project with custom server
workspace.create_project('Loan Status Prediction', 't3.medium')
Start, stop or update custom server of project
# start custom server of project
# project.start_server()
# stop custom server of project
# project.stop_server()
# update custom server of project
# project.update_server('t3.medium')
Invite, remove and manage user access for projects
# add user to project
project.add_user_to_project('email', 'role')
# remove user from project
project.remove_user_from_project('email')
# update user access for project
project.update_user_access_for_project('email','role')
When you log in to AryaXAI, the homepage displays all the workspaces you created or are a part of. Workspaces are the parent folders in which you have project(s).
Within AryaXAI, your main navigation tool is located on the left-hand side of the interface. This menu will guide you through your entire product journey.
The first three icons on this menu are:
- Home: Quick access to your personalized dashboard
- SDK Access Token: Generate and manage access tokens required for integrating AryaXAI SDKs into your applications.
- Account Settings: Customize settings, email preferences, and web appearance.
The last two icons link to our help section, where you can access comprehensive support resources, and to the log out option, which lets you securely sign out of your AryaXAI account when your session is complete.
At the top right corner of the AryaXAI interface, you'll find essential options for managing your experience:
- Appearance: Customize the appearance of the web interface with options for light and dark themes.
- Documentation: Easily access detailed documentation to assist you in integrating AryaXAI's AI capabilities into your applications effectively.
- Notifications: Stay informed with real-time updates and alerts about your projects and tasks within AryaXAI.
- Profile Dropdown: Access your profile settings and account management options, including personal information, profile settings, organization settings and logout functionality.
Profile Settings
The Profile Settings page allows users to manage and customize their AryaXAI account. Users can update profile details, such as modifying personal information and securely changing their password. Additionally, users can customize the web appearance of the AryaXAI interface to match their preferences.
Usage
The Usage tab in Profile settings enables users to track usage statistics at both the workspace and project levels, helping them monitor resource utilization and performance effectively.
The Key Metrics section here provides an overview of essential statistics related to your AryaXAI usage, including the number of workspaces, projects, data points, storage used, dashboards, and AutoML predictions.
Workspace Plan Usage
The Workspace Plan Usage section provides details on workspace-level usage according to your subscribed plan.
Compute Usage
The Compute Usage section tracks the credits used on a workspace and project level. You can also track the usage for a specific period of time and download the usage data. This allows you to monitor your computational resource consumption and manage your credits efficiently.
Set up your AryaXAI account in just a few easy steps!
To get started, you'll need an invitation to sign up. If you haven't received one yet, please reach out to our team with your request.
Once these steps are completed, an account verification confirmation is sent to your inbox, through which your organization can be accessed.
SDK: Signing up for AryaXAI through SDK
With our SDK, you can perform nearly every action available in the AryaXAI GUI.
Prerequisite:
Sign up and log in to AryaXAI using the steps mentioned above.
After logging in, generate an Access Token for your user account. You can find this option in the profile dropdown menu at the top right corner of AryaXAI dashboard.
Set the environment variable ‘XAI_ACCESS_TOKEN’ to the generated value.
You can create multiple tokens from here.
Note: The SDK Access Token is unique and is linked to your User ID, not the Organization ID.
Once you've completed these steps, you're all set! Now, you can easily log in and start using the AryaXAI SDK:
Log in by importing the "xai" object instance from the "aryaxai" package.
Call the "login" method. This method automatically takes the access token value from the "XAI_ACCESS_TOKEN" environment variable and stores the JWT in the object instance. This means that all your future SDK operations will be authorized automatically, making it simple and hassle-free!
NOTE: The creation of tokens is limited to a maximum of five per user. Furthermore, each token can only be used in a single instance at any given time.
AryaXAI SDK installation and usage
#!pip3 install aryaxai --upgrade --no-cache
!pip3 show aryaxai
Importing & Authenticating SDK
# required for test environment
import os
os.environ["XAI_ACCESS_TOKEN"]='' # test env access token
# Import arya xai module
from aryaxai import xai as aryaxai
## login() authenticates the user using an access token, which can be generated at app.aryaxai.com/sdk
aryaxai.login()
Enter your Arya XAI Access Token: ··········
Authenticated successfully.
# See Notification you get across all workspaces and projects
aryaxai.get_notifications()
The AryaXAI platform is built to optimize collaboration and management. At the highest level, we have Organizations, which contain Workspaces. Within each workspace, you can create and manage Projects. This hierarchy ensures a streamlined and efficient workflow, allowing you to manage resources and track usage effectively based on your subscription plan.
As more teammates collaborate, it’s important to ensure that everyone has the right access levels and permissions to keep your models running safely. All user access controls are maintained at the Organization, Workspace and Project levels.
You can also share invites with your team members at an organization, workspace or project level.
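This hierarchy maps directly onto the SDK objects used throughout this documentation; a minimal sketch using calls shown in the SDK sections:
# Organization > Workspace > Project, expressed with SDK calls from this documentation
organization = aryaxai.organization("personal")   # select an organization
workspace = aryaxai.workspace('workspace name')   # select a workspace within it
project = workspace.project('project_name')       # select a project within the workspace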
The ML Observability platform for mission-critical AI solutions
Introducing AryaXAI
With AryaXAI, data science and ML teams can monitor their models in production as well as gain reliable and accurate explainability.
AryaXAI offers evidence that can support regulatory diligence, manages AI uncertainty by providing advanced policy controls, and ensures consistency in production by monitoring data or model drift and alerting users with root cause analysis.
AryaXAI also acts as a common workflow and provides insights acceptable by all stakeholders - Data Science, IT, Risk, Operations and compliance teams, making the rollout and maintenance of AI/ML models seamless and clutter-free.
AI and ML technologies have found their way into core processes of industries like financial services, healthcare, education, etc. Even with multiple use cases already in play, the opportunities with AI are unparalleled and its potential is far from exhausted.
However, with the increasing use of AI and ML, AI-driven organizations, ML engineers and decision makers who rely on AI outcomes now face the challenge of explaining and justifying the decisions made by AI models. Regulators have already acted, forming various regulatory compliance and accountability systems, legal frameworks, and requirements for ethics and trustworthiness. Ultimately, an AI model will be deemed trustworthy only if its decisions are explainable, comprehensible and reliable.
Today, multiple methods make it possible to understand these complex systems, but they come with several challenges to be considered.
While ‘intelligence’ is the primary deliverable of AI, ‘explainability’ has become a fundamental need of the product. It serves important purposes like:
Accountability
Trust and transparency
Better Model Pruning
Better AI controls
Arya.ai has built a state-of-the-art framework, ‘AryaXAI’, to offer transparency, control and interpretability for deep learning models. This documentation explores the explainability imperative, the tangible business benefits of XAI, an overview of current methods and their challenges, and details on how the AryaXAI framework works.