
LOCO

The Leave-One-Covariate-Out (LOCO) method measures how much a particular feature contributes to a model's predictive performance. For each feature, the model is refit with that feature removed, and its predictions are compared against those of the full model. Averaging the resulting change in accuracy across the entire data set yields a global variable-importance score for each feature, and the method can also provide confidence intervals around those scores.
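The refit-and-compare loop above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: it uses synthetic data, scikit-learn's `LogisticRegression` as a stand-in for any model, and reports only point estimates of importance (the confidence-interval machinery of the full LOCO method is omitted).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data: feature 0 drives the label, features 1-3 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.2 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def loco_importance(X_train, y_train, X_test, y_test):
    """LOCO importance: drop each feature, refit, and measure the
    drop in held-out accuracy relative to the full model."""
    full = LogisticRegression().fit(X_train, y_train)
    base_acc = accuracy_score(y_test, full.predict(X_test))
    importance = {}
    for j in range(X_train.shape[1]):
        keep = [c for c in range(X_train.shape[1]) if c != j]
        reduced = LogisticRegression().fit(X_train[:, keep], y_train)
        acc = accuracy_score(y_test, reduced.predict(X_test[:, keep]))
        importance[j] = base_acc - acc  # accuracy lost without feature j
    return importance

imp = loco_importance(X_train, y_train, X_test, y_test)
```

On this data, dropping the informative feature 0 costs the model far more accuracy than dropping any of the noise features, which is exactly the signal LOCO uses to rank variables globally.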

