
Kullback-Leibler (KL) divergence

KL divergence, also known as relative entropy or information divergence (I-divergence), measures how much one probability distribution differs from another.

It is not symmetric in the two distributions: the KL divergence from distribution P to distribution Q is generally not equal to the KL divergence from Q to P.

When comparing two distributions, KL divergence is particularly useful when one of them has a high variance or a limited sample size relative to the other.

For discrete distributions P (the reference) and Q (the comparison) defined over the same set of events x, it is computed as:

KL(P || Q) = -Σ_x P(x) log( Q(x) / P(x) )

which is equivalent to

KL(P || Q) = Σ_x P(x) log( P(x) / Q(x) )

Additionally, KL divergence is asymmetric: in contrast to PSI, the result changes if the reference and production (compared) distributions are switched, so KL(P || Q) != KL(Q || P). This makes it handy for applications utilising Bayes' theorem, or when you have a large number of training (reference) samples but only a limited set of production samples, which increases the variance in the comparison distribution.
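A minimal NumPy sketch of the definition, using two made-up discrete distributions, shows this asymmetry directly: swapping the reference and compared distributions changes the score.

```python
import numpy as np

# Made-up discrete distributions over the same three categories.
p = np.array([0.7, 0.2, 0.1])   # reference (training) distribution P
q = np.array([0.5, 0.3, 0.2])   # production (compared) distribution Q

# KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats.
kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))

print(f"KL(P || Q) = {kl_pq:.4f}")   # ~0.085
print(f"KL(Q || P) = {kl_qp:.4f}")   # ~0.092, a different value
```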

A KL score can range from 0 to infinity, with 0 denoting that the two distributions are identical. The result is expressed in "nats" if the natural logarithm (base e) is used, and in "bits" if the formula is computed with log base 2.
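As an illustration of the units, scipy.stats.entropy returns the KL divergence when both distributions are passed, and its base argument switches between nats and bits; the distributions below are the same made-up example as above.

```python
from scipy.stats import entropy

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

# Natural log (default) gives nats; base=2 gives bits.
kl_nats = entropy(p, q)          # KL(P || Q) in nats
kl_bits = entropy(p, q, base=2)  # KL(P || Q) in bits

print(f"{kl_nats:.4f} nats = {kl_bits:.4f} bits")
```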

KL divergence can be used for both numerical and categorical features. The default KL score threshold for drift in AryaXAI is '0.05'.
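As a sketch of how this is applied to drift monitoring, the snippet below bins a numerical feature from reference and production samples into a shared histogram, computes KL(reference || production), and flags drift at the 0.05 threshold mentioned above. The binning scheme and the small smoothing constant are illustrative assumptions, not a description of AryaXAI's internal implementation.

```python
import numpy as np

def kl_drift_score(reference, production, bins=10, eps=1e-6):
    """Histogram-based KL(reference || production) for a numerical feature.

    Bin edges come from the reference sample, and a small epsilon is added
    to every bin so that empty production bins do not produce infinities
    (both choices are illustrative assumptions, not a fixed standard).
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    p, _ = np.histogram(reference, bins=edges)
    q, _ = np.histogram(production, bins=edges)

    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

# Made-up samples: the production feature is shifted relative to the reference.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.0, size=1_000)

score = kl_drift_score(reference, production)
print(f"KL score: {score:.4f} -> drift: {score > 0.05}")
```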
