Synthetic & Generative AI

Hallucination

Model output that is not grounded in the input data but is instead generated imaginatively or erroneously

A 'hallucination' is a model output that is not grounded in the input data but is instead generated imaginatively or erroneously. The phenomenon involves the model perceiving patterns or objects that are not present or suggested in the input data, leading to nonsensical or inaccurate outputs: the model 'hallucinates' the response.

Hallucinations can occur due to various factors, such as inaccurate or biased training data, overfitting, highly complex tasks, ambiguous or unclear input, or high model complexity. To reduce the risk of hallucinations, users can combine diverse, representative training data, careful model architecture design, effective regularization techniques, and ongoing evaluation and fine-tuning.
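
To make the "ongoing evaluation" point concrete, the sketch below shows one crude way to flag candidate hallucinations: scoring how well a generated answer overlaps lexically with its source context. The function name `grounding_score`, the 0.7 threshold, and the lexical-overlap heuristic itself are illustrative assumptions rather than an AryaXAI API; production pipelines typically use entailment models or retrieval-based fact checking instead.

```python
import re

def grounding_score(source: str, answer: str) -> float:
    """Fraction of the answer's words that also appear in the source text.

    A crude lexical proxy for 'groundedness'; low scores suggest the answer
    contains content that is not supported by the input.
    """
    tokenize = lambda text: set(re.findall(r"[a-z0-9]+", text.lower()))
    source_words, answer_words = tokenize(source), tokenize(answer)
    if not answer_words:
        return 1.0
    return len(answer_words & source_words) / len(answer_words)

# Flag answers whose overlap with the input falls below a chosen threshold.
source = "The invoice dated 3 March 2023 totals 450 EUR and is payable within 30 days."
answer = "The invoice totals 450 EUR and was already paid in January."
if grounding_score(source, answer) < 0.7:
    print("Possible hallucination: the answer makes claims not supported by the input.")
```

Because this check only measures word overlap, it can miss paraphrased but faithful answers and pass fluent but wrong ones; it is best treated as a cheap first filter ahead of stronger evaluation and fine-tuning.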
