Hallucination
Hallucination occurs when an AI model generates plausible-sounding but factually incorrect information. In agent contexts, it is mitigated through retrieval-augmented generation (RAG), which grounds responses in your knowledge base, along with confidence scoring and citation requirements. These safeguards are critical in legal, healthcare, and finance agents, where accuracy is non-negotiable.
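A minimal sketch of two of these mitigations: grounding the model with a retrieval-style prompt, and a post-hoc check that every citation points at a passage that was actually retrieved. All names here (Document, build_grounded_prompt, citations_are_valid) are hypothetical illustrations, not the API of any particular agent framework.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def build_grounded_prompt(question: str, docs: list[Document]) -> str:
    """Assemble a prompt that asks the model to answer only from the
    retrieved passages and to cite the supporting passage id."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the passages below. After each claim, cite "
        "the supporting passage id in brackets. If the passages do not "
        "contain the answer, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def citations_are_valid(answer: str, docs: list[Document]) -> bool:
    """Cheap post-hoc check: reject answers that cite no passage, or
    cite an id that was never retrieved (a common hallucination tell)."""
    known = {d.doc_id for d in docs}
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    return bool(cited) and cited <= known

# Example: an answer citing an unknown source id is rejected.
docs = [Document("kb-17", "Refunds are processed within 14 days.")]
print(citations_are_valid("Refunds take 14 days [kb-17].", docs))    # True
print(citations_are_valid("Refunds take 30 days [policy-9].", docs)) # False
```

A check like this only verifies that cited sources exist, not that they support the claim; production systems typically layer it with confidence scoring or a separate verification pass.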