The tendency of LLMs to generate plausible-sounding but factually incorrect or fabricated content. Hallucination is a primary enterprise risk for generative AI and is mitigated through retrieval-augmented generation (RAG), grounding techniques, output validation, and human-in-the-loop review processes.
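One form of output validation can be sketched as a groundedness check: flag answer sentences whose content words barely overlap the retrieved context. This is a minimal illustration, not a production technique; the helper names, the stopword list, and the 0.5 threshold are all assumptions chosen for the example.

```python
import re

# Small illustrative stopword list (an assumption, not a standard set).
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were",
             "of", "in", "to", "and", "or", "it", "by"}

def content_words(text: str) -> set[str]:
    """Lowercase alphabetic tokens minus common stopwords."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def ungrounded_sentences(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose content-word overlap with the
    retrieved context falls below `threshold` -- candidates for human review."""
    ctx_words = content_words(context)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & ctx_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

context = "The Eiffel Tower is 330 metres tall and located in Paris."
answer = "The Eiffel Tower is 330 metres tall. It was built in 1620 by Roman engineers."
print(ungrounded_sentences(answer, context))
# → ['It was built in 1620 by Roman engineers.']
```

Real systems replace the word-overlap heuristic with entailment models or LLM-based fact checkers, but the control flow (retrieve, generate, verify against sources, escalate to a human) stays the same.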
Book a consultation to discuss how AI concepts apply to your challenges.