Large language models (LLMs) like ChatGPT have wowed the world with their capabilities. But they’ve also made headlines for confidently spewing absolute nonsense.

This phenomenon, known as hallucination, ranges from fairly harmless mistakes – like miscounting the number of 'r's in "strawberry" – to completely fabricated legal cases that have landed lawyers in serious trouble.