Hallucination
30-second gist
A hallucination is when an AI says something untrue with full confidence, as if it were a fact. It's not lying — there's no intent. The AI is built to predict the next plausible word, not to know what's true.
The danger is the tone. The wrong answer arrives sounding exactly like a right one. Cross-checking is the only reliable defence.
If you want more
Why does AI do this?
Imagine someone who has read most of the internet but has no way to look anything up. Ask them "what year did Lincoln die?" — they remember and tell you. Ask them "what's the email address of the chief of police in Christchurch?" — they don't know, but they've seen so many email addresses that the shape is familiar. So they invent one. It looks plausible. It is not real.
That's roughly what every chatbot is doing. There is no internal "is this true?" check. The output is generated word-by-word, each word picked because it's likely to follow the previous ones. Truth isn't in the loop.
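The word-by-word loop can be sketched with a toy model. This is a deliberately tiny, hypothetical example (a made-up ten-word "training corpus", nothing like a real chatbot's billions of parameters), but the principle is the same: each next word is chosen because it is *likely* to follow, not because it is *true*.

```python
import random

# Hypothetical miniature "training data" -- a real model sees most of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, which words have followed it?
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=5, seed=0):
    """Extend `start` one word at a time, always picking a likely next word.
    There is no truth check anywhere in this loop -- only likelihood."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausible, not verified
    return " ".join(words)

print(generate("the"))
```

Everything the toy model emits is fluent and corpus-shaped, yet it "knows" nothing; scaled up enormously, that is why a chatbot can produce a perfectly formatted email address or court citation that does not exist.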
Real examples
New York lawyer, 2023 · 6 fake citations
A lawyer used ChatGPT to research a brief. It cited six prior cases that perfectly supported his argument. Every one was fabricated: judges, courts, quotations, all invented. He filed the brief without checking. The judge caught the fabrications, sanctioned him, and the case became a textbook warning.
Air Canada chatbot, 2024 · invented refund policy
Air Canada's website chatbot told a grieving customer he could buy a full-fare ticket and apply for a bereavement-rate refund within 90 days. The policy didn't exist — the chatbot had invented it. When Air Canada refused the refund, a Canadian tribunal ordered the airline to honour what the chatbot had said. The ruling made it clear: a company is responsible for what its AI tells customers, even when the AI is wrong.
When does it happen most?
- Recent events the AI wasn't trained on.
- Very specific facts: names, dates, citations, dosages, prices.
- Anything where you've pushed it to give a definite answer.
How do I check an AI answer?
- Click through any citation. If the link is dead, or the case or paper can't be found anywhere else, the citation is probably invented.
- Cross-check anything that matters: medicine, law, money, dates.
- Ask "are you sure?" — sometimes (not always) it walks the answer back.
- Treat AI like a friend who's bluffing well — useful, not authoritative.
- Don't quote it to a colleague as if it were a source.