Plain AI in plain English

Trust & truth · what's actually happening

Lying vs confabulation

30-second gist · ~30s read

"The AI lied to me." It feels true. It's not quite what's happening. Lying needs intent — a desire to deceive. AIs don't have intent. What they do is confabulate: produce a confident, plausible answer because they've been built to produce confident, plausible answers.

Same outcome (you got told something untrue). Different mechanism. The fix is different too.

If you want more

Why this difference matters · ~1 min

If you assume the AI lied, you assume catching it red-handed will fix things — like with a person. With confabulation, no amount of pressure changes the underlying behaviour. Asking "did you lie to me?" gets you a sincere-sounding apology, then sometimes a fresh confabulation.

The fix isn't moral, it's procedural: don't put AI in positions where confabulation costs you. Get answers it can't make up (math you check, code you run, facts you verify). Avoid asking it questions where you can't tell a good answer from a merely plausible one.
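The "code you run" advice in miniature. A sketch in Python, using a hypothetical `median` function to stand in for whatever an assistant hands you: the checks you run yourself, not the confident explanation alongside the code, are what settle whether it works.

```python
# Suppose an AI assistant gave you this function and a confident
# explanation of why it's correct. (Hypothetical example.)
def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Don't take the explanation at face value: verify with cases
# where you already know the answer.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([7]) == 7
print("checks passed")
```

If the checks fail, you've learned something the AI's confident prose would never have told you.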

The borrowed word · ~30s

"Confabulation" comes from neurology, where it describes patients with certain brain injuries who fluently invent autobiographical details to fill memory gaps — and genuinely believe their own answers. The metaphor's not perfect (the AI doesn't believe anything) but it's closer to the truth than "lying" or even "hallucinating".