Trust & truth · the uncertainty prompt
Getting AI to admit its limits
30-second gist
By default, most AI is built to sound confident. A few specific prompts can shift it toward admitting what it doesn't know. None of these are magic: they tilt the conversation; they don't solve hallucinations.
Worth keeping a couple as habits. They cost nothing and they catch a meaningful share of confidently-wrong answers.
Three prompts that help
- "Rate your confidence in this answer from 1 to 10, and explain why." Surprisingly, this works. The AI often picks 5 or 6 on questions it would otherwise have answered with full confidence.
- "What would change if I'm wrong about my premise?" Forces the AI to consider the framing, not just the question. Catches a lot of sycophancy.
- "Before you answer, list three things you don't know that would change the answer." Pushes the AI toward listing its assumptions before committing.
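If you talk to a model through code rather than a chat window, these habits can be baked into a small wrapper that appends one of the prompts above to every question. A minimal sketch; the function name, dictionary keys, and structure here are made up for illustration, not any real library's API:

```python
# Sketch: appending one of the three uncertainty prompts to a question
# before it is sent to a model. Illustrative only; the names below are
# not from any real SDK.

UNCERTAINTY_PROMPTS = {
    "confidence": "Rate your confidence in this answer from 1 to 10, and explain why.",
    "premise": "What would change if I'm wrong about my premise?",
    "unknowns": "Before you answer, list three things you don't know that would change the answer.",
}

def wrap_question(question: str, mode: str = "confidence") -> str:
    """Return the question with the chosen uncertainty prompt appended."""
    return f"{question}\n\n{UNCERTAINTY_PROMPTS[mode]}"

print(wrap_question("Is this mushroom safe to eat?", mode="unknowns"))
```

The point of the wrapper is simply that the habit stops depending on you remembering to type the prompt each time.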
The limits of these prompts
None of these turn an unreliable answer into a reliable one. They expose uncertainty that was already there. The AI can still be confidently wrong on the second try. And on questions it has been heavily trained to answer (medical, legal, political), it may refuse to express doubt at all.
Treat the answer-after-doubt as more trustworthy than the first answer, not as fully trustworthy.