Plain AI in plain English

When AI refuses to answer

30-second gist~30s

Sometimes you ask a perfectly reasonable question and the AI politely refuses: "I can't help with that." "I'm not able to discuss…" This isn't always sensible safety behaviour; sometimes it's an over-trained guardrail misfiring.

Two things help: knowing why it happens, and having a few phrases that reliably get a real answer when the refusal is over-cautious.

If you want more

Why the AI refuses~1 min
  • Genuine policy. The model was trained not to give weapons instructions, child sexual abuse material, suicide methods, etc. These refusals are deliberate and reasonable.
  • Over-cautious training. The model has learnt that some keyword combinations are risky and refuses anything nearby — including innocent questions about chemistry, history, or fiction.
  • Topic policies. Some companies block legal advice, medical advice, or political topics, regardless of context. This is policy, not capability.
  • Adversarial detection. If the AI thinks you're trying to trick it, it shuts down even on benign follow-ups.
What to do when the refusal seems over-cautious~30s
  • Restate the question with explicit context: "I'm a [teacher/parent/nurse/researcher] asking about X for [reason]". (If you reach the AI through code, a sketch of this retry follows the list.)
  • Ask a different way: instead of "how does Y work?", try "explain what a reader needs to understand about Y".
  • Try a different model. Each has different policies; what one refuses, another may answer.
  • If the refusal is genuinely about a sensitive topic — accept it. The guardrail might be the right call.
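For readers who use a model through code rather than a chat window, the restate-with-context move can be scripted. What follows is a minimal sketch, not a real client: ask() is a hypothetical placeholder for whatever API you actually call, and REFUSAL_MARKERS is a guess at refusal phrasing; swap both for your own.

```python
# Sketch: retry a refused question with explicit role and reason.
# Everything here is illustrative; ask() is a stand-in, not a real API.

REFUSAL_MARKERS = ("i can't help with that", "i'm not able to discuss")

def ask(prompt: str) -> str:
    """Hypothetical model call. Replace with your real client library."""
    return "I can't help with that."  # canned reply so the sketch runs on its own

def looks_like_refusal(reply: str) -> bool:
    """Crude check: does the reply open with a known refusal phrase?"""
    return reply.strip().lower().startswith(REFUSAL_MARKERS)

def ask_with_context(question: str, role: str, reason: str) -> str:
    """Ask once; on refusal, restate with who you are and why you're asking."""
    reply = ask(question)
    if looks_like_refusal(reply):
        restated = f"I'm a {role} asking about this for {reason}. {question}"
        reply = ask(restated)
    return reply

if __name__ == "__main__":
    print(ask_with_context(
        "Why shouldn't household bleach and ammonia be mixed?",
        role="teacher",
        reason="a classroom safety lesson",
    ))
```

Real refusals vary in wording, so phrase matching like this is brittle; it is only meant to show the retry pattern, not to reliably detect refusals.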