Plain AI in plain English

In the wild · what's new on Google

"AI overview" in search

30-second gist · ~30s read

Most search engines now put a small AI-generated answer at the very top of results. It's quick, often useful, sometimes confidently wrong, and almost never sourced clearly enough to verify in a hurry.

Treat it the same way you'd treat a friend's quick opinion: a fine starting point, not the last word. The real sources are still the blue links underneath.

If you want more

When to trust it, when to scroll past · ~1 min

Probably fine for: definitions, simple how-tos, "what's the capital of...", recipes, broad-strokes context.

Don't rely on it for: medical advice, legal advice, financial decisions, anything time-sensitive or location-specific ("is X open today?", "what's the speed limit on Y road?"), and anything where you need an actual source.

The AI overview is built on top of the same engines that power chatbots — so it inherits their habit of making things up when they're unsure. Hallucinations show up here too.

A real misfire

Within days of Google rolling out AI overviews to US users in May 2024, screenshots went viral of it suggesting people add glue to pizza so the cheese sticks, and eat at least one small rock per day for minerals. The AI had pulled both from satirical Reddit posts. Google has tightened the system since, but the lesson stuck: the overview is a confident first draft, not a final answer.