Plain AI in plain English

Spotting an AI image

30-second gist

The classic giveaways — six fingers, melting hair, weird text — are mostly gone in 2026. The new tells are subtler, and a few are likely to vanish too. The honest answer: you can't always tell, and you're going to be wrong sometimes.

If the image matters (for a story, a purchase, a decision), don't trust your eyes alone. Reverse-image-search it.
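Under the hood, reverse-image search engines generally match pictures by comparing compact "perceptual fingerprints" rather than raw pixels, so a resized or slightly edited copy still matches the original. As a rough illustration (not any particular engine's actual method), here is a minimal "difference hash" in pure Python; the pixel grids are made up for the demo:

```python
# Toy "difference hash": a 64-bit fingerprint that stays stable under small
# edits (brightness, resizing), similar in spirit to what reverse-image-search
# engines index. The "images" below are just grids of grayscale values.

def dhash(pixels, hash_size=8):
    """Downscale the grid, then record 1 bit per pixel pair: is each pixel
    brighter than its right-hand neighbour?"""
    h, w = len(pixels), len(pixels[0])
    # Nearest-neighbour downscale to hash_size rows x (hash_size + 1) columns.
    small = [
        [pixels[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    return [1 if row[c] > row[c + 1] else 0
            for row in small for c in range(hash_size)]

def hamming(a, b):
    """Count differing bits; a small distance means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))

# Demo: a gradient "photo" and a slightly brightened copy of it.
original = [[(x + y) % 256 for x in range(64)] for y in range(64)]
brighter = [[min(255, v + 10) for v in row] for row in original]
print(hamming(dhash(original), dhash(brighter)))  # → 0: a near-duplicate match
```

Uniform brightening doesn't change which of two neighbouring pixels is brighter, so the fingerprint is identical; a genuinely different picture would disagree on many of the 64 bits.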

If you want more

What still gives an AI image away (today)
  • Background details that fall apart on a close look — a "crowd" that's the same person three times, a sign whose letters dissolve into squiggles.
  • Lighting that doesn't match — the subject is lit from the left, the shadows are below, the reflection in the eyes points somewhere else.
  • Jewellery, watches, glasses, earrings — small repeating shapes still confuse the models.
  • Eerily smooth skin and perfectly even teeth on people who aren't models.
  • Photos with no source — no photographer credit, no agency, no original post. Just "look at this".
Tells that no longer work
  • Counting fingers — most generators have fixed this since mid-2024.
  • Looking for "watermarks" or weird text — text generation is much better.
  • Eyes that look glassy or off — the latest models render eyes more realistically than most cameras.

A real example

In May 2023, an AI-generated image of an "explosion near the Pentagon" went viral on X (Twitter). It was reposted by a verified account, briefly moved the US stock market, and was debunked within an hour — but the algorithms that trade on news headlines had already reacted. The image had visible AI artefacts in the architecture, but few people looked closely enough.