Plain AI in plain English


Watermarking AI content

30-second gist

Big AI companies have promised watermarks — invisible markers in their output that detection tools can spot. The idea: every AI-generated image or piece of text carries a quiet "this came from us" signal.

For images, this works moderately well in the short term. For text, it's much harder — a few light edits or a copy-paste through another tool erases the signal. As of 2026, watermarking is a useful layer, but not a guarantee.

If you want more

Why text watermarking is hard

Text watermarking works by tilting the AI's word choices toward a particular statistical pattern that detectors can spot. The pattern is fragile: paraphrase the text, run it through a different AI, change a few words by hand — the signal degrades fast. Researchers have shown several ways to strip a watermark in seconds.
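To make "tilting word choices" concrete, here is a toy sketch in Python. Nothing in it comes from a real product; the tiny vocabulary, the stand-in "model", and the scoring are invented for illustration, loosely in the spirit of published research schemes. The trick: a hidden rule splits the vocabulary into a "green" half based on the previous word, generation leans toward green words, and a detector counts how often the text lands on them. The last two lines show why light edits weaken the signal.

```python
# Toy sketch of a "green list" text watermark. Not any vendor's real scheme:
# the vocabulary, stand-in "model", and bias value are made up for illustration.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug",
         "quickly", "slowly", "today", "yesterday", "and", "then"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Pseudorandomly split the vocabulary based on the previous token.
    A detector that knows the rule can recompute the same split later."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate_watermarked(length: int = 50, bias: float = 0.9) -> list[str]:
    """Stand-in for a language model: picks words at random, but tilted
    toward the current green list with probability `bias`."""
    rng = random.Random(0)
    tokens = ["the"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        pool = sorted(greens) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list[str]) -> float:
    """Detector: recompute each green list and count how often the text lands on it.
    Unwatermarked text hovers near 0.5; watermarked text sits well above it."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    text = generate_watermarked()
    print(f"green fraction (watermarked): {green_fraction(text):.2f}")
    # Light "paraphrasing": swap out every third word, and watch the score fall.
    swapper = random.Random(1)
    edited = [swapper.choice(VOCAB) if i % 3 == 0 else t for i, t in enumerate(text)]
    print(f"green fraction (after edits):  {green_fraction(edited):.2f}")
```

Each swapped word not only breaks its own statistics but also reshuffles the green list for the word after it, which is why the score drops so quickly under ordinary editing.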

C2PA — the provenance approach

A different approach is C2PA (Coalition for Content Provenance and Authenticity), backed by Adobe, Microsoft, the BBC, Intel, Arm, Truepic, and a wider 6,000-member industry group that includes Sony, Nikon, and others. Instead of hiding a signal inside the image, it attaches signed metadata: "this image was made by tool X on date Y". When the metadata travels with the file, it's verifiable. When the image is screenshotted or re-encoded, the metadata is usually lost. Better than nothing, not bulletproof.
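A rough sketch of what "signed metadata" means in practice, using a shared-key signature instead of the certificates and embedded manifest format real C2PA uses; the tool name, date, and key below are invented for illustration. The point is that the claim is bound to the exact bytes of the file, which is both why it is verifiable and why a screenshot or re-encode loses it.

```python
# Toy sketch of the provenance idea behind C2PA: signed metadata that travels
# with a file and can be checked later. Real C2PA manifests use X.509
# certificates and a binary format; this uses an HMAC and a plain dict.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # made up for the sketch

def make_manifest(image_bytes: bytes, tool: str, date: str) -> dict:
    """Record 'this was made by tool X on date Y' plus a hash of the exact bytes,
    then sign the whole claim."""
    claim = {
        "tool": tool,
        "date": date,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check two things: the signature is genuine, and the bytes still match the hash."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

if __name__ == "__main__":
    original = b"...image bytes..."
    manifest = make_manifest(original, tool="SomeImageGenerator", date="2026-01-15")
    print(verify(original, manifest))   # True: the claim matches these exact bytes
    reencoded = b"...same picture, different bytes after a screenshot..."
    print(verify(reencoded, manifest))  # False: the link to the bytes is broken
```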