Deepfake
30-second gist
A deepfake is a photo, video, or voice clip that looks and sounds real but was made by a computer using AI. The technology to make a passable one is now free and runs on a laptop.
The harm comes when one is presented as evidence — a politician "confessing", a relative's voice in a panicked phone call, a celebrity "endorsing" a scam. The tool itself isn't bad; the intent behind it is.
If you want more
How does it actually work?
Modern AI can read a few minutes of someone's voice, or a handful of photos of their face, and learn the patterns that make them them. It then generates new audio or video by combining those patterns with whatever script you feed in.
The word is a portmanteau of deep learning (the underlying technique) and fake. The first widely noticed deepfakes appeared on Reddit in 2017. Quality has improved every year since.
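For the curious, the early face-swap deepfakes used a pair of autoencoders that shared one encoder: the encoder learned features common to both faces (pose, expression, lighting), and each person got their own decoder. Below is a toy sketch of just the data flow, with random numbers standing in for weights a real system would train on thousands of face images; the names (`swap_face`, `encoder`, `decoder_b`) are illustrative, not from any real library.

```python
import random

random.seed(0)

PIXELS, LATENT = 16, 4  # a toy "face" is a flat list of pixel values

def rand_matrix(rows, cols):
    # Random stand-in weights; a real system learns these from data.
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(matrix, vec):
    # Plain matrix-vector multiply.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

# One shared encoder, one decoder per person.
encoder   = rand_matrix(LATENT, PIXELS)
decoder_b = rand_matrix(PIXELS, LATENT)

def swap_face(face_a):
    """Encode person A's face, then decode with person B's decoder.

    The latent code keeps A's pose and expression; B's decoder
    renders them with B's appearance -- the core face-swap trick.
    """
    latent = matvec(encoder, face_a)    # compress to shared features
    return matvec(decoder_b, latent)    # render as person B

face_a = [random.gauss(0, 1) for _ in range(PIXELS)]
swapped = swap_face(face_a)
print(len(swapped))  # same size as the input "image": 16
```

The point isn't the arithmetic; it's that nothing here needs the target's cooperation — only enough examples of their face or voice to learn from.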
Real examples
Hong Kong, 2024 · £20m fraud
A finance worker joined what looked like a routine video call with his colleagues and CFO. Every face was a deepfake. He authorised £20m of transfers before realising no real human had been on the call.
"Mum, I'm in trouble"
Voice-clone scams targeting parents have surged since 2023. Scammers harvest a child's voice from social media, then phone the parent panicked, asking for cash transfers. Police forces in three countries have issued public warnings.
What to do if you suspect one
- Pause before acting on anything emotional or urgent.
- Hang up and call back on a number you already trust.
- Ask a question only the real person would know.
- Agree a family safe-word now, before you need it.
- If money is involved, talk to one other person before transferring.
Are deepfakes always bad?
No. The same technology is used to dub films into other languages, to restore the voices of people with motor neurone disease, and to let museums "interview" historical figures. The harmful use is fraud and disinformation; the benign uses are quietly common.