Plain AI in plain English

In the wild · the device on your shelf

AI in your smart speaker

30-second gist

Your smart speaker used to do simple template matching: "play [song]" → play song. The current generation runs on large language models (LLMs). They handle messy questions better, can hold a real conversation, and remember context across multiple commands.

That power costs something on the privacy side. More of what you say is sent to the cloud for the LLM to process — wake-word detection still happens on the device, but the chat itself doesn't.
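The gap between the two generations is easy to see in code. The sketch below is a toy illustration of old-style template matching, not any vendor's actual code; the command patterns and function names are made up. A rigid template either fits an utterance or fails, which is exactly why a messy, conversational request used to get "Sorry, I don't understand."

```python
import re

# Old-style assistant: rigid template matching, entirely on-device.
# Each command either fits a known pattern or falls through.
TEMPLATES = [
    (re.compile(r"^play (?P<song>.+)$"), "play_song"),
    (re.compile(r"^set a timer for (?P<minutes>\d+) minutes?$"), "set_timer"),
]

def match_template(utterance: str):
    """Return (action, slots) if a template fits, else None."""
    for pattern, action in TEMPLATES:
        m = pattern.match(utterance.lower())
        if m:
            return action, m.groupdict()
    return None  # no template fits: the old assistant gives up here

print(match_template("Play Yesterday"))
# → ('play_song', {'song': 'yesterday'})
print(match_template("could you put on some jazz"))
# → None — an LLM-backed assistant would handle this, but in the cloud
```

The LLM replaces that `None` branch: instead of failing, the transcript is shipped to a cloud model that can interpret it. That flexibility is precisely what moves your words off the device.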

If you want more

What changed for privacy

Older smart speakers did most of their work on the device itself. The new LLM-powered modes route the voice transcript (and sometimes the audio clip itself) to the cloud, hold a context window across the conversation, and store more of what was said for personalisation. Most providers let you opt out of having your voice used for training and of long-term audio storage, but the defaults differ.

Three settings worth changing
  • Voice history. Most apps keep weeks or months of your queries; you can delete the history and turn off retention.
  • "Improve services with my voice." Usually on by default, and you can almost always opt out.
  • "Always-on" listening. Wake-word detection is local, but some "keep listening for follow-ups" features extend it. Turn it off if it bothers you.