PlainAI museum of AI

Looking ahead · the political fight

Open vs closed AI

30-second gist~30s

For most ordinary users, the open-versus-closed argument changes three things: how much it costs to use AI, how private your data stays, and who has the power to switch features off. Underneath that, it's a fight between two camps of humans: one that thinks powerful AI should be locked behind paid services for safety, and one that thinks it should be downloadable by anyone for openness.

Most news coverage frames this as US vs China. The real fault line cuts through the middle of the US too.

If you want more

What each side actually wants~1 min
  • Closed-AI camp (OpenAI, Anthropic, much of Google): the most powerful models should be served through controlled APIs so safety teams can monitor use, catch misuse, and patch problems centrally. Releasing weights is irreversible — once they're out, anyone can fine-tune them for harm.
  • Open-AI camp (Meta, Mistral, DeepSeek, much of academia): the most powerful models should be downloadable so they can be inspected, audited, fine-tuned, and used without dependence on a handful of US companies. Closed AI is a single point of control over a transformative technology.

Both have real arguments. Both also have commercial motives: the closed labs charge per use, and each open release puts competitive pressure on them.

Why this matters to ordinary users~30s

  • If open wins: AI gets cheaper, more private (you can run it locally), easier to audit, and harder for any one country to control. But also: easier to misuse, weaker safety controls, more uneven quality.
  • If closed wins: safer common defaults and faster fixes for new abuses, but a small group of companies decides what AI can and can't do for everyone.

The healthiest outcome is probably both, in tension. That's roughly where we are.