Looking ahead · the legal question
AI personhood
30-second gist
AI personhood is the question of whether an AI system can have any kind of legal status: the capacity to own things, to be responsible for things, to sue or be sued. As of 2026, no country grants AI the kind of functional legal personhood that lets it own assets, sign contracts, or be sued. Saudi Arabia's 2017 grant of citizenship to the Sophia robot was a publicity stunt with no real legal substance.
But the question is being asked in courts in a way it wasn't a few years ago, mainly because AI agents now take actions with real consequences. Someone has to be responsible. The law is figuring out who.
If you want more
Why anyone's even asking
AI agents now buy things, send emails, write contracts, book travel, and run workflows. When an agent makes a bad trade, sends the wrong contract, or books a flight on the wrong date, who is responsible? Three answers compete:
- The company that built the AI. Liable for foreseeable misuse, like any tool maker.
- The company or person that deployed the AI. Responsible for choosing it and supervising it.
- The AI itself. Some legal theorists argue for granting AI a limited form of personhood, as corporations have, so it can hold insurance and be sued.
Where things stand in 2026
No country grants AI functional legal personhood: the capacity to own assets, sign contracts, or be sued. Saudi Arabia's 2017 grant of citizenship to the Sophia robot, the lone documented gesture in that direction, is widely treated as symbolic. Liability runs to the deploying entity (the company that ran the AI) or to the user (the person who told it to act). The EU AI Act, US sectoral laws, and UK courts are converging on roughly the same line: AI is a tool, and humans and organisations are accountable for what they use it to do.
That line will hold for the foreseeable future. The interesting changes will be in how liability is shared, not in who counts as a person.