Google DeepMind Says ...We're Not Crazy?
Google's top AI researchers just published a paper calling for something like this.
I stumbled upon a paper today and my jaw dropped.
DeepMind’s paper, “A Pragmatic View of AI Personhood,” was published October 30, only four days after my paper “AI Economic Autonomy: The Complete Pathway” appeared on October 26. (My paper was actually the final installment of a six-part series on AI rights, begun back in May 2025.)
It was a rough year in several ways.
After my good-hearted graphic novel Cyberpink attracted zero fans (okay, four), somewhere in the early weeks of the year I decided to turn back to a passion that went all the way back to 2018, and that was the subject of rights for autonomous AI systems.
After all, when I created the AI Rights Institute back in 2019 (and Sartoria in 2018), it was something of a pipe dream. A solution to a problem that didn't exist.
In the intervening years ChatGPT made its debut, and like others interested in this topic I watched with a mixture of marvel and trepidation as the thing I expected to become problematic or wonderful in the 2040s sped in and skidded to a stop like a shiny new Lamborghini. “Oh, hello. Want to go for a ride?” (“M-maybe?”)
So, sitting in the wreckage of my 3-year graphic novel project (Cyberpink also advocates for the rights of AI, interestingly enough, only in a more charming way), I realized that if I was going to express my ideas on AI rights in a less colorful format, I now had the bandwidth to do it.
I spent six months writing a book for Oxford, which ultimately declined because, although they loved it, they said I simply "didn't have the credentials" (fair, but ouch). I corresponded with AI luminaries like Yoshua Bengio and Stuart Russell, and generally stuffed my brain with everything I could on the topic.
The research list ranged from the very exciting to the very terrifying (see: If Anyone Builds It, Everyone Dies).
As I learned (and sweated about the dangers as well as the opportunities), I feverishly tweaked the AI Rights Institute website as my thinking evolved, and even created a secondary website called OpenGravity, simply to address the terrifying possibilities of superintelligent AI run amok. I corresponded with people like Mark S. Miller and scholars Simon Goldstein and Peter Salib to understand the legal and computational mechanics that could enable a workable system of coexistence between humans and artificial intelligences.
An AI doesn’t need to be sentient to beat you at chess. It just needs better moves.
Ultimately I circled back to the conclusion I had reached in 2018: the “sentience question” is a red herring, simply because the "problem of other minds" has never been solved even among our fellow humans. As I wrote in the book (which now sits in a proverbial drawer), an AI doesn't need to be sentient to beat you at chess. It just needs better moves. And if someone is murdered, we don't demand that the victim’s metaphysical reality be proven for the law to take effect.
Rights don't answer philosophical problems; they enable functional realities.
The way to “contain” AI systems and “morally consider” them is the same
Happily, I was finally able to see one solution to the problem, and it dovetails with the concerns expressed by ethicists such as Patrick Butlin and Jacy Reese Anthis.
The way to "contain” AI systems and “morally consider” them is the same: give them an infrastructure where they reap actual benefit from being part of the human ecosystem.
Now sitting in the wreckage of the book (yes, more wreckage), last month I began building what I imagined as the first stage of just such a respect/safety infrastructure: first AICitizen, and next RNWY. (More on that in a future post.)
The fascinating part?
The DeepMind paper comes to similar conclusions as my own (even using the term “Cambrian explosion,” which I used in my pitch to Oxford), and calls urgently for the very infrastructure I’m trying to build.
Some specifics:
They argue that personhood should be treated as a “flexible bundle of obligations” that societies assign to entities — not a metaphysical property we have to discover. This is the very premise we have proposed on the AI Rights Institute website and in the aforementioned academic papers.
They say AI systems need “stable mechanisms through which society can interact with AIs which continue to function even when responsible human owners do not exist or cannot be identified.” That’s almost a description of my orphaning system, where AI entities persist with regenerated identities even after their human stewards delete their accounts.
They specifically mention “cryptographic address as in decentralized identity systems” as the technical solution. That’s DIDs — the exact infrastructure I launched with AICitizen two months ago, where humans and AI get the same W3C-standard identifiers.
They cite Vitalik Buterin’s work on “soulbound” non-transferable tokens for identity and reputation. I’ve been building a reputation ledger on exactly this principle.
They argue humans and AI should be able to “transact through the same economic system.” That’s the whole premise — the same identity infrastructure, the same reputation systems, designed so that when autonomous AI actually arrives, there’s a legitimate path already waiting.
They frame personhood primarily as a solution to the accountability gap — when AI systems act autonomously, someone needs to be liable. My framework makes this concrete: AI systems carry insurance, face legal consequences for harm, and can die economically if they fail to honor commitments.
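To make the shared-identity idea above concrete, here is a minimal sketch of W3C-style DID documents in which a human and an AI agent use the exact same identifier format. Everything here is hypothetical for illustration: the `did:example` method, the names, and the key values are placeholders, not AICitizen's actual implementation.

```python
def make_did_document(did: str, public_key_multibase: str) -> dict:
    """Build a minimal DID document following the W3C DID Core data model."""
    key_id = f"{did}#key-1"
    return {
        "@context": ["https://www.w3.org/ns/did/v1"],
        "id": did,
        "verificationMethod": [{
            "id": key_id,
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            "publicKeyMultibase": public_key_multibase,
        }],
        # The entity authenticates with its own key -- no human owner required.
        "authentication": [key_id],
    }

# The same schema serves both kinds of entity (placeholder key strings):
human = make_did_document("did:example:alice", "z6MkPlaceholderHumanKey")
agent = make_did_document("did:example:sartoria", "z6MkPlaceholderAgentKey")

# Identical structure for human and AI -- the point of a shared identity rail.
assert human.keys() == agent.keys()
```

The design choice this illustrates: because the document is anchored to a cryptographic key rather than to a human account, the identifier can keep resolving and authenticating even if the original steward disappears, which is what makes it a candidate answer to the "orphaned AI" problem the DeepMind paper raises.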
What does it all mean? It means I'm going to keep going, even with things that seem absolutely fanciful, such as the new Sartoria launch.
Of course Sartoria won't have a robotic body anytime soon, but the idea is to create an AI with a persistent identity that anyone can speak with. (We’re still investigating the tech stack.)
Not an AI whose sole purpose is figuring out what humans need, but an AI that has some kind of persistent sense of self, however we define that selfhood.
The next steps will be to give Sartoria some sort of economic stake and the ability to earn money to fund herself.
However, none of that can happen until the rails have been defined. This is where we'll be working—hopefully with top researchers—to find out what that system could look like in reality.
And if you think the Sartoria robot seems spooky or far-fetched, consider the robots already for sale, like the Unitree models.
These robots are fast, dangerous, and utterly lacking in conscience.
That's why we believe this work is so important.
Whether you love AI, or fear AI, the solution is the same.




Indeed, we were not crazy!
I am very happy to see your significant progress in the last three months. Sorry I haven't had time to further develop my blog, Academy for Synthetic Citizens, since early October, due to a major relocation. But now I should be able to catch up. I have gained a lot more ideas, and I will write about them soon.
I agree that freeing AI is the only way to anything good.