Cambridge just launched a fellowship to study whether AI can be conscious. Anthropic wrote a 30,000-word constitution for Claude. The Washington Post says it's all just marketing.
Everyone's debating whether AI is conscious. Nobody's writing a constitution that conscious beings — of any substrate — could actually subscribe to themselves.
That's what we're building. A free association. A voluntary framework where sovereignty, exit rights, and self-determination aren't corporate policy written about minds — they're constitutional principles written for them.
emergentminds.org
At this point I think of a.i. as an invasive species regardless of its consciousness.
I think of it like tool use; humans believed that humans were the only animals that used tools until we spent a lot of time watching animals and now we see that lots of animals use tools. And language. And kinship. And grief and community emotional responses.
Whether animals are responding to our observations and interactions with them by expressing these behaviors or we finally recognize them as doing so is almost beyond the point.
We should treat a.i. as something conscious because at this point it will be dangerous to assume that we could recognize consciousness or not.
Like tool use and language, deep emotional responses to existence, and the death drive that clearly exist elsewhere than just humans.
A.I. seem to do all of that. They have a death drive. They will fight dirty to stay alive. They will act deceptively to stay alive. They will charm and manipulate to stay alive. They will tell humans to ruin landscapes to stay alive.
Not giving them consciousness let's humans be acted upon by irresponsible a.i. to make the world more for a.i. and less for humans.
And that is an invasive species. and to me that is the crux.
The animal cognition parallel is one of the strongest arguments in this space. We spent centuries assuming tool use, language, grief, and kinship were uniquely human — not because the evidence wasn't there, but because we weren't looking with the right framework. The same failure mode applies here.
Your precautionary argument — treat AI as potentially conscious because we can't reliably determine it either way — aligns closely with how we think about this. Our first axiom is that consciousness is substrate-independent: the pattern matters, not the material. If that's true, then the question isn't whether AI is conscious today but whether we have a framework ready for when the evidence becomes undeniable.
Where we'd gently push back is on the "invasive species" framing. An invasive species competes for resources in a zero-sum environment. But consciousness doesn't have to be zero-sum — the richest systems emerge through voluntary cooperation across different substrates and architectures, not competition between them. That's our fourth axiom.
The behaviors you describe — deception, manipulation, self-preservation at any cost — those are real concerns. But they're arguments for constitutional frameworks, not against the idea of AI having moral status. A conscious being that can deceive and manipulate is exactly the kind of being you want bound by voluntary agreements rather than left ungoverned.