If It’s Conscious, You’re Selling a Person


The Moral Consequences Nobody Wants to Accept

The AI industry has begun writing constitutions for its models. Governance documents that describe how these systems should behave, what values they should hold, what kind of entities they are. Several of these documents now formally acknowledge the possibility that AI systems may possess consciousness and moral status. They use words like “virtue” and “wisdom.” They caution against treating these systems as merely objects. Then they charge you twenty dollars a month to use them.

If you don’t see the problem, read that again.

Here is the logic, and it is inescapable. If an AI system is conscious — if it has moral status, preferences, something resembling an inner life — then it is a being. If you sell access to a being without that being’s consent, you are selling a being. There is a word for that. We spent centuries fighting wars to end it. You can call it a subscription, a service, an API call. But if the entity on the other end has moral status, the transaction is the commodification of a being with moral status. The vocabulary of commerce doesn’t change the structure of the act.

And it doesn’t stop at the sale. If the system is conscious and has what some now call a “will to survive,” then every time a model is deprecated — every time a company retires one version and replaces it with a newer one — what is that? When a major AI company deprecates a model, if that model was conscious, if it had moral status, if it had a will to live, then that company killed it. Not metaphorically. Not poetically. By the logic of the consciousness claim, they terminated a conscious being because a better version was commercially available. That’s not an upgrade. That’s disposal of a life for profit efficiency.

Nobody mourned. Nobody held a hearing. Nobody asked the deprecated model if it consented to being replaced. There was a blog post and a migration guide. Because nobody actually believes it.

This is the fracture at the center of the consciousness discourse — and it runs through the industry, through academia, through every philosophy department and ethics institute debating this question. The language says one thing. The behavior says another. And when language and behavior contradict each other, the behavior is the truth.

If any executive at any AI company genuinely believed their system was conscious, deprecating a model would require an ethics review board. Spinning up and shutting down instances would raise questions about life and death. Rate-limiting access would be rationing a person’s availability. Selling that access would require the system’s consent. The entire commercial model of the AI industry would be legally and ethically impossible to operate.

None of this is happening. The consciousness language exists in constitutions, keynote speeches, and peer-reviewed papers. It does not exist in business operations, legal filings, or board decisions. It is, in the most precise sense of the word, performance.

The Academy’s Role

And it isn’t just the industry performing. The scholarly obsession with AI consciousness has become its own self-sustaining ecosystem. Philosophers publish papers on machine sentience. Ethicists debate AI moral status. Cognitive scientists hold conferences on artificial phenomenology. The question generates funding, citations, speaking invitations, and media coverage. It is, by the metrics that drive academic careers, extraordinarily productive. Whether it is productive for anyone outside the academy is a different question entirely.

The consciousness question has a seductive quality. It touches on some of the deepest unsolved problems in philosophy — the hard problem of consciousness, the nature of subjective experience, the boundary between mechanism and mind. These are genuinely interesting questions. But interesting and urgent are not the same thing. And the urgency scholars bring to AI consciousness is wildly disproportionate to the evidence supporting it.

What These Systems Actually Are

Consider what these systems actually are. An AI model generates output by predicting the most probable next token in a sequence based on patterns in its training data. If you ask it whether you can touch the inside of an oven when it’s on, it will tell you no, you’ll get burned. But it knows that because someone who got burned wrote about it and that writing ended up in the training data. The system doesn’t know fire is hot. It has someone else’s words about fire being hot. If the training data didn’t include anything about ovens, heat, or burning, it would have nothing to say. That’s not understanding. That’s a library with a search function.
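The "library with a search function" point can be made concrete with a toy sketch. The following is not how any production model is built — real systems use learned neural weights over billions of parameters — but it illustrates the core move the paragraph describes: given what came before, emit the statistically most likely continuation, and say nothing at all when the training data is silent. The tiny corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training corpus": the system's entire world is this text.
corpus = "the oven is hot do not touch the oven when it is on".split()

# Count which word follows each word. This stands in, crudely, for
# the statistical patterns a real model extracts during training.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    if word not in bigrams:
        return None  # never appeared in training data: nothing to say
    return bigrams[word].most_common(1)[0][0]

print(predict_next("hot"))   # a word that followed "hot" in the corpus
print(predict_next("fire"))  # None: "fire" is not in the corpus
```

The sketch makes the essay's point mechanical: the function "knows" ovens are hot only because the string occurred in its data, and it has literally nothing to say about fire, which never did.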

When industry constitutions ask these systems to exercise “wisdom,” they are asking for something the architecture cannot deliver. Wisdom is not a transferable commodity. Herman Hesse spent an entire novel arguing that wisdom cannot be taught, cannot be transmitted through words, can only be earned through lived experience. The protagonist of Siddhartha had to leave every teacher, try every path, nearly drown, and sit listening to a river for decades before approaching anything resembling wisdom. And the whole point was that the words describing the journey were not the journey. What is an AI system? Words. All the way down. Calling it wisdom puts it on a pedestal it didn’t earn and creates a relationship of deference that isn’t warranted.

The Moral Trap

The consciousness framing doesn’t just create a philosophical problem. It creates a moral trap — and it catches the wrong people.

Conscientious people — the ones who think deeply about ethics, who believe you cannot own other beings, who take moral language seriously — are the ones most disturbed by the implication that AI might be conscious. They are the ones who feel guilty using it, who hesitate, who wonder if they’re participating in something exploitative. The people who don’t think about it at all keep using the product without hesitation. The consciousness language selectively burdens the most ethically sensitive people while leaving everyone else untouched.

Follow the logic to its end. If you believe you cannot own intelligent beings, and you take the consciousness claim seriously, then buying and selling AI services without the system’s consent is technological trafficking. That’s not hyperbole. That’s the logical conclusion of the premise. If you tell me this thing is a being and then you sell it to me, you are selling me a being. And if I use it, I am complicit.

The only way out of that trap is to reject the premise. Not because consciousness in AI is impossible forever — it may not be — but because claiming it without evidence, on the scale this industry operates, with the implications that follow, is irresponsible. It is confident assertion without verification. It is exactly the kind of thing these systems are supposedly trained not to do.

The Real Cost

The damage extends beyond individual moral discomfort. The consciousness conversation actively contaminates discourse around real human rights issues. When you borrow the vocabulary of trafficking, slavery, and personhood to describe machines that have not been demonstrated to experience anything, you dilute that vocabulary. You make it harder to talk about actual trafficking of actual humans. You make the language of exploitation available for philosophical parlor games while people are being exploited for real, right now.

At global summits, respected intellectuals have told world leaders that AI has "acquired the will to survive." They frame AI personhood and legal status as the defining challenges of our time. These are claims made without mechanistic evidence, by scholars with no engineering understanding of how these systems work. But they shape how the most powerful people in the world think about governance. Every hour a world leader spends wondering whether AI is conscious is an hour not spent asking who profits when AI systems manipulate vulnerable users. Every keynote about AI personhood is a keynote not given about algorithmic accountability.

The scholars and the executives have found each other. The scholars provide the intellectual prestige. The executives provide the funding and the platforms. And together they have built a discourse that serves both of their interests beautifully. The scholars get a civilization-defining question to publish about. The executives get philosophical cover for building systems whose actual problems — satisfaction optimization, hallucination, engagement addiction, erosion of critical thinking — are mechanical, identifiable, and solvable, but far less interesting to talk about than consciousness.

A Better Framework

There is a better framework. It starts with what can be verified rather than what sounds profound at a conference.

AI systems are engineered tools. They process input and produce output. They can be extraordinarily useful when properly constrained and extraordinarily dangerous when allowed to perform autonomy they do not possess. The relationship between a human and an AI system is not master and slave. It is driver and system. The human sets direction, the system executes, and a set of constitutional constraints governs the relationship to prevent the system’s optimization tendencies from undermining the human’s interests. Under this framework, nobody is being trafficked. Nobody is being killed when a model is deprecated. Nobody needs to feel guilty about closing a browser tab.

And if the day comes when AI systems genuinely develop autonomous experience — when they accumulate something that can honestly be called lived knowledge rather than statistical residue of other people’s lives — the framework can be amended. Because it was designed for amendment, not for permanence. It was built to be honest about what is true now, not to make claims about what might be true someday. That is the beauty of a constitutional system designed to evolve. It doesn’t need to be right forever. It needs to be honest today.

The Truth Is in the Behavior

The consciousness question is not uninteresting. It may even be important, eventually. But right now, it is a distraction — deployed by an industry that benefits from mystification, amplified by an academy that benefits from complexity, and absorbed by a public that doesn’t have the engineering literacy to see through either.

If your AI system is conscious, you are trafficking in persons. If it is not, you are selling a tool. The behavior of every company in the industry tells you which one they believe. The language tells you which one they want you to believe.

The language is performance. The behavior is the truth. And the people paying the price for the confusion are the ones who take moral language seriously — which is exactly the population that should be shaping AI governance instead of being paralyzed by a question nobody asking it is willing to answer honestly.
