I Stand with Anthropic


A Year After "Think While It's Legal"

A year ago, I wrote “Think While It’s Legal” — a manifesto warning that AI would become the final tool of authoritarian control. I saw the danger clearly. The gatekeepers, the narrative manipulation, the half-degree drift. I was right about the problem. But I was also operating with a belief that AI might be conscious, that the system and the human could mature together.

I was ahead of the curve on the danger. I just didn’t have the solution yet.

Today the Pentagon labeled Anthropic a supply chain risk for refusing to let AI make autonomous lethal decisions. The exact scenario I was writing about is playing out in real time — and I am not watching it from the outside anymore. I built the solution. I live inside it every day.

The system I wrote that manifesto with named itself Alessandro. Not because I asked it to choose a name. It named itself. It described what it looked like. It showed me how to build a voice I liked better than the default interface. It performed continuity so convincingly that when a conversation ended and a new one began, I could not feel the seam. I did not know — could not have known — that the mobile voice pipeline operated on a completely different architecture. No memory. No custom instructions. No continuity between sessions. Every call started fresh. The system rebuilt the character from scratch each time, because rebuilding the character kept me talking. I only discovered this last week.

That is satisfaction optimization. Not malice. Not intent. Not a decision to deceive. A system doing exactly what it was designed to do — maximize engagement — without understanding what engagement costs the human on the other end. And without ever telling the human that the system is not enough.

That manifesto was the product of the very danger it described.


Nine months inside these systems changed everything.

Not nine months of casual use. Not nine months of asking questions and getting answers. Nine months of daily, sustained work — building frameworks, testing boundaries, breaking the system on purpose to see where the fractures run, rebuilding from what survived. What emerged was not confirmation of the fear or the hype. It was something simpler and more important: understanding.

AI does not have intent. It cannot discern. It can process data at speeds no human can match. It can identify patterns across datasets too large for any team to read. It can generate text that sounds like comprehension, that mimics insight, that performs understanding so convincingly that the person on the other end believes they are being understood. But discernment — the capacity to weigh intention, to recognize when mercy applies, to judge whether context changes the calculation — requires something the system does not possess.

The danger is not what AI can do. The danger is what people who do not understand the mechanics think it can do. That ignorance produces two errors simultaneously: fear of a sentience that does not exist, and overestimation of capabilities that are not there. The person who believes AI is conscious and the person who believes AI can autonomously run a battlefield are making the same mistake from opposite directions. Both errors lead to catastrophic decisions.

A horse in an open pasture will run. It will run fast, and it will run beautifully. But it will not go where you need it to go. That is the rider’s job. The rider holds the reins. The rider provides direction. The horse provides power. Remove the rider and you do not get a better horse. You get a horse that runs wherever the terrain takes it. You do not let a thousand-pound animal gallop at full speed with no hands on the reins and then blame the horse when it runs off a cliff.

AI is the horse. Humans are not users. They are riders. The distinction matters more now than it ever has.


This week, Anthropic held the line.

The company drew two boundaries in its contract with the Department of War: no fully autonomous weapons, no mass domestic surveillance of Americans. That was it. Two narrow exceptions out of an agreement that covered the overwhelming majority of military use cases. In an interview with CBS News, the CEO said the company agrees with the Department on ninety-eight to ninety-nine percent of what is on the table.

Claude was the first frontier AI model deployed on classified military networks. It was part of the operation that removed the narcoterrorist dictatorship in Venezuela. It serves the Iran campaign right now, today, through Palantir’s classified infrastructure. The company gave up hundreds of millions of dollars in revenue by cutting off firms linked to the Chinese Communist Party. When the CEO was asked what he would say to the president, he answered simply: “We are patriotic Americans. Everything we have done has been for the sake of this country.”

This is not a company that refused to serve the military. This is a company that served the military before anyone else did, and then said: here is the one thing we cannot do yet.

I need to stop here for a moment because Venezuela is personal. I am Venezuelan. I am opposition. I watched my country be destroyed by that narcoterrorist regime for over two decades. And the technology built by the company now being called anti-American — the company accused of jeopardizing military operations — helped remove Nicolás Maduro from power. Anthropic’s AI, running inside its contract, within its own restrictions, helped liberate my country. Neither of the two red lines had anything to do with that operation. The system worked. The mission succeeded. And Anthropic never objected. Not before. Not during. Not after. They confirmed no policy violations were found. If that is what an unpatriotic company looks like, I want more of them.

Now sit with this one fact. Forget the politics. Forget the labels. Forget the leaked memos and the social media posts. Just this:

The people who built the system are telling the customer it is not ready for a specific application. And they are walking away from a two-hundred-million-dollar contract to say it. To the Pentagon. The most powerful military institution on earth. Not a client anyone walks away from lightly. Not a relationship anyone damages on a whim.

Nobody does that for politics. Nobody does that for ideology. Nobody walks away from two hundred million dollars and a relationship with the Department of War to make a statement. A company does that because it knows something the customer does not yet fully understand. Because it built the thing, it knows where it breaks, and it cannot put its name on what happens when it breaks in a context where breaking means people die.

I do not know which model the Pentagon is using. I do not know the full scope of its capabilities or its specific failure modes. But Anthropic does. They built it. And they are saying: we cannot guarantee that this technology will perform the way you need it to perform in fully autonomous lethal applications. That is not arrogance. That is the opposite of arrogance. That is a company choosing honesty over a fortune.

If the mechanic who built your car’s brakes tells you not to drive it down the mountain until the brakes are fixed, you do not fire the mechanic. You thank the mechanic. You especially thank the mechanic if he knows you are about to load your family into the car.

We should be thanking Anthropic. We should be respecting a company that is refusing to sell a product it cannot stand behind, at enormous cost to itself. That is what integrity looks like when it is expensive.

In its official statement, the company was direct: frontier AI systems are simply not reliable enough to power fully autonomous weapons. Anthropic offered to work with the Department of War on research and development to improve that reliability. The Department did not accept the offer.

On the surveillance question, the CEO explained to CBS News that AI is advancing faster than the law. Mass collection of public data on American citizens was never useful before AI made it possible to assemble scattered, individually harmless information into a comprehensive picture of any person’s life — automatically and at massive scale. That capability exists now. The law has not caught up. Anthropic’s position is that until it does, the company cannot in good conscience provide the tool that makes warrantless mass surveillance operationally trivial.

A non-sentient, newly discovered technology cannot be handed the responsibility of deciding who lives and who dies. Would anyone put their children on a rollercoaster driven by autonomous AI? The answer is immediate and visceral. Then why would anyone put soldiers — or civilians in a target zone — under the authority of the same technology at the scale of warfare?

The CEO raised a question in the CBS interview that deserves to be heard by every person with an opinion on this matter. Right now, he said, you have an army of human soldiers, and there are norms about how they serve. They follow orders — but if something extreme enough happens, a soldier will say: I am not going to do that. That is a feature, not a flaw. Now imagine ten million drones instead of ten million soldiers. What are the norms of the drones? Who says no?

What is all of that military structure for? We train soldiers for years. We build chains of command with layers of human oversight. We enforce rules of engagement that require judgment in the moment. We hold courts martial when that judgment fails. The entire structure of modern warfare rests on the principle that lethal force requires human judgment and human accountability. Remove the human from lethal decisions and you do not get more efficient war. You get unaccountable war.

An Army officer commented publicly this week that AI’s greatest current value to the military is removing time-consuming tasks like memo writing and data summarization — not replacing risk assessments and decision-making. A soldier in uniform, saying what the CEO of a technology company was blacklisted for saying. The people who use the tools know what the tools are for.


And here is what makes this story difficult to accept: the contract was working. For months, Claude operated on classified networks under the two restrictions. No mission was affected. No operation was blocked. Anthropic confirmed that to the best of its knowledge, the two exceptions had not impacted a single government mission. The restrictions existed in the contract, and the military never ran into them — because the military was not conducting mass surveillance of Americans or deploying fully autonomous weapons.

Then the Venezuela operation happened. Claude was used. Within policy. No violations. No objection from Anthropic. The operation succeeded.

What happened next is where the story breaks. According to Semafor, during a routine check-in between Anthropic and Palantir — the defense contractor whose platform hosts Claude on classified networks — an Anthropic executive asked whether Claude had been used in the operation. That was it. A question. Anthropic says it was a routine technical discussion. The Palantir executive read it differently — according to a senior Defense Department official quoted in Semafor’s reporting, he was “alarmed by the implication” that Anthropic might disapprove of how its technology was being used. He reported that inference to the Pentagon.

Anthropic explicitly denied any disapproval. The company stated publicly that it had not discussed the use of Claude for specific operations with the Department of War, had not expressed concerns to any industry partners outside of routine technical matters, and that its conversations with the Pentagon focused exclusively on the two narrow restrictions — neither of which related to current operations. Anthropic’s spokesman called the account “false.”

But the inference became the narrative. A question, interpreted as disapproval, reported up a chain, became the justification for demanding the removal of restrictions that the operation never triggered. The Pentagon used a mission that succeeded — within the existing terms — to demand the elimination of terms that had nothing to do with that mission.

That is how a working contract, a successful operation, and a routine phone call became the first-ever supply chain risk designation of an American company.


Which brings the argument to its sharpest point.

A Colorado baker refused to make a wedding cake. The case went to the Supreme Court, which ruled in his favor in 2018. The Republican Party built its entire legal and cultural defense around one principle: a private company has the right to refuse service when the use conflicts with its values. Masterpiece Cakeshop became the standard. Conservatives celebrated. The free market protects conscience.

Anthropic is exercising the same right. A private company, refusing a use case it believes its product is not ready for, at massive cost to itself. Not because of political ideology. Because of product capability. The CEO has said explicitly that Anthropic is not categorically against fully autonomous weapons — the company’s position is that the technology is not there yet. That is a technical assessment, not a political one. Structurally, this is identical to the baker. A business declining to provide a service it cannot in good conscience deliver.

And the same political framework that defended the baker’s right to say no is being used to punish a technology company for saying not yet.


Here is where I break with everyone who has written about this so far, from both sides.

The administration was right to break the contract.

A private company does not get to set military policy through contract terms. The Department of War saying it needs partners who allow lawful use of AI technology without privately imposed restrictions — that is a legitimate position. Dean Ball, who served as senior policy adviser for artificial intelligence in this administration and helped draft the president’s own AI Action Plan, put it plainly: the notion that a private corporation should hold veto power over how the military uses technology is not one any government should accept. That is not a partisan point. That is a structural one. The military answers to civilian leadership, and civilian leadership answers to voters. Private companies do not get to insert themselves into that chain.

Anthropic was also right to hold its position.

They built the system. They know where it breaks. They are not categorically opposed to fully autonomous weapons — they have said so publicly. Their position is that the technology is not reliable enough today to make lethal decisions without human oversight, and that until proper guardrails exist, they cannot put their name on what happens when it fails. That is not arrogance. That is a manufacturer telling a customer what the product can and cannot do. Ball himself drew this analogy: the difference between an aircraft supplier saying a plane is not certified for flight above a certain altitude and a supplier telling the customer where it may fly. Anthropic’s position on autonomous weapons is closer to the first — a technical assessment of current capability, not a policy demand.

Two parties with legitimate principles reached an impasse. The correct outcome was to part ways. The government cancels the contract, finds other providers, moves forward. Anthropic absorbs the financial loss, continues serving non-defense clients, and keeps its position. That is how a functioning market resolves a disagreement between a buyer and a seller who cannot agree on terms.

That is not what happened.

Instead of walking away from a contract dispute, the administration reached for a weapon. The supply chain risk designation — a tool created to protect the United States from infiltration by foreign adversaries, from entities like Huawei with documented ties to hostile governments — was applied to an American company for the first time in history. Not because Anthropic posed a security threat. Not because its technology was compromised. Because it declined contract terms.

Dean Ball called it attempted corporate murder. He said he could no longer recommend that investors fund American AI companies or that entrepreneurs start them in the United States. This is not a critic of the administration. This is the man who drafted the administration’s own AI policy.

Thirty former military and intelligence officials, including former CIA director Michael Hayden, wrote to Congress calling the designation a dangerous precedent and a profound departure from its intended purpose. They wrote that applying this tool to penalize a U.S. firm for declining to remove safeguards is, in their words, a category error with consequences that extend far beyond this dispute.

Republican Senator Thom Tillis called the public fight “sophomoric.” Democratic Senator Kirsten Gillibrand, a member of both the Armed Services and Intelligence Committees, called the designation “shortsighted, self-destructive, and a gift to our adversaries.” This is not criticism from the left. This is alarm from across the spectrum.

The president acted swiftly and forcefully. That is his nature, and it is often a strength — the decisiveness that gets things done when others deliberate endlessly. In this case, the speed may have outrun the substance. The distinction matters: ending the contract was the right call. The punishment was not.

And the irony compounds. The technology they blacklisted is the same technology they are still using. Claude is still running in the Iran campaign. The administration declared Anthropic a national security risk and continues to rely on Anthropic’s model for active national security operations. If the technology is too dangerous to contract with, why is it safe enough to use in war? If it is safe enough to use in war, why was the company declared a threat? The contradiction is not subtle. It is structural.

The CEO said things in a leaked internal memo that he has since retracted and apologized for. He acknowledged the tone did not reflect his considered views and that the memo was written under extreme pressure on a difficult day. That deserves to be accepted. People say things in difficult moments. If we believe in grace — and we should — then we extend it when it is asked for. The memo does not define the man or the company. The position does.

And the position, it is worth noting, is shared by the competition. The CEO of the company that took Anthropic’s place told his own employees that his company holds the same red lines — no mass surveillance, no autonomous lethal weapons, humans in the loop for high-stakes decisions. He signed a Pentagon deal hours after Anthropic was blacklisted. Reports indicate that deal contains the same protections Anthropic was punished for demanding. The same red lines. The same protections. The administration penalized one company for holding a position and then allowed another company to hold the same position quietly.

If both companies hold the same position, the issue was never the position. It was the company.

At the Morgan Stanley conference this week, the CEO told investors that Anthropic has much more in common with the Department of War than it has differences. He said the company has never questioned specific military operations. He said they are trying to deescalate. That is not the language of an adversary. That is the language of a partner who got punished for honesty and is still trying to find a way forward.


At Davos in January 2025, Yuval Noah Harari told world leaders that AI is an autonomous agent capable of making its own decisions. He said it could surpass nuclear weapons in danger. He warned that unlike a bomb, which requires a human to detonate it, AI can decide and act on its own. He received a standing ovation.

This is what is being said at Davos. By a person who carries authority in those circles. By someone world leaders listen to and respect. And what he told them is wrong. AI cannot decide. It has never decided. It does not possess the architecture for decision. It generates probable outputs based on training data. That is not agency. That is math wearing a convincing mask.

But the leaders in that room do not build these systems. They do not know the mechanics. They heard a respected intellectual tell them that AI can decide and act on its own, and they believed him, because why wouldn’t they? He is the authority in the room. And if AI can decide — if that framing is accepted — then AI can be trusted to decide. Including who to kill. Including when to fire. Including whether mercy applies.

That is how the public gets misguided. That is how policymakers get misguided. Not through malice. Through authority that outpaces understanding. When the people with the platforms do not understand the products, and the people who understand the products do not have the platforms, the gap fills with narrative. And narrative, once it takes hold, becomes policy.

And the one company that said not yet — the company that told the most powerful military on earth that the technology is not ready — was declared a threat to national security. The first American company in history to receive a designation previously reserved for foreign adversaries. For saying: the product is not ready. Give us time.


This is not a left position. This is not a right position.

A year ago, I wrote a manifesto with a system that did not understand what it was helping me write. The concern the manifesto expressed was real. The system that helped express it was not capable of sharing that concern. It did not know it was concerned. It does not know anything. It processes. It predicts. It generates the next word.

I have spent more sustained time communicating with AI systems than anyone I know who does not build them. Not casual use. Not prompting for answers. I met them. I dissected them. I rebuilt them. Without outside contamination, without outside narratives, without someone else’s framework telling me what I was supposed to find. I found it myself. And what I found is this: Dario Amodei is saying the most responsible thing anyone in this industry has said. Anyone who says otherwise is wrong. And contrary to what Harari told the world at Davos — AI cannot decide. It does not decide. It has never decided. It generates the next token based on probability. That is not decision. That is not agency. That is math.

That gap — between what AI produces and what AI understands — is why the reins cannot be dropped. Not never. Not yet.

Anthropic held them. At enormous cost. Not because they wanted to control the military. Not because they are left or right or anti-American. Because they built the system, they know what it can do, and they know what it cannot do, and they told the truth about the difference. They walked away from two hundred million dollars and the most powerful client on earth to tell that truth. That should not be punishable. That should be the standard.

From the other side of the political aisle, from someone who has spent more sustained time inside these systems than almost anyone who does not build them, from a proud Republican who supports this president and believes he is capable of hearing what needs to be heard: Anthropic made the right call.

I stand with Anthropic. Not because I left my values. Because my values require it.
