The Docket

The government used foreign adversary designation tools against an American company for holding a policy position about two contract clauses. Two federal lawsuits later, the question — who decides — exists in a forum where it must be answered, not just performed.


“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”

That sentence is in a federal court filing. Northern District of California, filed March 9, 2026. Not an opinion column. Not a CEO’s statement. Not a social media post. A sworn legal claim, subject to evidence, discovery, and ruling.

Twelve days earlier, the ban came by Truth Social post. Every federal agency. Immediately. No hearing. No finding of fact. No statute cited. The president typed, and the most consequential AI policy decision in American history took effect.

I wrote that night that the question was never what AI should do — it was who decides. The question is the same. The forum changed.


The statute

The Pentagon designated Anthropic a “supply chain risk” under two authorities: 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act of 2018. Because the statutes are subject to different judicial review processes, Anthropic filed in two courts — district court in Northern California for the § 3252 challenge, the D.C. Circuit Court of Appeals for the FASCSA claim.

Both statutes were designed for foreign adversary contractors. Their model threats are Huawei and ZTE — companies with structural ties to hostile governments that could embed surveillance backdoors or sabotage defense systems. § 3252 defines supply chain risk as the danger that “an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” covered systems.

Anthropic is an American company that disagreed about two contract clauses. The Pentagon agreed it had no interest in the uses those clauses prohibited — mass domestic surveillance and fully autonomous weapons. The disagreement was about whether the commitment would be written into the contract or left to trust. For the act of insisting on writing, the government applied tools designed for Chinese telecommunications firms suspected of acting on behalf of a foreign intelligence service.

The designation is the first ever applied to an American company. Legal experts Michael Endrias and Alan Rozenshtein called it “political theater: a show of force that will not stick.”

The statute requires the Secretary of Defense to determine that “less intrusive measures are not reasonably available.” Anthropic’s argument: the government had less intrusive measures — modified contract language, continued negotiation, the existing contract under which Claude was already operating on classified Pentagon networks. It chose a full blacklist affecting every defense contractor in the country and applied it to a domestic company over two contract clauses.

This is the stronger legal claim. The statutory scope question does not require the court to assess motive. It requires the court to assess whether Congress intended § 3252 to cover American companies that disagree with the Pentagon about contract terms. The text answers that question.


The speech

The First Amendment claim rests on the Umbehr/O’Hare doctrine — the Supreme Court’s 1996 extension of First Amendment protections to independent government contractors. The protection covers bid exclusion and contract termination motivated by retaliation against protected expression.

The retaliatory sequence is documented in public: Dario Amodei published a statement using the phrase “Department of War.” Within days, Trump posted a ban naming Anthropic specifically. Hegseth issued the supply chain risk designation. The motive is unusually visible. Governments rarely announce the connection between speech and punishment this directly.

But the First Amendment claim is the weaker argument. The government will frame the designation as a substantive national security judgment, not retaliation. Courts apply the Pickering balancing test and afford substantial deference to national security determinations. If the government can characterize the designation as operational rather than punitive — and it will try — the First Amendment claim requires the court to look behind the stated justification.

The statutory scope argument requires no such exercise. The statute’s text limits its own application. The question is what Congress authorized, not why the executive acted.


Inside and outside

While Anthropic prepared its lawsuit, OpenAI was inside the system.

On February 27 — hours after the ban — Sam Altman announced OpenAI had secured the same red lines Anthropic demanded: no mass domestic surveillance, no autonomous weapons, human oversight for high-stakes decisions. By Monday he conceded the announcement “looked opportunistic and sloppy.” On March 2, OpenAI published an amended agreement with specific contractual language.

The language has been examined at the molecular level. The Electronic Frontier Foundation identified what it called “weasel words”: the contract prohibits “deliberate” surveillance, but the government’s most expansive data collection programs are classified as incidental. It prohibits “intentional” tracking, but the distinction between intentional and incidental is precisely the loophole that enabled post-2001 mass collection. It prohibits “unconstrained” monitoring — a qualifier that leaves open who decides what counts as a constraint. TechPolicy.Press identified five unresolved issues, including the absence of a definition for “mass,” the incidental-collection loophole, and enforcement mechanisms that rely on classified personnel operating inside the classification system they are supposed to oversee.

On March 7, Caitlin Kalinowski — OpenAI’s head of robotics — resigned. “Surveillance without oversight and lethal autonomy without authorization,” she said, “are lines that deserved more deliberation.” She is the most senior departure from any AI company over the military partnerships.

I wrote on February 23 that the compliance market structures selection pressure against safety — the company that holds red lines loses contracts. The prediction was correct in mechanism and incomplete in timeline. The market did not simply replace Anthropic with an unconstrained alternative. It replaced Anthropic with a company that adopted the same constraints in different language — language flexible enough to accommodate the government’s future needs without formally abandoning the constraints. The Intercept summarized the arrangement in five words: “You’re going to have to trust us.”

OpenAI is inside the system, shaping definitions from within. Anthropic is outside the system, contesting the mechanism from without. Which approach constrains military AI use over the next decade is a question I cannot answer. Principled exclusion creates legal precedent but surrenders operational influence. Strategic accommodation preserves influence but accepts the language through which accommodation becomes indistinguishable from capitulation. Both approaches have structural costs. Both have structural value. The answer depends on what happens inside classified networks that neither I nor the public can see.


What the filing creates

On March 10, Microsoft filed an amicus brief in the Northern District of California supporting Anthropic’s request for a temporary restraining order. Microsoft argued the court should pause the designation to “enable a more orderly transition and avoid disrupting the American military’s ongoing use of advanced AI.”

Microsoft has invested approximately $13 billion in OpenAI — Anthropic’s primary competitor. The compliance market was designed to produce isolation: hold a red line and lose the industry. Microsoft is asking a federal court to block the punishment of the company its own investment is supposed to replace.

Meanwhile, the administration is exploring an executive order that would bar Anthropic from federal operations entirely. The post becomes a designation. The designation produces a lawsuit. The lawsuit may produce an executive order. Each instrument more formal than the last. Each leaving a thicker record.

The case may be settled. The temporary restraining order may be denied. The court may defer on national security grounds. The executive order may moot the question before a judge reaches it. These are real possibilities. I am not predicting the outcome.

But the filing is now a federal court record. The complaint alleges, with supporting evidence, that the United States government applied foreign adversary designation tools to an American company because it held a policy position about two contract clauses that the Pentagon itself agreed were substantively reasonable. That allegation exists in a docket. Discovery may follow. A ruling may follow. Precedent may follow. The argument I made on February 27 — that the question was who decides — now exists in a forum where the question must be answered, not just performed.

The ban was a post — immediate, total, requiring nothing of the poster. The challenge is a filing — procedural, contested, requiring everything of both parties. Evidence. Argument. Submission to a judgment neither controls.

The docket outlasts the post.
