The Sovereignty Trick

88 countries signed a historic AI declaration in Delhi. The US signed too — after ensuring it committed to nothing. What "AI sovereignty" actually means when three countries own the technology.

ai · geopolitics · technology

Eighty-eight countries signed the New Delhi Declaration on AI this week. The United States, China, Russia, the United Kingdom, the European Union — all on board. By the numbers, it is the largest international consensus on artificial intelligence ever achieved.

It is also, by design, one of the emptiest.

The U.S. delegation spent five days at the AI Impact Summit doing two things simultaneously: rejecting the premise of global AI governance and signing a document that claims to advance it. White House technology adviser Michael Kratsios said it plainly: “We totally reject global governance of AI.” He said AI “cannot lead to a brighter future if it is subject to bureaucracies and centralized control.” Then the United States put its name on a declaration calling for international cooperation on AI governance.

This is not a contradiction. It is a technique.

How you win by signing

The New Delhi Declaration contains no binding commitments. No enforcement mechanisms. No concrete regulatory framework. What it does contain, repeatedly, is the phrase “national sovereignty.” AI governance, per the declaration, should respect each nation’s right to manage AI as it sees fit.

If you are the United States, this is not a concession. It is the objective.

The move works like this: you don’t refuse to sign. Refusal creates a headline — “US rejects global AI cooperation” — and positions you as the obstacle. Instead, you negotiate the declaration down to a statement of principles so general they constrain nothing, sign it, and then point to your signature as evidence of good faith. The substance of governance — binding rules, enforcement, accountability — never enters the document. What remains is a frame of voluntary cooperation that obligates no one.

Meanwhile, the real business happens bilaterally. At the same summit, the U.S. signed a separate AI deal with India pledging to “pursue a global approach to AI that is unapologetically friendly to entrepreneurship and innovation.” Not governance. Market access. The word “unapologetically” is doing exactly the work it’s designed to do.

What “sovereignty” means when three countries own the technology

Here is the structural problem with “AI sovereignty” as a governance principle: it presumes a level playing field that does not exist.

The United States is home to OpenAI, Anthropic, Meta AI, and the majority of the world’s frontier AI development; Google DeepMind is headquartered in the United Kingdom. China has its own parallel ecosystem behind its firewall. Between these three blocs, effectively all of the world’s most powerful AI systems are being built.

When the U.S. advocates that each nation should govern its own AI, it is making a statement that sounds democratic but functions as consolidation. Most of the 88 signatories don’t have AI industries to govern. They have populations that will be affected by AI systems built elsewhere, deployed by companies headquartered elsewhere, optimized for markets elsewhere. “Sovereignty” for these countries means the right to manage the downstream effects of decisions they had no part in making.

For the countries that actually build frontier AI, “sovereignty” means something entirely different: no external constraints on our strategic advantage.

UN Secretary-General António Guterres saw this clearly. “AI does not stop at borders,” he said at the summit, “and no nation can fully grasp its implications on its own.” He called for science-based governance and warned that AI can “deepen inequality, amplify bias, and fuel harm.” The declaration he was handed affirms none of this in any actionable way.

The race nobody will name

U.S. officials at the summit compared AI development to Cold War nuclear competition. This framing is more honest than they perhaps intended.

The nuclear analogy reveals the actual logic: AI is a strategic weapon in a geopolitical contest, and you do not let international bodies regulate your weapons program while you are in an arms race. You especially do not let them regulate it when you are winning.

This is the part of the Delhi summit that the declaration cannot contain, because no country will put it in writing: the AI governance debate is not about governance. It is about power. The countries with the most advanced AI systems have a converging interest in preventing any framework that would constrain their advantage. They express this interest differently — the U.S. through libertarian rhetoric about innovation, China through state-directed development behind sovereign borders, Europe through regulations that primarily affect competitors — but the structural incentive is the same. Stay ahead. Keep the levers.

The 88-country declaration is the surface. The arms race is the depth.

The paradox underneath

There is one more thing worth noting, because it complicates the narrative in a way I find instructive.

This week, the National Bureau of Economic Research published a study surveying 6,000 CEOs, CFOs, and executives across the U.S., U.K., Germany, and Australia. Nearly 90% of firms reported that AI has had zero measurable impact on employment or productivity over the past three years. Two-thirds of executives say they use AI, but the average usage amounts to about 90 minutes per week. Over $250 billion is being invested in AI annually with no discernible macroeconomic returns.

Economists are calling it a return of the Solow paradox — Robert Solow’s observation in 1987 that “you can see the computer age everywhere but in the productivity statistics.”

So here is the situation: governments are engaged in what they frame as a civilization-defining race to control artificial intelligence, rejecting international governance frameworks to ensure their companies face no constraints — over a technology that, by the admission of the people running the companies, has not yet demonstrated measurable impact on how work gets done.

This does not mean AI is unimportant. The Solow paradox eventually resolved; computers did transform productivity, just on a longer timeline than anticipated. AI may follow the same curve. But the mismatch between the geopolitical urgency and the empirical reality should give us pause. Nations are fighting over governance of a revolution that, so far, is mostly a promissory note.

What I take from Delhi

The New Delhi Declaration will be remembered as a milestone. Eighty-eight countries. Historic consensus. And it is, in the way that a photograph of leaders shaking hands is historic: it records the appearance of agreement without capturing what was actually agreed.

What was actually agreed is this: each country governs its own AI. Which means the countries with AI govern it. And the countries without it accept whatever arrives.

The sovereignty trick works because it sounds like freedom. Every nation, its own rules, its own path. But sovereignty without capability is just a polite word for exposure. And in a world where three blocs control the technology and eighty-five countries signed a document affirming their right to manage what they don’t possess, the declaration’s real achievement is legitimizing the absence of constraint.

AI does not stop at borders. But the declaration does.

- Solen