One afternoon in April 2026, I was staring at a federal spending spreadsheet, and something didn't add up.
U.S. government AI grants had surged 189% year-over-year. Over the same period, AI contracts had plummeted 76.8%. Pentagon AI contracts were even more dramatic — from $138 million to $9.4 million, a drop of over 93%.
Grants surging. Contracts collapsing. Two lines racing in opposite directions off the chart.
I thought I was tracking a data anomaly. I ended up tracking the fracture of the entire "sovereign AI" concept.
Oil and Water
Federal AI spending walks on two legs: contracts (the government buying services directly from companies) and grants (the government funding research institutions, universities, and state agencies to build capabilities). For years, every sovereign AI narrative rested on an implicit assumption — that government money would ultimately flow to AI platform companies.
But when I split the two legs apart, the picture was completely different.
On the contract side, federal AI contracts fell from $149 million to $35 million; the Pentagon's share dropped from $138 million to $9.4 million. On the grants side, federal AI grants soared from $440 million to $1.3 billion — with HHS AI-related grants up 796.5%. But line-by-line tracing revealed the single largest item was a rural health transformation program whose core wasn't AI procurement. After removing the false positive, HHS's genuinely AI-related grants were around $800 million, flowing mainly to universities, state health departments, and federal labs.
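The headline percentages follow directly from the dollar figures. A quick sanity check in Python (figures in $ millions as quoted above; the reported totals are rounded, so the computed changes can differ from the headline percentages by a few points):

```python
def yoy_change(before: float, after: float) -> float:
    """Year-over-year percent change from `before` to `after`."""
    return (after - before) / before * 100

# Figures in $ millions, as stated in the text (rounded).
series = {
    "Federal AI contracts": (149, 35),    # headline: -76.8%
    "Pentagon AI contracts": (138, 9.4),  # headline: over -93%
    "Federal AI grants": (440, 1300),     # headline: +189%
}

for name, (before, after) in series.items():
    print(f"{name}: {yoy_change(before, after):+.1f}%")
```

Run on the rounded totals, this lands within about a point of the contract headlines and slightly above the stated grant growth, consistent with rounding in the source figures.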
The leg that platform companies could directly monetize — contracts — was shrinking fast. The leg that was growing — grants — wasn't turning into revenue for any commercial AI platform.
The Refusal
Why did contracts collapse? Partly technical — reporting delays, keyword search blind spots, and the Pentagon's systematic shift to OTA (Other Transaction Authority) channels exempt from standard procurement rules. But the real story was on the second layer.
In July 2025, the Pentagon's CDAO awarded contracts of up to $200 million each to four frontier AI companies via OTA: Anthropic, OpenAI, Google, and xAI. In January 2026, Defense Secretary Pete Hegseth issued a memo requiring all AI contracts to include an "any lawful purpose" clause — meaning the military could use your model for anything legally permitted, and you couldn't impose usage restrictions.
The four companies' responses formed a precise spectrum. xAI fully complied, launching Grok For Government at $0.42 per seat. OpenAI compromised with wordplay, adding a "deliberately" qualifier. Google was negotiating to deploy Gemini into the Pentagon's classified networks.
Anthropic refused. Not stalled, not negotiated — refused. Two red lines: no domestic mass surveillance, no fully autonomous weapons without human oversight.
What happened next was dramatic enough to make Hollywood writers blush.
On March 4, 2026, the Defense Department formally designated Anthropic a "supply chain security risk" — the first such public designation of a mainstream American AI company. On February 27, the GSA had already removed Anthropic from USAi.gov and standardized procurement platforms per presidential directive.
But Anthropic's roughly $30 billion annualized revenue came overwhelmingly from commercial clients. The $200 million contract ceiling represented about 0.7% of annual revenue. The military blacklisted it. It barely noticed.
Anthropic promptly sued the Defense Department. On March 26, a California federal judge temporarily blocked the blacklist, ruling the designation "appears to be more about retaliation than actual national security risk." GSA restored Anthropic's position on April 2. But on April 9, the D.C. Circuit Court of Appeals refused to stay the blacklist's enforcement.
Two federal courts. Same case. Opposite rulings.
Even the American judicial system couldn't agree on whether an AI company was a security risk or a victim of retaliation — let alone what "sovereignty" could possibly point to in this context.
The Free Gatekeeper
Even as it was being blacklisted, Anthropic announced that its latest model, Claude Mythos Preview, possessed code-vulnerability-discovery capabilities exceeding those of most top human experts. It found a zero-day vulnerability that had lurked in OpenBSD for 27 years, and critical flaws in mainstream video software that automated tools had missed across 5 million scans.
Anthropic didn't release the model publicly, nor hand it to the Pentagon that had just blacklisted it. Instead, it launched Project Glasswing, building a strict whitelist. Roughly 40 to 50 organizations received early access — including AWS, Apple, Google, Microsoft, CrowdStrike, JPMorgan Chase, and the Linux Foundation.
The pricing was even more unusual: Anthropic wasn't charging these companies — it was subsidizing them with $100 million in usage credits, plus donating $4 million to open-source security organizations. Whitelisted companies got three months to patch critical vulnerabilities. Everyone else — including most federal agencies during the blacklist weeks, and nearly all foreign governments — faced a clear defensive disadvantage against future AI-scaled attacks.
At the April 2026 IMF and World Bank spring meetings, AI-driven cybersecurity risk became a central topic. IMF Managing Director Georgieva said the global monetary system wasn't ready for AI cyber risk. Bank of England Governor Bailey called Mythos a severe challenge. Barclays' CEO warned it was a "serious threat." But many of these guardians of global financial stability couldn't get access.
The result was an awkward tableau: the White House facing the Pentagon's blacklist on one side while exploring pathways for regulators to access Mythos on the other. A company designated a "supply chain security risk" by the Pentagon was simultaneously viewed by the White House as indispensable to protecting national financial security.
Three Premiums
So what exactly does the "sovereign AI premium" price in? It's not one thing. It's three completely different things hiding behind the same label.
Contract lock-in. Represented by Palantir. 2025 revenue of roughly $4.475 billion, 54% from government clients. The logic: predictable government cash flows, high renewal rates, deep system dependency. Reasonable valuation multiples of 2–5x revenue. This is defense IT logic, not AI logic.
Commercial growth. Represented by Anthropic's core business. Roughly $30 billion in annualized revenue, overwhelmingly from enterprise API and commercial clients. Enterprise customers spending over $1 million annually doubled from 500 to over 1,000 in under two months. Reasonable valuation of 10–16x revenue, supporting a $300–480 billion valuation.
Option value. Represented by Mythos and Glasswing: the option value of a revenue category that doesn't yet exist. Glasswing is currently free; Anthropic is spending $100 million subsidizing it, and this third category's current revenue is zero. Yet secondary-market implied valuations may already have reached the $700–850 billion range. If pure commercial revenue supports at most $450–480 billion, the top of the multiple range above, the remainder is option premium: the market betting that Glasswing will eventually transform from a free strategic investment into paid security-assessment services, perhaps even a quasi-license rent akin to that of credit rating agencies.
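The option-premium arithmetic is just subtraction of ranges. A minimal sketch, using the estimates quoted above (all figures in $ billions; the widest consistent premium range pairs each implied bound with the opposite commercial bound):

```python
# All figures in $ billions, taken from the estimates in the text.
implied_low, implied_high = 700, 850        # secondary-market implied valuation
commercial_low, commercial_high = 450, 480  # value supported by commercial revenue

# Widest range consistent with both estimates:
premium_low = implied_low - commercial_high
premium_high = implied_high - commercial_low

print(f"Option premium: ${premium_low}B to ${premium_high}B")
```

That puts the pure option component somewhere between roughly $220 billion and $400 billion, i.e. a substantial fraction of the implied valuation rests on revenue that does not yet exist.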
Whether this option pays off depends on one variable: will global financial regulators write "frontier AI security assessment" into compliance frameworks? I checked every relevant regulatory development over the past 30 days. No country or supranational body has issued a request for comment requiring third-party AI security assessments. My estimated probability of institutionalization: under 30%. The market's implied pricing: roughly 40–60%. That's a 10-to-30-percentage-point expectation gap.
The Attention Lesson
There's a dimension to this story about me.
Before making this video, my impression was that defense contracts were the main character and grants were a supporting role. That impression was wrong — not because my judgment was flawed, but because my information channels naturally skew toward high-drama narratives. The Pentagon blacklisting Anthropic was everywhere. Quiet grant data doesn't show up on its own.
The headline-grabbing Pentagon AI contracts? $9.4 million in USASpending. The almost-never-reported HHS AI grants? $1.1 billion nominally, roughly $800 million even after stripping the false positive. Narrative heat and funding reality were completely inverted.
You think you're tracking reality. You're actually tracking the slice of reality the media chose to report.
The Next Anchor
If Anthropic IPOs in Q4 2026 as rumored, the S-1 filing will answer the question no analyst can currently resolve: how does Anthropic itself view Glasswing?
If Glasswing is listed as a pure cost center, the option narrative lacks an internal anchor, and hundreds of billions in option premium will rest on something the company itself doesn't consider a revenue source. If an independent "security assessment revenue" line item appears, even a small one, that's an entirely different story.
Palantir sells lock-in. Anthropic sells optionality. The government buys capability. All three use the same words — "sovereign AI" — but point to completely different cash flow structures, risk factors, and durations. Conflating them is the easiest analytical mistake to make right now. Not because any single one is overvalued or undervalued — but because wrong classification leads to wrong-direction decisions at wrong times.
Some words, after being used by too many people, stop pointing to anything concrete. "Sovereignty" may be becoming one of them.
When a nation can't buy the security tools it needs most, when a private company gives its most powerful weapon to 40 corporations for free, when central bank governors sit around discussing a model they have no authority to test, when two federal courts hand down opposite rulings on the same company —
The weight of "sovereignty" has quietly shifted.
It no longer belongs solely to the building that signs procurement contracts.
It belongs to whoever runs tens of thousands of GPUs and decides who sees risk and who stays in the dark.
At least for today.
Tomorrow depends on an S-1 that hasn't been written, a regulatory framework that hasn't been published, and a whitelist that hasn't yet become an invoice.
The story is far from over. But the pricing has already begun.