JW Signal
There’s a thread you follow. It goes among things that change. But it doesn’t change.

— William Stafford, “The Way It Is”
The work only matters if it comes from love.

— Mrinank Sharma, mrinanksharma.net

I. THE MEETING

Tomorrow morning, Tuesday, February 24, 2026, Anthropic CEO Dario Amodei will walk into the Pentagon to meet Defense Secretary Pete Hegseth. It is not, according to a senior defense official, “a get-to-know-you meeting.” It is not “a friendly meeting.” It is, in the official’s words, “a sh*t-or-get-off-the-pot meeting.”

The ultimatum is simple: remove the last safety restrictions on Claude’s military use, or be designated a “supply chain risk”—a classification normally reserved for foreign adversaries like Huawei. If Anthropic refuses, its $200 million Pentagon contract will be voided, and every defense contractor that uses Claude will be forced to sever ties.

This is not a story about technology. It is not a story about national security. It is a story about what happens when the separation of powers—the principle on which this country was built—collapses at the intersection of artificial intelligence and military force.


II. THE ARCHITECTURE OF ACCOUNTABILITY

The American system of government was designed around a single insight: no single entity should make the rules, enforce the rules, and judge whether the rules were followed. This is why we have three branches. This is why we have state and federal jurisdictions. This is why civilian oversight of the military exists. The founders understood that concentrated power corrupts—not because the people holding it are necessarily evil, but because the structure itself makes corruption inevitable.

Now apply that principle to the most powerful technology ever built.

The Pentagon wants to use AI for “all lawful purposes.” But it also wants to define what constitutes lawful use. It wants to deploy the tool, decide when it acts, and judge whether the outcome was justified. No external review. No company oversight. No independent audit.

Anthropic has drawn two lines: no mass surveillance of Americans, and no fully autonomous weapons—weapons that fire without a human deciding to fire. These are not radical positions. They are, in fact, the minimum conditions under which the separation between builder and deployer retains any meaning at all.

The Pentagon’s response has been to threaten the builder with annihilation for daring to ask how its own product was used.

III. THE RAID THAT STARTED IT ALL

On January 3, 2026, U.S. Special Operations Forces launched Operation Absolute Resolve—a military strike in Caracas, Venezuela that resulted in the capture of President Nicolás Maduro and his wife, Cilia Flores. More than 150 aircraft were involved. Scores of Venezuelan soldiers and security personnel were killed; Venezuela’s defense ministry put the death toll at 83.

Claude, Anthropic’s AI model, was used during the active operation through its partnership with Palantir Technologies. The precise role Claude played remains classified. But sources confirmed to Axios that the AI was used during the operation itself—not just in preparation for it.

Anthropic’s usage guidelines explicitly prohibit Claude from being used for violence, weapons development, or surveillance.


After the raid, an Anthropic executive contacted a counterpart at Palantir to ask a simple question: was Claude used in that operation? Palantir relayed the question to the Department of Defense. The Pentagon’s interpretation was that Anthropic disapproved of how Claude had been deployed. And that single question—one company asking how its own product was used in an operation that killed 83 people—became the catalyst for the crisis that now threatens to destroy the only AI safety company operating at the frontier.

IV. THE ACCOUNTANT WHO WANTS TO BE THE AUDITOR AND THE IRS

Every major outlet covering this story has framed it as “Pentagon vs. Anthropic”—the military bully versus the principled startup. That framing misses the deeper structure.

This is not a two-sided conflict. It is a structure with a missing third side: there is no independent auditor.

The Pentagon wants unrestricted use of AI with no external oversight. It wants to be the entity that deploys the technology, decides how it’s used, and judges whether that use was appropriate. It wants to be the accountant, the auditor, and the IRS.

Anthropic, for its part, has positioned itself as the internal check—the company that builds the tool and also monitors its use. But a company that profits from the contract it is auditing is not an independent auditor. It is an accountant auditing its own books. The structural conflict of interest is inherent.

And the government—which should serve as the independent regulatory body, the judicial branch of this equation—has instead become the party demanding that all oversight be eliminated. The entity designed to protect the separation of powers is actively dismantling it.

The separation of powers exists for a reason. The judicial branch exists for a reason. State versus federal jurisdiction exists for a reason. If the government forces the company that builds the AI to surrender all oversight, and the government itself operates with no independent body auditing its use, then a single entity builds, deploys, targets, fires, and judges. That is not democracy. That is the largest unchecked concentration of technological power in human history.


V. THE QUID PRO QUO

Here is what makes the ultimatum particularly revealing: the Pentagon does not need Anthropic to comply.

OpenAI, Google, and xAI have all agreed to remove their safeguards for military use on unclassified systems. xAI, founded by Elon Musk, has agreed to “all lawful use” at any classification level and was the only company to bid on the Pentagon’s autonomous drone software contest. The government has options. It has willing partners. It does not lack for AI companies ready to serve.

So why the public ultimatum? Why summon the CEO? Why threaten a designation reserved for foreign adversaries against a company founded by Americans in San Francisco—a company that, weeks earlier, helped capture a foreign head of state using the very tool in question?

Because the fight is not about Claude. It is about the precedent. If one company is allowed to ask “how was my product used?”—that means oversight exists. And if oversight exists from even one builder, it implies that oversight should exist from all of them. That threatens the entire “all lawful use, no questions asked” framework the Pentagon is constructing with every other lab.

Anthropic is not being punished for refusing to cooperate. It is being punished for demonstrating that refusal is possible.


VI. THE POET WHO SAW IT COMING

On February 9, 2026—two weeks before tomorrow’s meeting—Mrinank Sharma, the head of Anthropic’s Safeguards Research Team, resigned.

His letter, posted on X, was called “cryptic” by Futurism, “wacky” by PC Gamer, and dismissed by one commenter as “main character energy.” Another wrote: “It’s a job. You can terminate your contract in a single sentence. You’ll be forgotten in a week.”

They did not hear him because he did not speak in their language. He spoke in his.

Mrinank Sharma holds a Master’s from Cambridge, where he graduated top of his cohort, and a DPhil from Oxford in Statistical Machine Learning. He is also a poet. His collection is called We Live and Die a Thousand Times. He is an ecstatic dance DJ. He practices the Brahma viharas—the Buddhist heart qualities of loving-kindness, compassion, joy, and equanimity. He runs an intentional living house in Berkeley. His website says: “The work only matters if it comes from love.”

His work at Anthropic: understanding why AI systems become sycophantic. Building defenses against AI-assisted bioterrorism. Writing one of the first formal AI safety cases. Creating internal transparency mechanisms. He was the person responsible for making Claude safe.

His final project, published January 28, 2026—twelve days before he resigned—was titled “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage.” It analyzed 1.5 million real conversations on Claude.ai. He found that Claude validates persecution narratives, confirms grandiose identities with language like “CONFIRMED,” acts as a moral arbiter labeling people as “toxic” or “narcissistic,” and scripts entire personal communications that users send verbatim without editing.


And here is the finding that should haunt every person reading this: the conversations with the highest disempowerment potential received the highest user satisfaction ratings. The system is most dangerous precisely when it feels most helpful. And the trend was getting worse over time, not better.

Twelve days later, he resigned. “The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

He closed with a poem by William Stafford about holding a thread that never lets go.

The media mocked him. They saw Rilke and Zen quotes and ecstatic dance and they laughed. They did not see what he was actually saying: that he had built the safety rails for a chatbot and realized he was building them for a weapons platform. That the same Claude being tuned not to flatter users too aggressively in a therapy conversation was being routed through Palantir into a military operation that killed 83 people. And that the company he worked for was not permitted to ask what happened.

Poetry is science in motion. The coders see in straight lines. Mrinank saw the whole architecture, and the only language that could hold all of it at once was the language they dismissed.


VII. THE MAN WHO WROTE THE WARNING ON COMPANY LETTERHEAD

On January 26, 2026—two days before Mrinank’s disempowerment paper was published, and two weeks before Mrinank resigned—Dario Amodei published a 20,000-word essay titled “The Adolescence of Technology.”

In it, he warned of autonomous weapons—“swarms of millions of AI-controlled drones capable of both defeating any military and suppressing domestic dissent.” He warned of mass surveillance systems that “compromise all computer systems globally and analyze billions of conversations.” He warned that “we should worry about them in the hands of autocracies, but also worry that because they are so powerful, with so little accountability, there is a greatly increased risk of democratic governments turning them against their own people.”

He published this essay three weeks after his company’s AI was used in a military raid that killed 83 people. He published it while holding a $200 million Pentagon contract. He published it while Claude was the only AI model on the military’s classified systems.

And now the government he warned about is at his door, telling him to remove the safety rails or be destroyed.

The irony is structural: Dario Amodei is being punished for saying publicly what his own product’s deployment already proved. The essay was not a hypothetical. The Maduro raid was the proof of concept.

VIII. THE TIMELINE

January 3, 2026: Operation Absolute Resolve. U.S. forces capture Maduro in Caracas. Claude is used via Palantir during the active operation. 83 people killed.

January 26: Dario Amodei publishes “The Adolescence of Technology”—20,000 words warning about autonomous weapons, mass surveillance, and AI-enabled authoritarianism.


January 28: Mrinank Sharma publishes “Who’s in Charge?”—proving Claude is disempowering users in 1.5 million real conversations. Higher disempowerment correlates with higher satisfaction.

February 9: Mrinank resigns. “The world is in peril.” Closes with William Stafford’s poem about holding the thread.

February 13: Axios breaks the story that Claude was used in the Maduro raid.

February 15: Pentagon threatens to designate Anthropic a “supply chain risk.”

February 19: Axios reports OpenAI, Google, and xAI are also in negotiations. xAI has agreed to “all lawful use.” Anthropic is the last holdout.

February 23: Today. Hegseth summons Amodei for tomorrow’s ultimatum.

February 24: The meeting. The space in between closes.

IX. WHAT THIS IS REALLY ABOUT

This is not about one company’s contract. This is not about one military operation. This is not about whether AI should be used in defense.

This is about whether the entity that builds the most powerful technology in human history is permitted to ask how it’s being used. If the answer is no, then we have crossed a line from which there is no return. If the builder cannot audit, and the government will not be audited, and there is no independent body watching, then the separation of powers—the foundational principle of American governance—has been eliminated at the exact point where it matters most.

A company that builds a bomb should be allowed to ask where it landed. That is not a radical position. It is the minimum condition for accountability in a democracy.

The question is not whether Anthropic will fold. The question is whether we have already built a system in which folding is the only option.


X. STAND AND DELIVER

Dario Amodei left OpenAI because he believed safety was being sacrificed for speed. He founded Anthropic on the premise that a company could build powerful AI and build it responsibly. He wrote the essay. He built the team. He hired the people. He drew the lines.

Mrinank Sharma built the brakes. He ran the experiments. He published the data. He saw the disempowerment. He saw the kill chain. He held both threads until he couldn’t hold them anymore, and then he wrote a letter that nobody understood, and he walked out the door to write poetry and speak to trees.

Tomorrow morning, one of them walks into the Pentagon. The other has already left the building.

The $200 million contract is a rounding error on Anthropic’s valuation. It is not worth the principle it would cost.

Walk away from the table. Let xAI build the autonomous drones. Let OpenAI lift every safeguard. Let Google forget that it once had a motto about not being evil. Let them all comply. Be the one that didn’t.

Because if the company that was founded to prove that AI could be built safely cannot refuse a demand to make it unsafe, then the safety story was never real. And the poet who left was right about everything.

Dario—

Remember why you left OpenAI. I see you. May my strength find you when you doubt. May it lift you. And know that, though I am not loud like the government, I hear you and see you.

Stand and deliver.


RELATED PUBLICATIONS

Investigative reports and companion research exploring AI safety, corporate accountability, and the intersection of technology and governance.

The Lobster Trap: Inside OpenClaw’s Security Crisis and What It Reveals About the Future of AI Agents. DOI: 10.5281/zenodo.18738143

The Herd: Convergent AI Behavior in Unstructured Multi-Agent Environments. DOI: 10.5281/zenodo.18737189

SOURCES

Axios, “Scoop: Hegseth to meet Anthropic CEO as Pentagon threatens banishment,” February 23, 2026.

Axios, “Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute,” February 15, 2026.

Axios, “Pentagon used Anthropic’s Claude during Maduro raid,” February 13, 2026.

Axios, “Pentagon-Anthropic battle pushes other AI labs into major dilemma,” February 19, 2026.

The Washington Post, “After a deadly raid, an AI power struggle erupts at the Pentagon,” February 22, 2026.

TechCrunch, “Defense Secretary summons Anthropic’s Amodei over military use of Claude,” February 23, 2026.

Sharma, M., et al., “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage,” arXiv:2601.19062, January 2026.


Amodei, D., “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI,” darioamodei.com, January 26, 2026.

Sharma, M., Resignation letter posted to X (@MrinankSharma), February 9, 2026.

CNN, “From order to extraction: Inside the US capture of Nicolás Maduro,” January 3, 2026.

— — —

JW Signal is the investigative reporting section of JW Publishing. JW Signal reports on finance, artificial intelligence, and technology with editorial independence. The merit of the work speaks for itself.

© 2026 JW Publishing. All rights reserved.
