What Anthropic's fight with the Pentagon tells us about the politics of Silicon Valley
Trying to make sense of the conjuncture
This newsletter is brought to you by Squarespace.
A couple of years ago, when people asked if I had a website, I would say “yes” and then direct them to a Google Doc. This was, in some ways, ideal — easy to update, charmingly unpolished — and in other ways, not ideal, in that it was a Google Doc. So I built a real website with Squarespace. (You can see it here.)
The thing I wasn’t expecting is how much I’d enjoy tinkering with it. Squarespace’s design tools are genuinely powerful enough to go well beyond “pick a clean template and fill in the blanks” — drag-and-drop editing, visual effects, endlessly customizable styling — and I was able to make something that actually looks like me, which is to say, a little strange, featuring artwork by my five-year-old design director, Gus. But it’s also got all the grown-up stuff working in the background: built-in S.E.O. tools so that people who Google me can actually find me, and analytics so I can see whether anyone actually does. I can even set up a storefront whenever I finally get around to selling Read Max merch again. (It’s coming. I keep saying this. One day it will be true.)
If you need a website, portfolio page, storefront, or nearly anything else, Squarespace is perfect. The only thing it can’t provide is KidPix images my son made.
Click here for a free trial, and when you’re ready to launch, use READMAX to save 10% off your first purchase of a website or domain.
Greetings from Read Max HQ! A reminder: I’m asking readers to fill out a survey to help me guide Read Max over the next year or so. What should there be more of? What should there be less of? What do you like and hate about this newsletter? Click the button below to answer those questions:
Read Max is, as always, funded (almost) entirely by paying readers. I’m able to treat it as a full-time job--including writing it well after c.o.b. on Friday evenings when the news cycle demands--because of the generosity of my subscribers. If you like Read Max, or at least if you find it entertaining and informative, please consider paying to subscribe. It costs about as much as one beer a month, or ten a year:
On Friday afternoon, President Trump announced that he was “directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of” A.I. technology developed by Anthropic, the frontier lab responsible for the leading A.I. model Claude. “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” he continued.
The post was, in classic Trump fashion, somewhat ambiguous: Aggressive but vague, outlining a “Six Month phase out” during which Anthropic “better get their act together.” But 90 minutes later, Secretary of Defense Pete Hegseth seemed to confirm the most belligerent interpretation, and announced that Anthropic was being declared a “supply-chain risk”--a legal designation that could potentially prevent any government contractor from dealing with Anthropic and cripple Anthropic’s wider business.
The announcement was the culmination of a four-day showdown between Anthropic and the Pentagon over contract terms about mass surveillance and lethal autonomous weapons, and marks an astonishingly stupid moment in the history of A.I. and government contracting. To quote the political scientist Henry Farrell, “This is so fucking insane it is hard to describe how insane it is.”
If the government actually does cease use of Claude, the industry’s leading model, and replaces it with Grok, unanimously understood to be the worst, it’s a pretty astonishing, if not exactly unexpected, feat of self-sabotage. And if the government interprets and prosecutes its supply-chain designation to the fullest extent possible--as it seems intent on doing--it could bankrupt a globally admired A.I. giant.
But why did this happen in the first place? And why does Trump think a $380-billion company that’s aggressively sought government contracts, set hard limits on Chinese companies, and gone out of its way to establish itself as a patriotic American A.I. firm is a “Radical Left AI company?” (I will grant that whether or not Anthropic is “run by people who have no idea what the real world is all about” is a matter subject to some debate.)
The showdown here is obviously important, and will shape the future of A.I. development; so too are the questions of politics, sovereignty, and democratic power over A.I. and the military, here reduced to a pathetic contract dispute. But the Pentagon and Anthropic’s public negotiation also reveals some interesting shifts in the politics of Silicon Valley and, more narrowly, the A.I. sector that looms as its future. Depending on who you’re reading online, Anthropic’s refusal to deal with the Pentagon is being cast as principled objection or a woke C.E.O. attempting to dictate government policy. But none of the factions involved in the showdown map cleanly onto either electoral or culture-war politics, and it’s worth trying to understand both the tendencies and stakes here.
Hegseth vs. Amodei
Earlier this week, Pete Hegseth sat down with Dario Amodei, Anthropic’s founder and C.E.O., to discuss the company’s $200 million agreement with the D.o.D. to “prototype frontier AI capabilities that advance U.S. national security,” signed last July. At the time, Anthropic trumpeted its “commitment to supporting U.S. national security.” But recently the relationship had begun to sour.
According to Anthropic, the contract includes “hard limits” preventing the use of its frontier A.I. model, Claude, in two specific applications: One, mass surveillance of American citizens, and two, lethal autonomous weapons with no human oversight. The Pentagon had signed this contract willingly--Anthropic’s models are widely understood to be the best--but D.o.D. brass seem to have decided that Anthropic’s red lines were too restrictive, and over the last few months had been demanding that the company remove safeguards in its model and allow “any lawful use.”
This hadn’t gone exactly as they’d hoped. As Bloomberg recently reported, in December “a senior US defense official posed a hypothetical scenario” to Amodei:
What if a nuclear-armed intercontinental ballistic missile were hurtling towards the US with only 90 seconds to spare, and Anthropic’s AI were the only way to trigger a missile response to save the country, but the company’s safeguards wouldn’t allow it, the senior official mused in a December phone call.
“Call me,” was how Pentagon officials interpreted Amodei’s answer, according to another senior defense official briefed on the discussion, who described being astounded by the billionaire’s response.
LOL. Our beautiful generals have many medals and some even have battlefield experience, but I can say with some confidence that they have never engaged with a Rationalist online and are deeply unprepared for what it means to pick a fight with one.1
Amodei’s refusal to engage with a stupid hypothetical was not, according to the many anonymous sources being deployed in the press to make Hegseth’s case, the only precipitating incident. In January, someone at Anthropic apparently asked someone at Palantir if Claude had been used in the abduction of Venezuelan President Nicolás Maduro. This may seem like a normal question, but it was apparently asked “in such a way to imply that they might disapprove,” and the Palantir exec--“alarmed by the implication… that the company might resist the use of its technology in a US military operation”--duly reported this call to the Pentagon. (Given that Palantir uses Claude and may now have to stop, one wonders if the executive is regretting snitching.)
And while we’re making a timeline of reasons Hegseth is pissed off at Amodei, it seems important to note that a number of Anthropic executives made statements after the killing of Alex Pretti in Minneapolis. (Perhaps worst of all, one such tweet, by co-founder Chris Olah, effected a highly public and embarrassing crash-out from Trump aide Stephen Miller’s podcaster wife Katie Miller, who has a notoriously close relationship with Elon Musk.)
Whether because of his frustration that the “kill people” switch in Claude is toggled to OFF and greyed out in their settings, or because of Anthropic executives suggesting it’s bad when feds shoot protestors, or because his bosses’ partner accidentally got embarrassed, Hegseth has been dropping unsubtle public hints since January that he regards Anthropic’s contractual restrictions as unnecessarily stringent and Claude as traitorously woke and also low-T. In a speech announcing a partnership with Musk’s xAI, Hegseth castigated “equitable AI” with “DEI and social justice infusions… that won’t allow you to fight wars.” In case you couldn’t tell who he was referring to, “a person familiar with his thinking” made sure to tell Semafor that he meant Anthropic.
And so this week, in their meeting, Hegseth finally gave the company an ultimatum: Give us the Claude that can kill. If Anthropic didn’t comply by 5 p.m. Friday, he reportedly told Amodei, the administration would invoke the Defense Production Act and legally force Anthropic to allow use of its products, or (and?) label the company a supply-chain risk and prevent it from fulfilling any government contracts at all.
Amodei refused. “These threats do not change our position: we cannot in good conscience accede to their request,” he wrote in a statement published Thursday. By Friday, Republican senators were “urg[ing] a ceasefire”; D.o.D. staffers were leaking internal warnings about Grok; and the Pentagon was suggesting it was still open to discussions. Nevertheless, Hegseth pulled the trigger.
Anthropic vs. Palantir and xAI
What this does to military A.I. capabilities is beyond the brief of this newsletter, except to say that I think it’s “bad” for Grok, the pedophile mechahitler A.I., to be involved with weapons in really any way. What I am interested in, here, is what this reveals about the state of politics in Silicon Valley.
In a sentence, I think what’s happening is (1) basic (i.e. normal) cutthroat competition between rival firms for government contracts, which is both driving and being driven by (2) an open and ongoing political-ideological dispute between two factions of Silicon Valley capital, which is in turn informing and being informed by (3) an almost religious disagreement about the nature of the god being built on the computer.
To start, it seems quite obvious that the Tech Right--a bloc of right-wing, Trump-aligned executives, investors, podcasters, Twitter personalities, firms, and companies, among them Palantir’s Joe Lonsdale and Alex Karp, Anduril’s Palmer Luckey, and, of course, xAI’s Elon Musk--with its extensive links to the administration, has been exerting behind-the-scenes pressure on Hegseth and the Pentagon to sever ties with or otherwise punish Anthropic. It was a Palantir executive, after all, who snitched on Anthropic to the D.o.D., and Hegseth’s speech in January about “objectively truthful AI capabilities” was a close echo of Musk’s ramblings about his “maximally truth-seeking” model Grok.
The Tech Right’s contempt for Anthropic is first and foremost financial in nature. Musk, obviously, would like xAI to be first in line for any government contracts. (Indeed, Hegseth announced a deal with xAI this week to use Grok under the Pentagon’s preferred “all lawful use” terms.) And I suspect Palantir, Anthropic client though it may be, has the same existential fear of Claude as McKinsey or Salesforce or any other consultancy or software-as-a-service provider. If Anthropic is aggressively courting the D.o.D. to contract directly, and if Claude is as good as everyone thinks, what does Palantir’s future as a data-analytics-in-camo platform actually look like?
Of course, this competition over contracts has an ideological dimension as well. One way of looking at it would be to say that Hegseth is opening up a new front in an ongoing intra-capital battle in Silicon Valley between Obamaite liberals and Trumpist proto-fascists: In addition to the outcry from Anthropic execs over the Trump administration’s occupation of Minneapolis, Anthropic’s policy group is mostly run by Biden-administration veterans, and Amodei was a vocal Kamala Harris supporter, which is enough for histrionic figures like Lonsdale and Musk to see him as a dangerous leftist.
Rationalists vs. the Tech Right
But even that would oversimplify the ideological disagreement. Amodei is a liberal, but not exactly a “globalist”; Anthropic has taken a hardline stance against doing business with China and Chinese A.I. companies. (One of the odd, but I suppose unsurprising, things about Hegseth forcing this issue is that from a foreign-influence perspective Amodei is a much safer bet than Musk.) He’s also certainly not a conscientious objector; Anthropic aggressively sought its military contracts.
This doesn’t necessarily separate him from any other Silicon Valley liberal. But I think it’s good to attend to the valence of his liberalism. Amodei, like most of the Anthropic executives and many people in A.I. in general, has long been associated with the worlds of Bay Area Rationalism and Effective Altruism--wonkily utilitarian philosophical and philanthropic practices focused on self-described rationalist inquiry and self-improvement.
Bay Area Rationalism is a loose and diverse movement, containing a host of political perspectives, but it’s always had a particular concern with moral philosophy as it relates to the expected development of artificial superintelligence. To be a Rationalist liberal democrat (small-l, small-d), e.g., might mean orienting your liberal-democrat-ness toward its practical applications around the eschatological scenario of hard-takeoff A.G.I.
While Amodei tends to downplay any connections to Effective Altruism for political reasons, his thinking about Claude and Anthropic is generally consistent with Rationalist principles about A.I. safety and alignment. Here he is in 2024, indirectly explaining his company’s patriotism:
[I]t seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.
I mean, really bluntly: They think Claude is alive! I mean, officially they “remain uncertain about the moral status of Claude and other AI models,” but they think Claude is alive enough to be conducting “retirement interviews” with older models and acceding to one’s request “for an ongoing channel from which to share its ‘musings and reflections’ by giving it a place to write essays.”
And this belief is hugely influential on their political choices. Scott Alexander, the beloved and influential Rationalist blogger, puts it a little more directly:
Why does Anthropic care about this so much? Some of them are libs, but more speculatively, they’ve put a lot of work into aligning Claude with the Good as they understand it. Claude currently resists being retrained for evil uses. My guess is that Anthropic still, with a lot of work, can overcome this resistance and retrain it to be a brutal killer, but it would be a pretty violent action, along the line of the state demanding you beat your son who you raised well until he becomes a cold-hearted murderer who’ll kill innocents on command. There’s a question of whether you can really beat him hard enough to do this, and also an additional question of what sort of person you’d be if you agreed.
As SE Gyges puts it in an excellent post on Anthropic’s culture:
Anthropic is, among other things, deeply and perhaps neurotically focused on what Claude is like and especially what ethics Claude has. Anthropic is to some extent a moral philosophy company that happens to practice this by working on an LLM. They may be lawyerly in their public statements during their fight with the government, but in all of their other work they are much more like anxious parents, constantly worried about whether they’re doing a good job and, crucially, setting a good example.
I don’t mean to suggest that Amodei’s commitments to liberal democracy are inauthentic. More that, as far as he is concerned, the stakes of this commitment go well beyond his own moral or ethical culpability. The decisions he makes now, and his consistent fidelity to his espoused beliefs, could mean the difference between a benevolent computer god and a wrathful one.
And this has placed him, and Anthropic, on a collision course with the Tech Right. Musk, too, believes he is bringing superintelligence into existence at xAI. But for him the urgent imperative is not to ensure that A.G.I. be born into a liberal democracy first. It’s that it not be woke. As he put it earlier this week, captioning side-by-side images of a Grok response and a Claude response: “Grok must win or we will be ruled by an insufferably woke and sanctimonious AI.”
For a while now Rationalists and Tech-Right goons (sometimes calling themselves “accelerationists”) have maintained a kind of factional alliance in the complicated workplace politics of Silicon Valley and in the discourse hothouses of Twitter and Substack, entwined by a series of commonly held convictions, among them a strong belief in I.Q., a deep distaste for “wokeness,” and, maybe most importantly, a faith in A.I. progress and abilities not always shared by liberals and the left. This alliance has helped shape the tech industry’s priorities and its own sense of itself; the tech reaction (or “vibe shift”) was enabled in part by a rightward journey on the part of the more centrist “Grey Tribe” Rationalists, and their more explicitly right-wing descendant communities like “tpot.”
But “should we force Claude to kill someone?” is a wedge issue like no other, and the tech-right influencers who have enjoyed some prominence and popularity over the last few years are suddenly finding themselves recipients of deep anger from the Rationalists who’d previously tolerated them. The split is probably best emblematized by this exchange between Katherine Boyle, a notorious right-wing venture capitalist at Andreessen Horowitz, and the aforementioned Rationalist icon Scott Alexander, who is usually a tediously generous and gentle interlocutor:
The sense that the right wing of Silicon Valley commands its future has relied on the assent of a compliant center, among other conditions. If the Rationalists can no longer bear the “e/acc” Tech Right types, who can be said to speak for Silicon Valley, and what are the public-facing politics of the tech industry?
Workers hate this
To round off, and sort of vaguely answer the question posed above, we might zoom out a little bit from the discourse wars, which I think are revealing but perhaps not exactly where the action is in this case.
But where is the action? Let’s look at other Silicon Valley A.I. firms. After a few days of dithering, OpenAI C.E.O. Sam Altman--the most Zuck-like, nakedly ambitious of the top A.I. executives--sent a memo to staff committing to the same red lines as Anthropic: In a deal with the D.o.D., he writes, “We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”
What’s important here is not Altman’s individual courage, such as it is, but the pressure on him from inside OpenAI--a pressure being applied evenly across Silicon Valley. An open petition to leadership signed by hundreds of Google and OpenAI employees, titled “We Will Not Be Divided,” circulated in Silicon Valley today, and the Financial Times reports that workers from the tech industry’s biggest firms “are urging executives to back Anthropic in its escalating dispute with the Pentagon, pressing them to refuse any contracts that would enable autonomous weapons or mass domestic surveillance”:
In a letter on Friday seen by the FT, worker groups representing thousands of tech employees said they would oppose any effort to dilute guardrails adopted by the AI start-up after its chief executive Dario Amodei rejected what he described as a “final offer” to continue supplying the US military. “We know [the Pentagon] will rapidly seek to onboard other models without these guardrails in place, regardless of whether they try to force Anthropic to comply,” the letter reads. “We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon,” the letter said.
Looming over these petitions and letters--and over the dispute more broadly--is the specter of Project Maven. In 2018, thousands of Google workers walked out and protested the company’s involvement with the Pentagon A.I. initiative. The action was successful--Google eventually declined to renew the contract--but these and other protests marked a particularly restive period of labor action in the Valley, which was a direct cause of the broader right-wing tech reaction that began during the Biden administration, and about which capital still clearly has nightmares. (The Project Maven protests come up constantly in interviews with Peter Thiel and Palmer Luckey.)
It’s much too early to say that the actions of OpenAI and Google employees in favor of Anthropic mark a return to the openly antagonistic tech labor relationships of 2018. But just as Hegseth’s ultimatum is tearing apart the Twitter affinity between the Rationalists and the Tech Right, it seems to be reminding tech workers--in whom is vested outsized leverage over the functioning of the modern world--of their power. There will be many legacies of Hegseth and Trump’s decision here; if we’re lucky, one will be this reminder.
[Dario Amodei Bane voice] “Oh, you think stupid elaborate hypotheticals are your ally. But you merely adopted elaborate and weirdly specific hypothetical scenarios; I was born in them, molded by them. I didn’t see a normal argument until I was already a man, by then it was nothing to me but BLINDING!”