A new A.I. influencer is producing some of the most criminal charts I've ever seen
Who is “Leopold Aschenbrenner"? PLUS: Bill Ackman, Vivek Ramaswamy, and the anti-woke finance grift.
Greetings from Read Max HQ! In today’s newsletter:
Who is “Leopold Aschenbrenner,” and why is he suddenly a top A.I. influencer?; and
A short history of the anti-woke finance grift, starring Bill Ackman and Vivek Ramaswamy.
If you prefer to listen to Read Max rather than read it, a recording of me reading this newsletter will be available tomorrow morning here.
A reminder: Read Max is a reader-funded newsletter. I am able to write independent criticism of the tech industry, as well as weird little bullshit blog posts about bumper stickers and cigarette smoking, because enough readers subscribe that I can treat this entire endeavor as a full-time job. Without the support of readers, I wouldn’t be able to do this work.
If you get $5 of enjoyment out of Read Max every month--if you’d buy me a Big Mac (depending on the state) because you feel like you’ve learned something or changed your mind due to this newsletter--please consider becoming a paying subscriber.
Who is Leopold Aschenbrenner? An annoying new A.I. hype guy to know about
Among the many useful features of this newsletter (cheap at $5/month or $50/year) is that it serves as an “early warning” system, alerting readers of New Guys who are highly likely to become annoying fixtures of discourse in tech, media, culture, and other adjacent domains. Awareness of these Potential Characters can be put to many different purposes, including demonstrating knowledge dominance in the group chat, or simply improving your day-to-day quality of life by immediately blocking and muting the names of the New Guys on any and all social media platforms.
This week’s new guy is a 20-something former OpenAI employee with the improbably Robert Musil-esque name of Leopold Aschenbrenner, who hard-launched his presence in the A.I. discourse space this week with
a 165-page PDF report/manifesto called “Situational Awareness: The Decade Ahead” (“AGI by 2027 is strikingly plausible… AI progress won’t stop at human-level… The nation’s leading AI labs [are] basically handing the key secrets for AGI to the CCP on a silver platter… In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?”);
an appearance on the popular tech and A.I. podcast Dwarkesh Podcast,1 where he says things like “The CCP is going to have an all-out effort to infiltrate American AI labs… 2023 was the moment for me where AGI went from being this theoretical, abstract thing [to] ‘I see it, I feel it, and I see the path. I see where it's going.’”; and
an announcement that he “has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collison and Stripe president John Collison.”
The main thrust of this well-executed roll-out, besides “Leopold Aschenbrenner is a genius polymath A.I. expert to whom you should pay attention”: A.G.I. is coming soon, and the real existential risk is that the dastardly Chinese Communists might build it first.
And why should you listen to this guy? Part of Aschenbrenner’s pitch is his resume (maybe a better word would be “pedigree”). In 2019, at age 17, while a student at Columbia, he was awarded a grant from the libertarian economist/blogger Tyler Cowen’s “Emergent Ventures” program “to spend the next summer in the Bay Area and for general career development,” based on a paper about existential risk that had impressed Cowen.2 At age 19, he graduated as Columbia’s valedictorian; from there he went to work at the FTX-associated Future Fund, and then in A.I. safety at OpenAI, where he was apparently a close ally of eccentric former chief scientist Ilya Sutskever, a strong believer in the possibility of literal A.I. apocalypse.
Earlier this year, though, he was fired from OpenAI, allegedly for leaking. One of the revealing aspects of the Aschenbrenner Launch is his attempt to, let’s say, seize the narrative about his departure from the company:
Last year, I wrote an internal memo about OpenAI's security, which I thought was egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors. I shared this memo… with a couple of board members. Days later, it was made very clear to me that leadership was very unhappy I had shared this memo with the board. Apparently, the board hassled leadership about security.
I got an official HR warning for sharing the memo with the board. The HR person told me it was racist to worry about CCP espionage and that it was unconstructive…. When I was fired, it was made very explicit that the security memo was a major reason for my being fired. They said, "the reason this is a firing and not a warning is because of the security memo."
I have no clue what happened with Aschenbrenner at OpenAI. What I do know is that the Silicon Valley investor class has become quite contemptuous of Effective Altruism (the school of thought that drove the Future Fund), and highly skeptical and suspicious of the associated focus on existential risk or “x-risk” now that it seems to be a retardant on their ambitions. On the other hand, that same class is quite hawkish on China and bullish on national security businesses and the military-industrial complex. If I were a young and ambitious person whose career so far was largely in “A.I. safety” and other E.A.-associated fields, I might attempt to retcon my experience and interests as more national-security oriented. And if I were really trying to suck up to reactionary venture capitalists I might also imply that I was unjustly fired over unfair charges of racism by a devious H.R. drone.
While the specifics of this C.V. are credibility-building among Aschenbrenner’s target audience (investors and founders in whose companies he’d like to invest, as well as dupes on Twitter who will boost his profile), just as important is the image he fits: Young, prodigious, confident, fast-talking, able to speak fluidly on a range of subjects from geopolitics to epidemiology to chip design. If Aschenbrenner weren’t a Zoomer I’d call him a millennial ambition psycho; certainly, he shares with the Ivy League sociopaths of my generation a cloying, manic self-assurance that somehow scans as “genius” to the credulous and the powerful and as “extremely annoying bullshit” to literally anyone else.
I mean, let’s be honest: He sounds like Sam Altman, and the boost from Cowen will likely be as important to his reputation as the boost Altman received from Paul Graham was to his, even if Graham probably wishes he’d never written such nice things about Altman. It seems too cheap and on-the-nose to point out that in their unwavering confidence and abstract relationship to the truth, large language models like GPT-4o project a kind of intelligence that most closely resembles the intelligence projected by the founders and investors and ambitious little freaks behind this generation of generative A.I. applications. But, you know, if Leopold Aschenbrenner impresses you I can see why “the talking computer that confidently bullshits in order to tell you what you want to hear” would also impress you.
Because what is he really peddling here? Here’s a tweet showing off one of the graphs from his PDF:
This is a well-designed graph, in that it’s so annoyingly fake and disingenuous, in so many ways and on so many levels, that it pre-emptively exhausts critics. You want to begin by pointing out that it’s misleading to present a chart showing exponential growth and say “it just requires believing in straight lines on a graph”--but you would have to then explain what, precisely, is growing exponentially, and unfortunately nothing on this graph--none of the words, none of the axes, none of the lines--bears even an indirect relationship to an observable, measurable, or quantifiable phenomenon.3 What is “effective compute”? What is “Smart High Schooler” and how do we know that’s equivalent to, or the best way to describe, GPT-4? Here are three charts that bear a more honest relationship to reality than Aschenbrenner’s:
In some sense this is all ignorably stupid stuff from a 24-year-old who desperately needs to spend a decade having a normal job and a normal life around normal people who are not Like Him. But the addition of China paranoia to the now-standard A.I. messianism he’s espousing is troubling. Aschenbrenner is distinctly not an expert here, even if he dissembles as one--“You need to stay ahead of the curve of how AGI-pilled the CCP is,” he says on the Dwarkesh Podcast, only to admit, an hour or so later, “I'm really uncertain on how seriously China is taking AGI right now”--but his clear skill at reading his audience suggests that there’s a strong appetite for this kind of red meat among a fraction of the Silicon Valley investment class.
It’s been more than a year since GPT-4 was released, which means we’re already behind Aschenbrenner’s exponential schedule. Progress has been incremental, and the flaws and shortcomings of LLMs have become much more obvious. Insisting on an “A.G.I. race” with China provides a much-needed injection of urgency and energy, propping up an A.I. hype cycle already missing Aschenbrenner’s ambitious “effective compute” A.G.I. curve--sorry, “straight line.”
One more note: I don’t have much of a marketing team or P.R. operation. The best way for me to find new readers is for current readers to tell their friends. Please share Read Max where you can, and especially, forward the newsletter to anyone you think would enjoy it!
Conservative politics as a meme-stock long con
Here’s an interesting Wall Street Journal story about the hedge-fund manager Bill Ackman making a plan to take his firm Pershing Square public “to capitalize on his social-media fame.” The first step is the imminent sale of a part of his stake in a deal that values Pershing Square at $10.5 billion, which is a rather large number for a firm that manages $16.3 billion; as the Journal points out, neutrally, “other asset managers with valuations in the same ballpark manage several times that.” But Pershing Square has something that those other firms don’t have: a C.E.O. who posts a lot on Twitter.
His “brand-name profile and broad retail following, along with a substantial media following, will drive substantial investor interest,” the prospectus said. One of the people familiar with the matter said Ackman plans to write about new investments on X once the retail fund gets approved. (He is currently barred from marketing his Europe-listed fund to U.S. investors.)
What is the nature of this “social-media fame” and “substantial media following”? Ackman, as Read Max has previously covered, has recently become a prominent and enthusiastic anti-woke crusader on Twitter, where he helped lead the campaign that ultimately forced Harvard President Claudine Gay to resign. (An activist campaign he compared, in Reeves Wiedeman’s excellent New York magazine profile, to the activist investing by which he made his name.)
He’s far from the only anti-woke thought leader to be trading on his media presence through the stock market. There’s Donald Trump’s “Trump Media,” parent company of his social-media app Truth Social, whose shares have tended to trade at much higher prices than you might expect. (Trump Media shares are down following Trump’s conviction on 34 felony counts.) There’s Elon Musk and his various business ventures, though admittedly the whole deal with Tesla dead-enders is a little more complicated. And there’s erstwhile Republican primary candidate and certified millennial ambition psycho Vivek Ramaswamy, whose anti-woke investment company4 recently bought up 7.7 percent of Buzzfeed for $6.81 million, sending the stock briefly surging 20 percent, presumably because certain retail investors liked his very specific suggestions for the company:
Mr. Ramaswamy has some ideas for ways BuzzFeed can jump-start its business. In his letter, the former Republican presidential candidate suggested hiring high-profile “creators” like Mr. Carlson, Mr. Barkley and Aaron Rodgers, the National Football League quarterback. Mr. Carlson and Mr. Rodgers are popular in right-wing media circles for their advocacy of conservative causes.
Elsewhere in his letter, Mr. Ramaswamy criticized BuzzFeed’s 2017 decision to publish a dossier of unverified information that asserted there was a connection between former President Donald J. Trump and Russia; much of that document, the so-called Steele Dossier, was subsequently discredited.
One way of thinking about this “trend” of conservative stars leveraging their media profiles and anti-woke reputations to raise money from (and/or cash in on the largesse of) politically aligned retail investors is as a symptom of a general collapse of whatever nominal barriers once might have existed between finance, politics, and entertainment. “Investor,” “influencer,” “politician” become interchangeable words; “NYSE,” “Twitter,” “elections” interchangeable arenas.
But a maybe more straightforward way of thinking about it is as the next evolution of a longstanding feature of conservative politics: the grift. It’s now been 12 years since Rick Perlstein wrote his classic Baffler essay “The Long Con,” on the preponderance of schemes, scams, frauds, and cons in the conservative movement, and not only does the piece hold up, it provides a useful framework for understanding Ackman, Ramaswamy, Musk, and Trump:
And yet this stuff is as important to understanding the conservative ascendancy as are the internecine organizational and ideological struggles that make up its official history—if not, indeed, more so. The strategic alliance of snake-oil vendors and conservative true believers points up evidence of another successful long march, of tactics designed to corral fleeceable multitudes all in one place—and the formation of a cast of mind that makes it hard for either them or us to discern where the ideological con ended and the money con began. […]
[I]t’s not really useful, or possible, to specify a break point where the money game ends and the ideological one begins. They are two facets of the same coin—where the con selling 23-cent miracle cures for heart disease inches inexorably into the one selling miniscule marginal tax rates as the miracle cure for the nation itself. The proof is in the pitches—the come-ons in which the ideological and the transactional share the exact same vocabulary, moral claims, and cast of heroes and villains.
What Ackman is doing here is perhaps slightly more subtle than Ramaswamy’s anti-activist activist investing, or Trump’s extremely blunt financial instrument. But as Perlstein details, regardless of the actual pitch, the key first step for this kind of direct appeal is assembling a list:
Following the Goldwater defeat, Viguerie went into business for himself. He famously visited the Clerk of the House of Representatives, where the identities of those who donated fifty dollars or more to a presidential campaign then by law reposed. First alone, and then with a small army of “Kelly Girls” (as he put it to me in 1996), he started copying down the names and addresses in longhand until some nervous bureaucrat told him to cease and desist.
By then, though, it was too late: Viguerie had captured some 12,500 addresses of the most ardent right-wingers in the nation. “And that list,” he wrote in his 2004 book, America’s Right Turn: How Conservatives Used New and Alternative Media to Take Over America, “was my treasure trove, as good as the gold bricks deposited at Fort Knox, as I started The Viguerie Company and began raising money for conservative clients.”
The mailing addresses of 12,500 of some of the country’s stupidest right-wingers is a pretty good start if you want to make some money. But so is 1.2 million Twitter followers comprising some of the country’s stupidest right-wingers.
Dwarkesh Podcast is to Lex Fridman Podcast what Lex Fridman Podcast is to The Joe Rogan Experience.
“Why didn’t I ultimately pursue econ academia?” Aschenbrenner says on Dwarkesh Podcast. “There were several reasons, one of them being Tyler Cowen. He took me aside and said, ‘I think you’re one of the top young economists I’ve ever met, but you should probably not go to grad school.’… he kind of introduced me to the Twitter weirdos. I think the takeaway from that was that I have to move out west one more time”
Even the figures grounded in some kind of semi-reproducible measurement are misleading, e.g. this chart showing GPT-4’s performance on a variety of standardized tests:
“Performance on exams” is a reasonable measurement of certain kinds of knowledge and aptitude in humans, but is it reasonable for LLMs? As Melanie Mitchell wrote about similar hype around GPT-3’s ability to outperform humans at common exams:
For example, when a human succeeds in answering a test question such as the example on inventory given above, we assume that the human can then generalize this understanding to similar situations—the point of the test, after all, is to evaluate knowledge and skills that go beyond the wording of any specific question. But is the same true for ChatGPT? […] The fact that ChatGPT does well on one version of a problem does not mean that it has a humanlike understanding of the problem or that it will be able to solve similar (or even essentially identical) problems. Probing the system’s understanding requires much more than giving it a single version of a question.
It’s extremely impressive to score in the 88th percentile on the LSAT. But as an example of Mitchell’s point, one expects that anyone scoring in the 88th percentile on the LSAT--and the vast majority of the people scoring below it as well!--would also be able to answer this question correctly on the first try:
Either way, it’s already been as long since GPT-4 was launched as it was between GPT-4 and GPT-3.5, so we’re already falling behind the pace of progress (at scoring even higher on the LSAT) promised by Aschenbrenner’s figures.
From Bloomberg: “Strive Asset Management is an anti-activism fund, which has pushed companies to stay out of “woke” politics and has pressured about a dozen companies to end compensation incentives for environmental and social goals.”