If I’m being honest, my immediate reaction to the news that Sam Altman had been fired as C.E.O. of OpenAI was something like … relief? Not, to be clear, because of anything having to do with Altman or the company in the specific, but because it meant that I would have something to write about in this newsletter for several weeks. Not to fulfill every tech-industry stereotype about journalists here, but institutional drama and chaos have generally been quite good for Read Max Industries: Last year, at around this same time, the simultaneous collapse of FTX and takeover of Twitter by Elon Musk gave me material for nearly three months’ worth of columns, and brought in several hundred new paying subscribers. Here was my 2023 equivalent: an apparent story of dysfunction, delusion, and boardroom coups, at the hottest company in Silicon Valley. Or so I hoped.
But as I quickly learned--as I basically knew already, based on the declining interest in newsletters about “A.I.”--people outside the tech industry and its subsidiary sectors like the news media don’t actually care that much (or know that much, or think that much) about OpenAI or Sam Altman. “A.I.” is still interesting as a topic of vague conversation, but its specific purveyors and the characters in its universe have not really broken through to the average consumer of news. And why should they? I think it’s easy for journalists and programmers (two categories of worker whose skillsets can be approximated, if still quite poorly, by ChatGPT) to get a little carried away, and assume that their own obsession with recent breakthroughs in A.I., and their own assumptions about its importance, are matched by the general public. But as John Herrman wrote at Intelligencer last week, these are new investment vehicles developing speculative new technologies; for all the fascinating complexity of large language models, unless you’re an early adopter eager to experiment--or unless the apps are able to do passably well the tasks you do for a living--ChatGPT and DALL-E are functionally parlor tricks.
Nevertheless! As dismissive as I might sometimes feel about the extensive and breathless over-coverage of the boardroom drama at OpenAI[1]--and as funny as it ultimately is that Altman was simply returned to his seat with nothing changed but the makeup of the board--the shuffle does, I think, have external-world ramifications that may in fact touch the lives of people who are not, say, on Twitter. For one, a lot of people out there probably have some Microsoft stock in a retirement or pension account, and you may care about the OpenAI drama because your own personal financial condition depends in some small way on Microsoft’s pretty significant bet on the nonprofit’s technology as a source of future revenue. (The good news for these people is that MSFT is up by about $10 since the day Altman was fired.)
But the aspect of this that feels more significant, if less material, is that it probably marks the (beginning of the?) end of an era in which major Silicon Valley investors and executives happily dabbled in A.I. doomerism. Doomerism, and specifically existential risk, or “x-risk”--broadly speaking, the belief that machine-learning technologies have the potential to eliminate the human race--have their origins as discourses in the “rationalist” communities of the early 21st-century internet. They are not, strictly speaking, “scientific” beliefs (they might most kindly be called “speculations,” though you can probably imagine less kindly things to call them), but x-risk in particular has been adopted as a truth by a subsection of A.I. scientists and researchers, as well as by the cult-ier wings of effective altruism.
Interestingly, and somewhat uncomfortably, “x-risk” has also been taken up as a cause (or as a truth) by software-industry capital and management classes. It might seem counterintuitive to say that the technology you are funding or managing is existentially dangerous to the human race, but regardless of whether or not they believe it, “x-risk” is a useful discourse to people like Altman for a number of reasons, among them:
It’s great marketing for A.I.--if this is powerful and amazing enough to destroy all humans, think of what it can do for the B2B SaaS sector!
It’s useful rhetoric for recruiting, given that some portion of A.I. researchers believe in x-risk.[2]
It allows you to co-opt and subsume any specific, near-term, actionable criticism of your A.I. systems and the material effects of their deployment into a much vaguer, longer-term, and much-less-actionable fear of total apocalypse.[3]
It allows you to insist on (and probably dictate) incumbent-protecting regulations.
For these reasons and more, V.C.s and executives have been happy to play along with various levels of apocalyptic A.I. doomerism, dating back to even before OpenAI’s founding as a safety-oriented nonprofit. Until, of course (as I wrote last week), the doomerism gets in the way of investment returns.
Now that the tension has been made plain between people for whom doomerism is a convenient cover story and those for whom it is, however bizarrely, a way of life, I tend to think that the medium-term effect of coup and counter-coup will be the quieting of public doomerist sentiment on the part of Altman’s cohort.[4] I doubt we’ll see a full cessation of the rhetoric, since doomerism can still be politically useful (see point 4 above). But I suspect that Altman would not sign the open letters about A.I. existential risk if they were being passed around today. (Indeed, his Senate testimony earlier this year already focused much more specifically on “disinformation”-type concerns than on “x-risk.”)
As capital and management place fewer resources--rhetorical and otherwise--behind doomerist or x-risk concerns, I expect we’ll also see more and more frequent releases of A.I. models. Altman’s main contribution to A.I., besides being a relatively likeable and articulate figurehead, was already to accelerate the actual release of consumer-facing applications like ChatGPT, even over some controversy and objection at OpenAI; to some extent the cat was already out of the bag on this one, and Altman’s firing was only a confirmation that the camps had split.
For whatever it’s worth, I think more and more frequent releases are, in general, a positive thing, not because I’m an “accelerationist” as such--and not because I think the release or deployment of models is necessarily harmless--but because I think more people getting a chance to play with and mess around with “A.I.” will have the positive effect of demystifying it. Contrary to what you might expect, it’s much harder to be scared of apocalyptic A.I. if you’ve monkeyed around in ChatGPT, I think. And as those models are introduced and prove harmful in familiar and prosaic ways rather than the dramatic and existential ways promised by “x-risk” people, I suspect doomerism in general will recede from public consciousness.
[1] And I say this recognizing that I wrote about 6,000 words on the subject last week! But let me assure readers that I wrote all of that because I love drama, not because I think it mattered in particular.
[2] I suspect also that researchers who don’t believe in strict x-risk might prefer companies and institutions that are x-risk-curious, because “working for a cutting-edge research nonprofit dedicated to saving the world” feels better in the soul than “working for a software company generating returns for wealthy investors.”
[3] Or, on the other hand, to tar your “A.I. safety” critics as nuts by association.
[4] I’m not sure where else to put this besides a footnote, because it’s an extremely in-the-weeds, way-too-online observation for what is, at least by the standards of this newsletter, a relatively straightforward post, but my sense is that this new recognition that Sam Altman and the quasi-religious A.I. freaks and crypto converts may not have the same interests is what’s behind the sudden flowering of new factions of “e/acc” (including “e/acc-c,” “e/acc-d,” and “d/acc,” among others) in the taxonomies of overexcitable Twitter users. None of these factions are “real” outside the minds of a handful of Twitter users, but their articulation is symptomatic of a dawning clarity: to Altman and other management-class heroes, the profit motive is more important than the grand claims about saving humanity and ushering in the future.