The interested normie's guide to OpenAI drama
Who is Sam Altman? What is OpenAI? And what does this have to do with Joseph Gordon-Levitt's wife??
For many people, especially those working in the A.I. industry and the social and professional ecosystems that have grown out of it over the last few years, the news on Friday afternoon that Sam Altman had been fired as C.E.O. of OpenAI--and his subsequent reinstatement late Tuesday night--marked a world-historically shocking turn of events, reverberations from which will be felt for centuries--a turning point on the order of Brutus assassinating Caesar, or Lowtax exiling Moot from SomethingAwful.
But for many others, the phrase “OpenAI has fired Sam Altman” is like the phrase “Zendaya is Meechee” or “Bibme is now part of Chegg” or “He was in the Amazon with my mom, when she was researching spiders right before she died”--a collection of words that carries no emotional or semantic weight, an example sentence that exists to demonstrate syntactical rules, language that rolls off the brain like water from a duck’s back.
Indeed, these other people, the people who spend very little time thinking about “artificial intelligence” or using ChatGPT or following the deranged pronouncements of various A.I. influencers on Twitter, constitute the vast majority of humans on this planet. Perhaps you are one of them! Who is Sam Altman, exactly?? You might be wondering. (Or perhaps you are wondering nothing of the sort, having simply, and probably correctly, moved on.) Is that different from Sam Bankman?? What is OpenAI? Why did this happen, and is it important for me to care? And what does Joseph Gordon-Levitt’s wife have to do with it?
There are plenty of places you can go to ascertain some of these facts. But only Read Max, writing from its brother-in-law’s place in Atlanta, can communicate to you the vibe. Are the people involved cool? (No.) Is anything about this important? (Not really.) Is everything that has happened so far actually very funny? (Yes.)
An important caveat regarding the text that follows:
I think probably the most important thing to communicate here is that this is not, and I mean this quite seriously, an important story. It is an interesting and revealing story, and--and this is the best argument for following it--an objectively extremely funny story, but there is no particular reason that you, a presumably normal person, need to know or care or form any opinions about it, unless perhaps you need subject matter for Thanksgiving conversation.1 For a variety of reasons (anxiety, boredom, credulity) a number of news outlets2 are treating OpenAI like a pillar of the economy and Sam Altman like a leading light of the business world, but it is important to keep in mind as you read any and all coverage about this sequence of events, including this newsletter, that OpenAI has never turned a profit (and may never!); that it is one of many A.I. companies working on fundamentally similar technologies; that the transformative possibilities of those technologies (and the likely future growth and importance of OpenAI) are as-yet unrealized, rest on a series of untested assumptions, and should be treated with skepticism; and that Sam Altman, nice guy though he may be, has never demonstrated a particular talent or vision for running a sustainable business.
Nevertheless!
This newsletter is in the business of explaining things that have happened, and things that will happen, and this is a thing that has happened, with consequences for the things that will happen.
Some basic facts/timelines
The four-sentence version of this story is something like:
Sam Altman, a co-founder of OpenAI, a cutting-edge machine-learning start-up structured for funny ideological reasons as a non-profit, was fired by the board of directors from his position as C.E.O. for reasons that are still somewhat vague but seem to come down to the feeling that he was not taking seriously the “safety” part of the company’s mission to create a safe machine super-intelligence.
Microsoft, OpenAI’s top investor and provider of the platform on which its software runs, hired Altman and several other colleagues who had resigned or were threatening to resign because of Altman’s departure, while OpenAI hired Twitch co-founder Emmett Shear as its interim C.E.O.
At least one board member, OpenAI Chief Scientist Ilya Sutskever, apologized for the way he voted, seemingly thanks to the tearful pleas of the wife of a colleague who had resigned, and signed a letter demanding Altman’s return along with like 90 percent of OpenAI’s employees.
After several days of negotiations, the board reinstated Altman as C.E.O. and resigned (except for one member, Quora C.E.O. Adam D’Angelo), to be replaced by Bret Taylor and … Larry Summers.
A more detailed timeline follows:
On Friday afternoon, four of the six members on the board of OpenAI fired Sam Altman from his position as C.E.O., apparently over Google Meet. (LOL.)
According to its statement, the board fired Altman following “a deliberative review process by the board, which concluded that he was not consistently candid in his communications.” Mira Murati, the company’s C.T.O., was named interim C.E.O.
Greg Brockman, the company’s president, was also removed from his position as chairman of the board, and announced on Twitter that he was quitting. Three senior research scientists followed him in resigning almost immediately.
Following the firing, board member and OpenAI Chief Scientist Ilya Sutskever told The Information, “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.”
All Friday and Saturday, journalists, V.C.s, A.I. freaks, and other Twitter-addicted degenerates had a major meltdown.
On Saturday night, Altman tweeted “I love the openai team so much.” Prominent OpenAI employees, including Murati, responded with heart emojis. (I know--I know--but this is how these people operate.)
On Sunday morning, Altman visited the OpenAI offices at Murati’s invitation and tweeted a picture of his visitor badge with the caption “First and last time i ever wear one of these.” (Again, I know! But people think this is impressive.)
That afternoon, Bloomberg reported that Altman and the board were engaged in “negotiations” to return Altman to his position, subject to a 5 p.m. deadline set by Altman. The reporting did not make clear who, precisely, Altman was negotiating with or what might happen after the deadline.
At the office Sunday night, Brockman’s wife Anna begged Sutskever to change his mind.
The deadline passed. Late Sunday night, OpenAI announced Twitch co-founder and former C.E.O. Emmett Shear as its new interim C.E.O., replacing Murati. The board had still not elaborated on its reasons for firing Altman, but Shear tweeted that “The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.”
At 3 a.m. Monday, Microsoft C.E.O. Satya Nadella announced on Twitter that “Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.”
Five hours later, Sutskever tweeted “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.” Altman responded with, yes, heart emojis.
Around the same time, OpenAI employees published an open letter demanding that the board resign and Altman return as C.E.O. By the afternoon, 90 percent of employees had signed it, including Sutskever.
On Tuesday, Bloomberg reported that “Sam Altman, members of the OpenAI board and the company’s interim chief executive officer have opened negotiations.” If you are wondering why they were “opening negotiations” on Tuesday when there were supposedly negotiations “ongoing” the previous weekend, join the club. This time, Bloomberg reported that Altman was specifically negotiating with board member Adam D’Angelo, C.E.O. of Quora. Shear was reportedly telling people “he doesn't plan to stick around if the board can't clearly communicate to him in writing its reasoning for Altman's sudden firing.”
Late Tuesday night, OpenAI announced that Sam Altman would be re-hired as C.E.O. and the board (including Altman) would be replaced by a three-person board of former Salesforce co-C.E.O. Bret Taylor, economist Larry Summers, and D'Angelo.
That, I think, catches us up to the time of publication. But it does not answer any of our important questions: Who are all these people? Why did this happen? What does it all mean?
Unfortunately, it is a bit annoying to explain it all to normal people. To fully understand what happened at OpenAI you need to have some understanding of the people involved and the culture of A.I. research and development in Silicon Valley, and the culture of A.I. research and development in Silicon Valley combines the fractious religious-political disputation of early Christianity with the hothouse interpersonal drama of high-school drama techs, and even someone like myself, who is professionally compelled by the subject matter, and personally compelled by a love of drama, is not quite sure how best to explain the positions and entanglements of the various players and institutions in the field.
How OpenAI works
In this case--and believe me, I know how boring this sentence is about to sound, but stick with me, there is some funny stuff coming later--it might be best to start with OpenAI’s weird corporate structure. This is a diagram of ownership and control of OpenAI’s various constituent units; trust me when I tell you that this diagram is actually quite funny:
What are we looking at here? How does something end up like this?
OpenAI was founded as a nonprofit in 2015 (by a number of Silicon Valley luminaries, including Altman and Elon Musk) “with the goal of building safe and beneficial artificial general intelligence3 for the benefit of humanity.” As even normies may have gathered at this point, the culture of A.I. research, even in Silicon Valley, places an enormous amount of weight on “A.I. safety,” a broad term that can encompass everything from reasonable questions like “how might these systems be implemented by state actors?” to not-insane but sort of beside-the-point questions like “what if someone makes a deepfake of Obama?” to somewhat far-fetched questions like “what if a far-future A.I. tortured exact software copies of our personalities as punishment for not helping it be born?”
OpenAI was an attempt to bring some of the tech industry’s resources to bear on the task of building a super-intelligence before a government or corporation built an evil one. Its nonprofit status supposedly ensured that the effort made safety, rather than revenue, its priority. This was many years before the impressive results of recent large language models like GPT-4, and there was both relatively less competition in the space and less clarity on how the technology might be commercialized. The idea was, basically, “what if the Ford Foundation created Skynet instead of Cyberdyne.”
But in 2019, after several years of building and releasing A.I. models and experiments and raising money from tech bigwigs and Silicon Valley institutions, OpenAI announced it was changing its structure. The company claimed that it had become clear that “donations alone would not scale with the cost of computational power and talent required to push core research forward,” and even a well-financed non-profit like OpenAI couldn’t compete for talent and resources with Google or Facebook. So the nonprofit announced the formation of a “capped profit” subsidiary company, which could act like a more normal tech-industry startup--raising outside investment and distributing shares to employees--but which was, crucially, still controlled by the nonprofit entity and the nonprofit’s board.
That explains the rightmost two-thirds of the above diagram: OpenAI Global, the capped-profit company, is owned by a holding company, which is itself jointly owned by OpenAI employees and the nonprofit body OpenAI, Inc., which is controlled by the board of directors, which, as of last Thursday, consisted of three OpenAI co-founders:
The board (dramatis personae)
Sam Altman, co-founder and (briefly former) C.E.O. Before OpenAI, Altman was the president of the prestigious startup accelerator Y Combinator, a successful investor, and the founder of a failed 4square clone called “Loopt.”
Altman is not what they call a “technical founder,” which is to say that he is not personally a machine-learning genius, but he is an extremely well-liked and well-networked figure among venture capitalists. Before OpenAI, Altman’s major accomplishments mostly involved charming and impressing older investors and accessing their deal flow, but in his capacity as C.E.O. of OpenAI he has transformed himself into a kind of beloved mascot for the A.I. industry, and especially the sub-faction of the broad machine-learning ecosystem that backs so-called large language models like OpenAI’s GPT as forerunners of AGI, if not as AGIs themselves.

Ilya Sutskever, co-founder and Chief Scientist. Sutskever left Google’s A.I. division Google Brain for OpenAI.
Sutskever is highly regarded as a machine-learning researcher, but (and you can imagine how this might affect his decision-making) his reputation rests more or less entirely on the extremely impressive things he’s accomplished at OpenAI in concert with Altman and Brockman.
The other thing to understand about Sutskever is that he is almost stereotypically a “weird semi-religious-about-A.I. guy.” As former employee Scott Aaronson told the A.I. X-Risk podcast last year: “I’d tell him about my progress…and he would say, ‘Well, that’s great, Scott, and you should keep working on that. But what we really want to know is how do you formalize what it means for the AI to love humanity?’”
Greg Brockman, OpenAI co-founder and (briefly former) President. Brockman left his position as C.T.O. of Stripe to be the founding C.T.O. of OpenAI; he was in charge of recruiting the first batch of employees.
Sutskever was the officiant at Brockman’s wedding to his wife, Anna, who would later tearfully beg Sutskever to change his mind at the OpenAI offices.
And three outside members:
Adam D’Angelo, co-founder and C.E.O. of Quora.4 A high-school friend of Mark Zuckerberg’s and early Facebook employee. Quora recently developed a chatbot product, Poe, that relies on ChatGPT (as well as Claude, a chatbot developed by OpenAI rival Anthropic), the existence of which has led people searching for a semi-rational explanation for Altman’s ouster to blame it all on D’Angelo.
Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology. Toner is a well-regarded figure in the somewhat vague world of “A.I. safety” and the author of a number of research papers on the subject.
Tasha McCauley, Adjunct Senior Management Scientist at RAND Corporation. Toner and McCauley, a former robotics C.E.O. and tech entrepreneur, both have ties to Effective Altruism and Effective Altruism-adjacent institutions like Open Philanthropy. Much more interesting, if rather less relevant, is the fact that McCauley is married to Joseph Gordon-Levitt? Wonder what he thinks of all this!!
But wait! you ask, after careful study of the entire diagram. What about Microsoft? Well, if you are asking that question you are already ahead of where OpenAI’s board was last Thursday.
What about Microsoft?
Shortly after contorting its corporate structure around the capped-profit LLC, OpenAI announced that Microsoft was investing $1 billion in the LLC, licensing the company’s technology (up to but not including the point of AGI, whatever that would be), and providing OpenAI with a computing platform.
In the immediate term this was a mutually beneficial arrangement: OpenAI would get money and resources to hire workers and develop software, and Microsoft--until recently seen as behind its tech-megacorp peers--would get access to cutting-edge machine-learning tech, which it could use for some theoretical/eventual revenue-deriving service such as the re-animation of Clippy.
But in the long term: Huh. If you have any experience working at normal nonprofits, that is probably what you are saying: Huh. An unfortunately true fact about nonprofits is that no matter their formal mission and legal structure, at the end of the day the direction and priorities of the organization can be set by the donors who cut the biggest checks. Now imagine you have a nonprofit whose main project is not “promoting the arts” or whatever but instead “building an exciting piece of software but specifically not profiting from it,” and whose main donor is not “a rich guy” but “a company that sells software for a profit.” And also imagine that you have structured the nonprofit so that this “donor” is actually legally an investor who can expect some kind of return. Congratulations: you have successfully imagined an extremely unstable nonprofit.
To be clear, this doesn’t mean that Microsoft was, as of its investment, calling the shots--just that OpenAI was now practically obligated to attend closely to Microsoft’s preferences, and Microsoft was, in turn, practically obligated to attend closely to the board’s preferences.
Microsoft was, one imagines, aware of this tension when it signed on, and no doubt several of the company’s thousands of lawyers said--in a positive tone indicating that despite their reservations they were team players--“this company is structured in a very interesting way and legally we have very little recourse if they decide to explode themselves.”
But the board is also just six people, three of whom were close, longtime colleagues at OpenAI. I have no evidence that Sam Altman and Microsoft C.E.O. Satya Nadella ever spoke specifically about this, but it is extremely easy to imagine Altman making assurances to Nadella: [Sample Sam Altman dialogue heightened for dramatic effect.] The board will not be a problem. It’s me, Greg, Ilya, and three randos. The only way I could be outvoted on the board is if my president or my chief scientist betrayed me. And that, my friend, will never happen.
Why OpenAI is the way it is
The point of having a board of directors that can legally (if not stably) fire a C.E.O. and tell Microsoft “thanks for the Azure credits, now go fuck yourself” is that you believe there are higher principles than liquidity events. In the case of OpenAI those principles are something like, one, “build a super-intelligence inside the computer” and, two, “but a good one.” This is, to me, an “interesting” mission, in the same way the organizational diagram reproduced above is “interesting.”
Some context: The subgroup of A.I. researchers, investors, and enthusiasts who believe strongly in the imminent arrival of AGI is often glibly divided into two camps: “doomers” or “decels,” who think AGI is existentially dangerous and must be retarded, if not avoided, and “boomers” or “accelerationists,” who want to move full-steam ahead with A.I. development, regardless of consequences.
In highly partisan and negatively polarized discursive spaces like Twitter, the “decel” camp is associated (somewhat unfairly) with terms and ideas like “A.I. safety” and “effective altruism,” while the “boomer” camp is associated (also unfairly) with terms and ideas like “techno-optimism” and “e/acc.” (More on the subject here.) Much of this “debate,” such as it occurs on Twitter among partisans, has almost nothing to do with machine learning as such, and is probably better regarded as a mutation/continuation of SJW/Gamergate wars, with attendant political implications.
Altman’s firing has already been filtered through the terms described above. Here, for example, is Coinbase C.E.O. Brian Armstrong complaining that “some EA, decel, AI safety coup” had “just torched $80B of value” and “destroyed a shining star of American capitalism.”5
But the Twitter debate masks the fact that most A.I. researchers’ attitudes and beliefs about A.I. don’t map neatly onto the two ready-made camps invoked by Armstrong. For one thing, many, many A.I. scientists believe AGI is impossible, extremely distant, or so poorly defined as to be irrelevant; and many are concerned with certain kinds of “safety” but not others, and are “accelerationist” or not depending on the company, the model, the situation, etc.
More to the point, OpenAI’s core values and mission are themselves both “accelerationist”--in that the company is actively seeking to create AGI before anyone else--and “doomer,” because it believes there is a strong possibility that AGI could destroy the world. To illustrate this, allow me to share two paragraphs from Karen Hao and Charlie Warzel’s Atlantic article on the split:
Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.
The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles.
An organization with this kind of culture is likely to be populated at all levels--from new employees to hand-picked board members--by both “accelerationists” and “decelerationists,” “boomers” and “doomers.” This is a manageable tension, but it’s still a tension, and importantly it’s one located within the organization itself.
Why was Sam Altman fired and then re-hired?
We still don’t know the precise “lack of candor” that led the board to fire Altman, beyond reporting and quotes that suggest some vague miscommunications and incompatibilities around speed and commercialization--with Altman seeking to grow quickly and release more often, apparently against the board’s wishes or instincts. A recent New York Times article outlines some of the growing tensions on the board. One that stuck out to me was Altman reprimanding Helen Toner for referring to OpenAI in a research paper she’d published:
A few weeks before Mr. Altman’s ouster, he met with Ms. Toner to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.
Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times. […]
Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.
I strongly recommend you read the paper that pissed off Altman so much, or at least Ctrl-F “OpenAI” and skim it. It’s wildly complimentary! If I’d written that paper and Sam Altman had complained to me about it… I’m not saying I’d agree to fire the guy, but I’d certainly question his commitment to safety, and possibly his reading comprehension skills as well. If I were paranoid I might also wonder if Altman was trying to get me off the board to consolidate his own power on it.
The Times piece also helpfully reports that this is actually the third (but first successful) coup attempt against Altman at OpenAI, following a 2018 attempt by Elon Musk to take control of the company:
Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt to push Mr. Altman out.
Look, institutionally speaking, the endless drama of this weekend and beyond is a product of the two above-described constitutive tensions coming to a head in quick succession: initially, the cultural tension over “safety,” and then, as a consequence, the structural tension over power and control. It seems inevitable that these tensions would have demanded resolution no matter who was in charge of OpenAI.
But one of the funny ways the events have played out is as a referendum on the character of Sam Altman. In part this is due to the nature of the story: The board essentially called him a weird liar. And in some sense this story overall should be a referendum on the character and skill of Sam Altman because Sam Altman was the guy who founded the company and Sam Altman was the one who pushed the stupid nonprofit structure and all of that came back to bite him in the ass.
But I also think that Altman and his allies would like the public understanding of this dispute to be a referendum on Altman’s character, because Altman is widely liked in V.C., A.I., and tech-media circles. As the tech reporter Teddy Schleifer wrote on Threads, Altman is accessible and charming and cultivates relationships with reporters; he’s also good at building his network among founders, investors, and executives. His emergence as an A.I. figurehead--a cautious accelerationist prone to inside-joke tweets--has given him a fanbase outside of the venture capital world. And, to his absolute credit, he is apparently so well-liked by his OpenAI employees that a vast majority of them have promised to leave for Microsoft unless he is reinstated.6
But if you are not tuned to the Altman frequency--if you are not the kind of A.I. enthusiast for whom Altman represents an inspiring hero, or the kind of aging venture investor captivated by Altman’s fluency, or the kind of Silicon Valley founder to whom Altman has been nice--and, cards on the table, I am none of those things--it can be a bit hard to see what, exactly, is so impressive about the guy. Indeed, if you mostly know him for his tweets and blog posts you might think to yourself: this guy seems like a kind of dull mind.7
“Being liked,” in particular by important people, is, as I suggested above, Altman’s chief skill, and I mean that only slightly dismissively. If you are leading an organization and trying to raise money, “being liked” might be the most important skill you can have! But many people are good at being liked. Altman’s main claim to fame prior to OpenAI was being so impressive to the Silicon Valley investor and blogger Paul Graham that Graham wrote a blog post describing Altman as one of the “five most interesting founders” he knew, up there with Steve Jobs. The company Altman had founded, Loopt, failed, but Graham hired Altman to run his prestigious accelerator, Y Combinator, launching both Altman’s career and his fortune.
Interestingly, Eric Newcomer reported a few days ago that “Altman’s departure from Y Combinator was more contentious than publicly understood”:
Altman, then YC’s president, was asked to leave, a source told me. He left YC without any affiliation even though initially Altman was supposed to be chairman of YC or at least an advisor. Some similar issues were at play. At YC, Altman was distracted with OpenAI; he invested aggressively alongside the accelerator; he wanted to expand dramatically, even trying controversially to launch YC in China (a decision the subsequent YC president reversed); and YC’s brand was becoming synonymous with Altman. At OpenAI, Altman has reportedly been talking about starting a hardware company with Masayoshi Son and Jony Ive; he’s pushing for faster and faster expansion of OpenAI; and he’s certainly become the company’s almost singular figurehead. Altman seems to have an almost insatiable appetite to increase his own power and influence. Bloomberg reported that Altman was raising billions for a new chip venture.
What is my point here? Well, I guess I mean that it seems very possible that Sam Altman is a compelling leader, generous networker, impressive speaker, etc. and also a weird liar, intense in a bad way, annoyingly ambitious, etc. A more generous [???] way of putting this might be to say that he has “founder brain,” which is to say he wants to take his little start-up to global dominance through sheer force of will and maintain complete control throughout, and if that means cutting some social corners or steamrolling researchers and ethicists over their “qualms” in the process, well, it’s the price of success.8
And if you are a board member of the OpenAI nonprofit tasked with “ensuring that this company does not create a bad thinking computer” you may at some point decide that “founder brain” is incompatible with the work your organization is supposed to be doing. “I kinda don’t trust him” is not necessarily a good reason to fire someone from a normal job like McDonald’s or president; but it’s absolutely a good-enough reason to fire someone from the job of “accelerating the construction of an uncontrollable global superweapon9 but in a good way,” at least if that’s how you understand your job.
But if you are an employee looking to devote your life to creating AGI, “founder brain” might be exactly what you’re looking for. And if you are an investor, having staked a lot of money and a whole set of institutional reputations on a particular bet, the absurd, sociopathic dedication of “founder brain” is precisely what you’re looking for.
I think this is kind of the crux of the story. For a long time now a cultural conviction about “A.I. safety” has held sway among A.I. researchers in Silicon Valley--such that even money-hungry investors were willing to launch an A.I. research team as a dedicated nonprofit rather than a “traditional” startup. But as machine learning has advanced--in particular, the large language models pioneered by OpenAI, and the seemingly obvious ways they might be used to save on labor costs--the safety-focused structure and cultural imperative create increasing friction with the structures and cultural imperatives of venture investment and the tech industry more broadly. Wide-ranging, often ridiculous ideas about existential threats from A.I. were allowed to proliferate (or even encouraged!) when they weren’t costing anyone money. But when they start to come between investors and their payout, something has to give.
Some still-unanswered questions
What was Joseph Gordon-Levitt doing while all this was going down?
Further reading
“Inside the Chaos at OpenAI,” Karen Hao and Charlie Warzel, The Atlantic
“OpenAI’s Misalignment and Microsoft’s Gain,” Ben Thompson, Stratechery
“Give OpenAI's Board Some Time. The Future of AI Could Hinge on It,” Eric Newcomer, Newcomer
“Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety,’” Jon Victor, Stephanie Palazzolo, Anissa Gardizy and Amir Efrati, The Information
“Who Controls OpenAI?,” Matt Levine, Bloomberg
“OpenAI’s Oops d’Etat,” Byrne Hobart, The Diff
“The Case Against Sam Altman,” Evan Armstrong, Every
“The Perpetual Rise of Sam Altman Takes an Unexpected Turn,” Ellen Huet, Bloomberg
“Sam Altman and OpenAI Are Victims of Their Own Hype,” John Herrman, Intelligencer
You’ve been reading READ MAX, a newsletter guide to the future. If you found this piece useful or edifying, please consider subscribing. We are fully supported by paying readers who share our specific brain disease, or at least suffer from some adjacent brain disease.
More Read Max writing about A.I. can be found here:
Unless one of the following rare conditions applies:
You are Sam Altman.
You work with or for Sam Altman.
You are Microsoft C.E.O. Satya Nadella.
You are a person who works in or with commercial generative artificial intelligence applications.
You run a high-quality Substack newsletter or perform some other job whose main obligation is to create topical content.
It is objectively insane that The New York Times led its homepage for several days with a story about the founder of Loopt getting dumped by his dysfunctional nonprofit. There are two wars going on right now! At least!
“Artificial general intelligence” has never been consistently or clearly defined by OpenAI; company material and statements variously describe AGI as “systems that are generally smarter than humans,” “highly autonomous systems that outperform humans at most economically valuable work,” “the equivalent of a median human that you could hire as a co-worker,” and “magic intelligence in the sky.” “Safe and beneficial” is similarly vague: when asked how he might assign values to a hypothetical A.I., Altman responded:
One idea, Altman said, would be to gather up “as much of humanity as we can” and come to a global consensus. You know: Decide together that “these are the value systems to put in, these are the limits of what the system should never do.”
The audience grew quiet.
“Another thing I would take is for [Buddhist monk/psychologist/mindfulness guru] Jack” — Kornfield — “to just write down ten pages of ‘Here’s what the collective value should be, and here’s how we’ll have the system do that.’ That’d be pretty good.”
To me, personally, “C.E.O. of Quora” is just an extremely funny thing to be; like, I suppose I often forget that people actually go to work at Quora, that it is a whole company and not a thing that emerged from the internet fully formed and unbidden.
Armstrong is a legendary dummy but there is something particularly funny to me about him describing this insane cult nonprofit as a “shining star of American capitalism” whose board will be “sued to high heaven.”
I am trying to be fair here, because I do think the support shown by the open letter is an impressive testament to Altman’s leadership, but let’s also be real: How many of those OpenAI employees are signing that letter out of absolute loyalty to Sam Altman and how many are signing it because they know there are no real consequences and they would rather have their job return to basically what it was last week than have to go work at Microsoft? (And how many of them are signing, or have become Altman loyalists, because their wealth or retirement is tied up in shares of the capped-profit structure?)
This is, for whatever it’s worth, part of what makes the whole episode so funny: All these tweets about Altman as the single most important person in the development of AGI and all I can think is: him?
To which I say: OK bitch well next time maybe don’t make your start-up a nonprofit controlled by freakazoids.
I want to make clear that I am not endorsing the idea that OpenAI was developing a potentially omnicidal intelligence, or that any computer intelligence could actually represent an existential risk, just attempting to describe how certain people might understand the cost-benefit analysis.
It never won't be funny that Joseph Gordon-Levitt is still running a crowdsourced, mostly nonpaying twee art factory amidst all his tech connections...