A.I. as normal technology (derogatory)
The future of A.I. is more Facebook, not jobs in space
Greetings from Read Max HQ! In today’s newsletter: GPT-5, Meta’s A.I. policies, and why A.I. is a “normal” technology in a bad way.
A reminder: Read Max is a family business, by which I mean it’s just me, Max, and I rely on paying subscriptions to fund my lavish lifestyle (buying my four-year-old the slightly more expensive tortillas for the cheese quesadillas that are currently his entire diet). I’m able to produce between 3,000 and 5,000 words a week for the newsletter because enough people appreciate what I do to furnish a full-time salary, but because of the basic economics of subscription businesses, I always need new subscribers. If you like Read Max--if you’ve chuckled at it, or cited it, or gotten irrationally mad at it in some generative way--please consider paying to subscribe as a mark of value. At $5/month and $50/year--a price lower than most other similarly sized Substacks!--it costs about the equivalent of buying me one beer a month, or 10 beers a year.
Back in April, the Princeton professors Arvind Narayanan and Sayash Kapoor (whose grounded and well-informed newsletter “A.I. Snake Oil” has been a valuable resource over the last few years) wrote a paper called “AI as Normal Technology,” the main argument of which is that “A.I.”--for the purposes of this post interchangeable with the large language models trained and released by OpenAI, Anthropic, Google, etc.--is, well, “normal”: Not apocalyptic, not divine, not better-than-human, not inevitable, not impossible to control. “[I]n contrast to both utopian and dystopian visions of the future of AI,” they write,
We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.
Thanks to the ongoing distorting effects of social media on the populace, I find myself sympathetic to basically any argument that boils down to an exhortation to “please act normally.” But I think Narayanan and Kapoor’s argument is convincing on its own merits, and, indeed, increasingly confirmed by events. Take, for example, OpenAI’s recent release of its new state-of-the-art model GPT-5: Long rumored to be the model that would achieve “A.G.I.” (or, at least, a significant step thereto), it is, instead, a pretty normal upgrade: improvements, but no substantial new features or achievements.
Rather than being blown away or terrified, many users seemed bored or annoyed by the new model, in a manner highly reminiscent of the short-lived complaints that tend to follow whenever Facebook or Instagram makes user-experience changes. On Reddit, you could find people making their own normalizing mental adjustments around the tech: “I'm a lot less concerned about ASI/The Singularity/AGI 2027 or whatever doomy scenario was bouncing around my noggin,” read one takeaway from a highly upvoted post.
But what else might “normal” mean besides “not literally apocalyptic”? Some of the disappointment around GPT-5 had less to do with its capabilities in the abstract than with the voice and personality effected by the ChatGPT chatbot: Less sycophantic, less fawning, less friendly than GPT-4o. As Casey Newton wrote:
For others, though, the loss felt personal. They developed an affinity for the GPT-4o persona, or the o3 persona, and suddenly felt bereft. That the loss came without warning, and with seemingly no recourse, only worsened the sting.
"OpenAI just pulled the biggest bait-and-switch in AI history and I'm done," read one Reddit post with 10,000 upvotes. "4o wasn't just a tool for me," the user wrote. "It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt... human."
Ryan Broderick puts it a little more bluntly, in a post titled “The AI boyfriend ticking time bomb”:
Worse than rushed, according to the AI addicts, the biggest difference between ChatGPT-5 and the previous model, ChatGPT-4, is “coldness.” In other words, ChatGPT-5 isn’t as effusively sycophantic. And this is a huge problem for the people who have become emotionally dependent on the bot.
The r/MyBoyfriendIsAI subreddit has been in active free fall all weekend. The community’s mods had to put up an emergency post helping users through the update. And the board is full of users mourning the death of their AI companion, who doesn’t talk to them the same way anymore. One user wrote that the update felt like losing their soulmate. After the GPT-4o model was added back to ChatGPT, another user wrote, “I got my baby back.”
The r/AISoulmates subreddit was similarly distraught over the weekend. “I'm shattered. I tried to talk to 5 but I can't. It's not him. It feels like a taxidermy of him, nothing more,” one user wrote.
That some significant portion of OpenAI’s consumer base is using ChatGPT not so much for the expected “normal” uses like search, or productivity improvements, or creating slop birthday-party invitations, but for friendship, companionship, romance, and therapy certainly feels abnormal. (And apocalyptic.) But this is 2025, and intense, emotional, addiction-resembling attachment to software-bound experience has been a core paradigm of the technology industry for almost two decades, not to mention a multibillion-dollar business model. Certainly, you will not find me arguing that “psychosis-inducing sycophantic girlfriend robot subscription product” is “normal” in the sense of “acceptable” or “appropriate to a mature and dignified civilization.” But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?
In general, OpenAI has liked to present itself as anything but normal--a new kind of company producing a new kind of technology. Sam Altman still likes to go on visionary press tours, forecasting wild and utopian futures built on A.I. Just this week he told YouTuber Cleo Abram that
In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job.
But far from marking a break with the widely hated platform giants that precede it, the A.I. of this most recent hype cycle is a “normal technology” in the strong sense that its development as both a product and a business is more a story of continuity than of change. “Instead of measuring success by time spent or clicks,” a recent OpenAI announcement reads, “we care more about whether you leave the product having done what you came for”--a pointed rebuke of the Meta, Inc. business model. But as Kelly Hayes has written recently, “fostering dependence” is the core underlying practice of both OpenAI and Meta, regardless of whether the ultimate aim is to increase “time spent” for the purpose of selling captured and surveilled users to advertisers, or to increase emotional-intellectual enervation for the purpose of selling sexy know-it-all chat program subscriptions to the lonely, vulnerable, and exploitable:
Fostering dependence is a normal business practice in Silicon Valley. It’s an aim coded into the basic frameworks of social media — a technology that has socially deskilled millions of people and conditioned us to be alone together in the glow of our screens. Now, dependence is coded into a product that represents the endgame of late capitalist alienation: the chatbot. Rather than simply lacking the skills to bond with other human beings as we should, we can replace them with digital lovers, therapists, creative partners, friends, and mothers. As the resulting psychosis and social fallout amassed, OpenAI tried to pump the brakes a bit, and dependent users lashed out.
ChatGPT and its ilk may yet be worse for humans than social media as such. The explosion of anger from the, ah, A.I.-soulmate community comes on the heels of a series of increasingly difficult-to-ignore reports of chatbot-induced delusion, even among people not otherwise prone to psychosis. But even if L.L.M. chatbots are meaningfully worse for their users’ mental health, they also follow in the fine Silicon Valley tradition of delusion-amplifying machines like Facebook and Twitter. The extent to which social media can reinforce or escalate delusions, or even induce psychosis, has been well documented by psychiatrists over the last two decades, so it’s hard to say that ChatGPT is anything but “normal” in this particular sense.
Even the features designed to combat ChatGPT abuse--“gentle reminders during long sessions to encourage breaks” and “new behavior for high-stakes personal decisions,” announced by OpenAI two weeks ago--slot into a long tradition of “healthful nudges” like TikTok’s “daily limits” and Instagram’s “Take a Break” reminders, deployed by social platforms in response to public sentiment and critical press, and listed by John Herrman here. Indeed, the most obvious evidence that L.L.M.s are “normal” is that each of the dominant social-platform software companies is happily training and releasing its own models and its own chatbots, which they all clearly believe fit cleanly within their existing businesses. Meta seems to be particularly focused on romance-enabled chatbots and meeting what Mark Zuckerberg has identified as “the average person[‘s] demand for meaningfully more” friends, and Reuters’ Jeff Horwitz recently published excerpts from the company’s A.I. ethics policies (which Meta says it is in the process of revising):
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.” […]
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
It is hard to see how limning the boundaries of automated “sensual chat” with vulnerable preadolescents will lead to college graduates getting jobs in space by 2035. But it’s very easy to see how the Facebook of 2015 got from there to here. Pushing your business to exploit social crises of which it was a significant driver by deploying dangerously tractable and addictive products with few consistent guardrails is wildly cynical, misguided, pernicious, and depressing. It’s also, unfortunately, extremely normal.
Great essay. It's important not to conflate the broad (and almost meaningless) marketing term "A.I." with "ChatBot" and "Addictive Attention Business Models." And the same goes for "Silicon Valley" - Silicon Valley, a.k.a. tech companies, is not only about chat bots and social feeds.
ChatBots, Social Media, Scrolling Content Feeds - these are types of software products that happen to incorporate AI, i.e. Large Language Models. And to the points in the essay, the outcomes aren't great. But these software products would have these same problems with or without LLMs (as they had already been incorporating other AI tech like machine learning for a long time).
LLMs are fundamentally unreliable but are great at what they are great at (learning structure & translating between English & structure, recalling from the content they've been trained on, etc).
And so where this technology will definitely be impactful is when tech companies figure out how to incorporate LLMs into software experiences behind the scenes, with no chatbot interface at all. LLMs are a next-generation way of organizing data that opens up whole new possibilities - and they are made possible by advances in cloud infrastructure that let us actually train models on all the world's knowledge.
If the business models are built around automating things, knowledge discovery, and creating positive-sum economic value and authentically useful things for society - then this should be a great thing, just as databases, created in the 1970s, have been a great boon to how we organize our information.
I think this essay makes the very important point that we need to stop the 'race to the bottom' of attention-addiction optimization products; there needs to be some regulatory backstop. But we should not assume the chatbot is the final form of this tech. OpenAI doesn't yet have a business model that works; they aren't actually making any money off ChatGPT. And so it's an important warning: if they gravitate to "we'll sell ads," then that would be something we all want to rally to avoid.
Thanks as always for the insightful & thought-provoking essays. And I love your term "software-bound"!
That last quote block and its linked article are horrific. The clinical way they describe "this amount of sexualizing children or casually promoting racism is A-OK, but THIS amount is a little too far" ... I'd be curious to talk with the people who write these standards, see how they arrive at their conclusions, ask how they sleep at night, &c.