It’s very clear at this point, as the crypto economy collapses around us, that the Hot Tech[1] of 2023 is “artificial intelligence.” The evidence is everywhere: Reporters and columnists like the Times’s top tech correspondent Kevin Roose have shifted their focus from the web3 future, now crumbling in the face of 4.5 percent interest rates, to the “new A.I. boom” signaled by generative A.I. applications like the image generator Dall-E and the chatbot ChatGPT. Microsoft is investing $10 billion in OpenAI, the company behind ChatGPT. Conservatives are beginning to whine that the software is biased against them, a sure sign of its prominence. Stanford weirdos are giving terrible, fake nicknames to San Francisco neighborhoods to loudly signal energy and excitement around the software.
Of course, Read Max, despite its unconvincing pose of savvy detachment, cannot be exempt from this “A.I. craze.” People will be talking about a new A.I. future, and to the extent that this newsletter has a mission besides helping me obtain health insurance for my family, it is to “explain the future.”
The problem is that I still don’t really know what to make of “A.I.” If I’m being honest, I’m not even entirely sure what “A.I.” is, besides a sort of tech-media hyperobject, like “web3” or “the metaverse,” a word that inserts itself almost at will into conversations, headlines, panel discussions, meeting agendas, anxiety dreams, etc. Are we talking only about the “generative A.I.” apps that have recently gone viral for their ability to create shockingly competent depictions of things like “Princess Bala from Antz in dirndl at Oktoberfest real life HDR 4K high-res huge honkers”? Do we mean the information-processing systems (facial recognition! early-warning systems!) that have overtaken bureaucracies over the last several years? Do we mean talking computers that think and feel?
In some ways the A.I. craze feels to me like the inverse of the web3 craze: Web3 had a relatively specific, widely shared vision of the future, but no consumer-accessible proof of concept or example; A.I. has a set of (without trying to overstate it) immediately impressive consumer-accessible baubles, but no clear shared vision of how the technology will develop or be implemented.[2] Where I feel like I was able to get a pretty good sense of web3’s whole deal using the sophisticated process of “just looking even once at a Bored Ape NFT,” A.I.’s whole deal eludes me. Is it cool? Fake? Scary? Stupid? Useless? Transformative? Faddish? Maybe all of the above?
Part of the project at Read Max for the next few weeks or months is going to be figuring out how we (I) think about A.I., or should think about it. (As a tech-media hyperobject, A.I. is a perfect subject for the flagship Read Max product, the Read Max Report: a hybrid explainer-syllabus-link portal-encyclopedia that contains all of the information a normal person needs to have about the subject, plus some information they don’t, written in a highly biased and idiosyncratic style that is somewhat, but only somewhat, annoying. Previous subjects of Read Max Reports include web3, FTX, and Twitter; to read Read Max Reports and support their production, please subscribe below.
The report will almost certainly be published, in traditional Read Max fashion, a week or two after no one is interested in A.I. anymore.) To this end, I’m trying to formulate a kind of research program, which for the purposes of this post I’m going to put forward as a series of questions that I personally don’t feel I have a clear answer to, and want to explore a little bit. I’m going to explain them below; for whatever it’s worth, these seem like questions any journalist or critic writing about “A.I.” should be asking as they write.
What is A.I.?
When we say “A.I.,” what do we actually mean? Apps that produce images or texts? Systems of prediction and control? Neural networks? Algorithms? (And what are those?) Are we talking about “actual” “artificial intelligence,” in the sense of “thinking,” or are we talking about computing-intensive regression models, pattern matchers, etc.?
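(To put the unglamorous end of that spectrum in concrete terms: here is a minimal sketch, entirely my own toy and not drawn from any real product, of what a “computing-intensive regression model” is at its core. It is curve-fitting, just scaled way, way down.)

```python
# "Regression model," in miniature: fit a straight line to a few points,
# then use it to predict a new one. Large "A.I." systems are enormously
# scaled-up relatives of this kind of curve-fitting. (Toy numbers, mine.)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])          # roughly y = 2x

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit
prediction = slope * 5.0 + intercept        # "what comes next?"
print(f"predicted y at x=5: {prediction:.2f}")
```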
How does A.I. work?
There’s an annoyingly obscurantist attitude of fear and awe developing around the current generation of A.I., often fomented and encouraged by the people developing A.I. software. You can see this in, e.g., viral threads about A.I. horror stories, but also in A.I. developers who like to play up A.I. choices as unknowable “magic,” and in certain kinds of aggregated stories about mysterious or inexplicable A.I. decisions. (“This A.I. invented its own language, and researchers don’t know why!”) I will admit that I have enough of an X-Files brain to find this kind of thing emotionally and aesthetically appealing. But I’m also enough of a spoilsport to feel immediately skeptical of any quasi-mystical claims being made about A.I.’s ability. A clearer sense of how “A.I.” (of whatever vintage) works makes it much harder to attribute esoteric motivations to its output. (And much easier to assess what A.I. is and what it’s doing in the larger sense.)
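In that spirit, here is a deliberately tiny sketch, my own toy and emphatically not how GPT-class systems are built (those use neural networks trained on vastly more data), of the statistical heart of “generative” text: count which words follow which in some training text, then generate by repeatedly sampling a plausible next word.

```python
# A toy "generative A.I.": a bigram model that picks each next word from
# the words that followed the current word in its training text. The core
# job is the same as ChatGPT's: predict a plausible next token, repeat.
import random
from collections import defaultdict

corpus = ("the model predicts the next word and the next word "
          "predicts the rest of the sentence").split()

# Count which words follow which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it and you get fluent-ish nonsense; the loop contains no motives, esoteric or otherwise. Whatever else scale adds, it doesn’t add a ghost.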
What is new that is making A.I. suddenly so hot?
The immediate answer to this question is, obviously, “Dall-E and ChatGPT are demonstrating advances in generative A.I. to journalists and other normie users.” And maybe it’s just as simple as that! But if nothing else it’s worth historicizing these recent developments to understand better: Why didn’t previous advances in A.I. tech create as much of a stir? What has changed in how A.I. is being developed, marketed, and covered to make it more attractive or interesting now?
What is the scene/culture around A.I./A.I. development?
One of Read Max’s specialties is vibes-based assessment of technological development; as referenced above, it was very easy to assess web3 as wack based on the extremely wack culture and scene that developed around it. But it will be hard to fully and rigorously assess A.I. until I’ve had a chance to explore the scene and culture emerging around A.I. development and determine if it is fatally wack, or extremely off-putting, or actually kind of cool, or whatever it is. What is “Cerebral Valley” like? What about “VibesCamp”? Am I going to have to follow a bunch of freaks on Twitter and tolerate their shitposts to understand the true nature of their desperate little souls? Am I going to have to (gulp) read LessWrong?
Who is making money off of this, and how?
At some point all of us are going to have to confront the fact that the CEO of OpenAI is Sam Altman, one of Silicon Valley’s most baffling titans, a man who somehow parlayed an extremely failed, not even interestingly stupid startup into a reputation (among executives and the kind of journalists who listen to tech executives) as a young genius, despite being reliably one of the dullest people in an industry not lacking for dullards.
Who is being exploited by this, and how?
How many people are on the other end of A.I. systems? What are they doing? How much of what we experience as “A.I.” is “fauxtomation”? How much of what is legitimately “automated” has been pre-arranged, selected, categorized, and tabulated by real humans? How much were they paid? How much work are we being asked to do ourselves? How much energy does A.I. require, and how bad is it for the climate? Who’s going to get put out of a job, and whose life is going to be made harder, stupider, more annoying, worse? (I actually know the answer to that last one, and it’s “all of us.”)
Is A.I. bullshit?
Is this technology as transformative as promised? (For better or for worse.) Is it going to change the world and make a few people zillionaires? Or is it (at best) a party trick or bauble? If it is going to change the world, is it going to do so by making everything harder, cruder, and more suspicious?
These are all live questions to me, and I’m interested both in what readers think of them, and in what kinds of questions, confusions, or reservations you all have about “A.I.” I’m also (I always am) soliciting reading recommendations for my own edification: books, articles, Twitter threads, YouTube videos, whatever has helped you understand “A.I.,” or at least helped you think it through. Drop me a line at maxread@gmail.com, or leave a comment.
[1] I mean “Hot Tech,” of course, in the sense of “media attention and speculative investment,” rather than, say, “global importance.” In the more rigorous sense the Hot Tech of 2023 is the integrated circuit, for the, what, 60th year running.
[2] Except to the extent that many A.I. investors and other anxiety-prone nerds seem to have a troubling shared vision of A.I. killing us all someday, for some reason.
Even before I understood ALL the dumb ways in which cryptocurrencies were awful, I had this watershed moment where I lost all interest in what had, before that, been hyped up to me as the tech of the future: the realization that all it ever was was money. Just speculation. And anything else being touted around it was just reputation laundering.
I had a similar moment with AI, when I understood that the main driver behind our collective interest in it is just getting people fired. That is it. If you can automate a task, you don’t have to pay people to do it. And that is why every startup needs to be AI-based: it means cheaper operational costs. At the risk of being overly reductive, it struck me really hard that, to a large or at least non-trivial degree, getting rid of workers is probably the biggest driver behind AI research. Not to have a “brave new world,” just a cheaper old & cowardly one.
My father was an accountant before the invention of Excel and the widespread adoption of calculators (which was not that long ago). He told me that everyone thought they would be put out of work by these inventions, and he was genuinely worried he would be laid off. This, of course, did not happen, and I believe “AI” going forward will be exactly as “disruptive” as calculators and Excel: there will be terrible things, and people who want to do terrible things will have some new tools, but it will not fundamentally change anything about humanity.