Why I cover A.I. the way I do, good obscure action movies, and other questions from readers
Mailbag 12/08/23
Greetings from Read Max HQ! This week’s newsletter is a mailbag, as foreshadowed by Monday’s call for mail. I’ve picked out a handful of questions from email and comments and answered them below--as always, I had more questions than I had time or prudence to answer, so I’ve saved some for future editions. Today’s questions include:
Why does The New York Times cover A.I. the way it does?
What are some good obscure action movies?
Will humans have relationships with bots in the future?
And more!
A reminder that I’m still soliciting links for Read Max’s third annual “Year in Weird and Stupid Futures” newsletter: Send me your favorite stories from this year that reflect how strange, funny, fraudulent, shoddy, unexpected, and weird the future is. For examples of the kind of stories I mean, here’s the feature for 2022 and the one for 2021. As above, leave your links in the comments below or email them to me at your leisure!
Finally, a reminder that Read Max is supported entirely by paid subscriptions. If you like what we’re up to and want to see more of it, please consider throwing us a few bucks:
Now, the questions:
Here’s my question although it’s tiresome. How come the New Yorker, Times etc do fawning AI coverage where the journalists go to Algebra skool and accept Hinton’s premise that Gawd tier AI is inevitable in our life?
Meanwhile, why does your vaunted newsletter, Charlie Warzel (Atlantic), and even Nilay Patel have an occasionally skeptical eye regarding “AI”?
I’ll admit I’m weak and often vacillate betwixt utter terror and ennui. I understand the past hype cycles have been stock-driven scams (full self driving etc), but, respectfully, how do you stay nonchalant when Mike Isaac or the heavily fact-checked New Yorker goes kind of the other way?
I would never impugn you by calling you contrarian or reactionary against AI hype; I just honestly don’t know what to think.
Finally, it’s really weird to me that Larry Page just disappeared and a lot of this was born from their big drama fest.
-- john
I don’t think this is a tiresome question at all! Or maybe it’s just that I’m tiresome, and tiresomely interested in questions about how and why media outlets arrive at coverage decisions.
Let me first challenge some of the premises, though. Is The New Yorker particularly fawning in its A.I. coverage? When I think of New Yorker pieces on A.I. I think of Ted Chiang’s essay “ChatGPT Is a Blurry JPEG of the Web,” which was skeptical of A.I.’s power to a fault. And while I agree that a great deal of Times coverage of A.I. has been overly credulous, I think overall it’s more of a mixed bag than you might imagine, and Mike Isaac (who is, full disclosure, a friend) is one of the paper’s more sober-minded reporters on the subject.
Something that’s often underrated by people who haven’t worked as journalists is the extent to which newspapers and magazines (and cable channels, etc.) are institutions like any other--which is to say riven with contradiction and internal tension, populated by cross-pressured workers and managers with competing desires and incentives. Sometimes stories are written and published for righteous Hollywood reasons like “newsworthiness” or “truth,” sometimes for cynical reasons that savvy commentators like to cite, like “ideology” or “commercial concerns,” but coverage decisions are also just as often driven by prosaic concerns like “careerism,” “laziness,” “stupidity,” “accident,” “trying not to get fired,” and “need something to fill a particular spot at a particular time.” Most often, obviously, stories are commissioned, assigned, reported, written, edited, and published for a volatile combination of all of the above reasons.1
So when you see a story reach the front page of The New York Times, it may be that the story is genuinely the most important in the world. Or it may be that the story is extremely likely to draw a larger audience. Or it may be that the reporter writing it or the editor assigning it or the editor placing it on the homepage knows that the story’s subject is particularly important to their boss, or their boss’s boss, and that assigning it/writing it/prioritizing it will advance their career.
I say all this as a way of answering the topmost part of your question: Some reporters and editors at prestige publications like NYT and NYer do fawning A.I. coverage because they genuinely believe in the facticity of what they’re writing, but also they do it because they believe this particular framework for writing about A.I. will be popular, and also for all kinds of hidden institutional reasons--like a need to develop credibility with sources, or, say, because they work several layers below a publisher who has social ties to, or a particularly committed obsession with, the tech industry, and all of the managers above them are (semi-consciously) incentivizing this kind of coverage to advance and protect their own careers.
And the same is true at The Verge and The Atlantic and even little ol’ Read Max, which, while not precisely an institution, is certainly embedded and cross-pressured within a larger media ecosystem. I come by my opinions honestly, which is to say by reading a lot and then agreeing with the people who seem like they would be the least exhausting to have a beer with, but that honesty emerges out of my own pre-existing ideological commitments and is structured not only by the commercial incentives of this newsletter but also by various practical demands, such as the need to publish once a week. Often the answer to the question “why did Max argue this particular thing about A.I.?” is “well, it was Thursday, and I’d been banging my head against the wall for a week and just needed to get something up without thinking too hard about it.”2
Anyway, I think this also points at the answer to the final part of your question, to wit: I feel pretty comfortable disagreeing with some of the positions--about A.I. or about anything--taken up at respected and high-quality mainstream fact-checked publications because, well, I know that they publish a lot of stupid stuff for reasons that are often out of any one person’s control, and the fact of publication in (say) the Times is not in and of itself a reason for trust.
…but I suspect I’m not really answering the real object of your question, which is something like, should you be scared? Which, dude, I don’t know, I’m just a guy. But whatever you do don’t take the Times’s word for it.
It's been a great year, thank you for all the book recommendations!
Aegypt by John Crowley was especially on-the-nose for me.
Along the same occult-fiction lines I've always wondered if you've read or had thoughts on Alan Moore's From Hell. Or any of Alan Moore's fiction in general.
Happy Holidays!
-- Owen
Yes! The expanded Vertigo Comics creative universe of the ‘80s and ‘90s (I know Moore didn’t specifically write for Vertigo) is an important cultural touchstone for me. (I wrote a little about some of my favorites while doing a non-comprehensive re-read of a bunch of Swamp Thing and Animal Man comics last year.) I love From Hell, though I don’t think I have anything particularly smart to say about it, except that I think it’s probably Moore’s best work--one of those grim, fucked-up, occult-conspiratorial masterpieces that only Britons seem able to really pull off. I’ve never read his non-graphic novel, Jerusalem, though I’ve always wanted to; I did read his America’s Best Comics series Promethea, which I enjoyed as a Moore fan, but which is a bit more like a set of dry lectures about hermeticism than the hypnotic fever dream of From Hell. I’m going to assume that if you’ve read From Hell you’ve also read Grant Morrison’s The Invisibles, but if you haven’t and are looking for another hermetic-anarchist occult-conspiracy comic, it’s a total classic.
Do you think it’s inevitable that our future will include authentic relationships between humans and bots? What moral or ethical questions does that possibility bring up for you? How do we prepare?
-- Dizzy Zaba
I like this question because I think it’s a better way of asking about “A.G.I.” (“artificial general intelligence,” which sort of just means the same vague “sentient/conscious computer” thing that “artificial intelligence” used to mean before “artificial intelligence” started to just mean “machine learning.”)
In general I think A.G.I. discourse in the press and on Twitter--even among sophisticated practitioners in A.I. research and development--tends to speak a little too shallowly about A.G.I. as a kind of achievement or endpoint that can be gauged scientifically. But because A.G.I. has no universally accepted scientific meaning,3 it seems pretty obvious to me that it’s a social (and political!) concept, not a scientific or technical (or even philosophical) one. By that logic, imagining A.G.I. in the Frankenstein sense--a being awakened at the flip of a switch--seems wrong to me. [Last living humanities major voice] We will “reach A.G.I.” not when a model is developed that beats one set of arbitrary tests but when enough people agree that Computer Is Alive for it to become the hegemonic position.
Put another way, what is going to matter when it comes to “reaching A.G.I.” is not so much what the computer can do, specifically, but how we all feel about it. That’s why I like asking about “authentic relationships between humans and bots” instead of performance benchmarks: it refocuses the idea of “A.G.I.” on the social aspect of “intelligence,” which is, in this case, the most relevant one.
But I still haven’t answered the question, because … I don’t know! I suppose I don’t think it’s “inevitable,” but I don’t think it’s out of the question, either; humans have shown great flair over the past couple millennia for developing “authentic relationships” with all kinds of things that aren’t other humans--animals, gods, pop stars, nature, etc.--so why not bots as well? As the case of the Google guy who tried to free LaMDA shows, some humans already seem to have authentically felt (if not authentically reciprocal) relationships with bots.
In general I don’t think that “our complicated and ever-evolving social-societal relationship to animals” is the worst guide for thinking about how we might relate to A.I. in some distant future. Not, to be clear, because I think A.I. is or will eventually be “alive,” but because animals are already the precedent and model for how we relate to nonhuman cognition and communication. Do we have “authentic relationships” with our pets? Are those relationships “reciprocal”? Even without a real scientific consensus on the extent of sentience (or consciousness or intelligence) in most animals, humans have managed to stake out a diverse array of political and ethical positions regarding their care while also depending (on a societal level) on their exploitation and death in a vast factory farming apparatus. It’s not hard to imagine PETA equivalents for A.I. (arguably they already exist), nor is it hard to imagine people who--like too-intense pet owners--build relationships with bots from which they derive great meaning but which others find kind of alienating.
As for moral and ethical questions, I don’t know, that seems a bit above my pay grade. (My affection for the A.I.-animals metaphor notwithstanding, I would not join the A.I. PETA; lack of flesh-and-blood embodiment is a big hurdle for me to get over when it comes to caring about chatbots.) My questions are mostly regulatory, really, and they’re basically the same as the questions I have about LLMs and other machine-learning models in general: who owns the relationship bots? What are they trained on? Can we understand how and why they form their answers? The best way to “prepare” for A.G.I. (or whatever is coming) is to keep its development and deployment under open and transparent democratic control.
Years ago, I read The Traitor Baru Cormorant by Seth Dickinson after you mentioned it on one social media feed or another, and I loved it. Wondering if you ever read Dickinson's followups, Monster and Tyrant, and what you thought about the change in style, pacing, structure, etc.
-- Peter
I recommend The Traitor Baru Cormorant to all fantasy enjoyers I meet; it’s one of the best SFF novels of the 21st century. My admiration for the sequels is somewhat more tempered. They retain two of the best things about Traitor--the original and inventive world-building (I love the Cancrioth) and the commitment to an at least semi-materialist political economy--but they lose the briskness of plot (and, in a funny way, the intensity of stakes) that makes the original so memorable and gripping. Bluntly, I wonder if they should have been edited down into a single book, though on a practical level I’m not sure how that would’ve been done--either way, it’s very hard to blame Dickinson, who I think is a pretty singular talent, for falling prey to the same sprawling-world problems that plague almost every fantasy series. He’s got a new sci-fi novel coming out next month; I’m very excited for it, and I hope the break from the Cormorant universe (assuming he took one) has been rejuvenating.
Dear Max, My father-in-law only watches capital-A Action movies, heavy on explosions and aggrievement, light on dialogue. He affectionately refers to them in the household as "boom-booms." Things like your Bourne Identities, your Missions Impossible, Luc Besson joints, John Wicks, Matrices, etc. The problem is that he's seen damn near all of them and won't re-watch anything, even if he last saw it 30 years ago. Do you have any obscure Boom Boom Recommendations?
-- Wintermute
I’m not sure if these qualify as “obscure,” but if he (and you) have already made it through the big-studio Hollywood boom-booms, I might start looking at the world of direct-to-video boom-booms? The guys you want to check out are the director Jesse V. Johnson and his muse, Scott Adkins, who make nicely paced, not-too-cheap-looking, bone-crunching shooters and martial-arts movies--in particular Avengement, the Debt Collectors series, and the WWII thriller Hell Hath No Fury. (I hear his new historical epic Boudica is pretty good too, but I haven’t seen it yet.) Another director to look for is William Kaufman, who makes these reactionary little military movies (like the name-only sequel Jarhead 3) and cop thrillers (I’ve read decent things about his new one, The Channel). Also in this vein: John Hyams’ legendary Universal Soldier sequels, which are beloved by DTV action and sci-fi perverts.
Of course, depending on his tolerance for subtitles, you may be even better off looking to Asia--has he watched through the classic John Woo and Johnnie To movies? Or The Raid or Ong Bak? I have been wanting to get into Japanese V-Cinema (Japan’s equivalent of DTV) for a while, too--supposedly Hydra and Bad City by Kensuke Sonomura are pretty good.
Most of my HS students have fully shifted to group chats and/or discord as their ways of communicating both with each other and with the world at large. While they obviously still use Instagram (often in DMs, essentially a messaging app) and TikTok, they are using them less as a means to socially interact and more as a passive stream of content. As Twitter and Facebook continue to die off, it seems like this is a return to a more fragmented web of the aughts. How will this impact both the nature of the web as a thing and culture at large? Is it even fair to create this periodization to begin with? And what server or groupchat will spawn this Internet's goatse?
-- Constantly LARPing
This is the kind of question that probably requires a full post to think through, and even then I wouldn’t be sure of my answer. I do basically agree with your periodization, though I’m still on the fence about what the dynamic actually entails--I’ve written a little bit about “friction” as a quality of social networks and other community spaces, which I sometimes think is a slightly better way of articulating the shift than open/closed or centralized/fragmented. I think the thing I’m interested in tracking for the next while is whether increasing friction/fragmentation/closure slows the spread of internet culture and message-board dynamics into real-world processes and institutions, if that makes sense. I am still shocked at the way Twitter swallowed and subsumed a huge amount of elite media and politics in the 2010s, and I wonder if that kind of thing can still happen when there’s no central depot for interaction in the same way.
Thank you for joining us for this mailbag--if you read all the way through, presumably you enjoyed what you were reading, or are a weird masochist--either way, please consider subscribing! Despite all appearances, this newsletter is a full-time job, supported entirely by paying readers, and every subscriber makes a difference.
I frequently recommend this old blog post by Tom Scocca about what real “editing notes” would look like for a magazine as a way of getting at what I’m talking about. The main thing to understand, and that Scocca gets across so well, is that while a magazine might project a kind of bulletproof perfection--every story the product of an assiduous and painstaking reporting, editing, and fact-checking process--in practice nearly every story you read is an improvised and often deeply unsatisfying last-minute compromise between the publisher, the top editors, the story editor, the fact-checkers, the copy-editors, the art department, and the writer.
Plus I use the newsletter as a means of working out ideas and arguments, so sometimes I will take particularly strong positions just to test their mettle, and I will often contradict myself.
It’s true that there are a number of proposed benchmarks for judging “A.G.I.,” but the very fact that there are several of them, competing for acceptance, should make it clear that “intelligence” is a social (and political!) proposition, not a scientific or technical (or even philosophical) one.
I get the sense that in most major publications you have the AI skeptics, the doomers, the true believers, the centrists, right? The New Yorker also featured “There Is No A.I.” by Jaron Lanier, “What Kind of Mind Does ChatGPT Have?” by Cal Newport, and “The Age of Chat” by Anna Wiener, all of which are really thoughtful articles that break down the anthropomorphic spell that AI casts over us.
But yeah, it rankles that the NYT lists Eliezer Yudkowsky (a Harry Potter fanfiction writer hailed as a genius by his followers, a laughingstock by others) as a major influence and not, say, Timnit Gebru or Emily Bender or ANY women.
The commenter Angie Wang is being incredibly modest, because she herself created one of the most thoughtful "essays" you'll read about AI, in the form of a gorgeous comic published in (wait for it) The New Yorker. I cannot recommend it more highly.
“Is My Toddler a Stochastic Parrot?”: https://www.newyorker.com/humor/sketchbook/is-my-toddler-a-stochastic-parrot