Discussion about this post

Martin Reznick

I'm not on Bluesky or anything, so I don't have perspective on some of the blanket AI dismissals, but I am a tech worker and am very much on the Luddite side of things with respect to LLMs. The Luddite view is frequently mischaracterized as "AI is fake and it sucks," and that's not right at all. But this characterization serves a purpose insofar as it allows the Casey Newtons of the world to sidestep the actual objections entirely. These objections range from the obvious (that the technology only works reliably in very limited domains, or where accuracy & reliability aren't that important) to the more subtle (that whatever productivity gains AI does produce will be fully captured by the capitalist class).

Klein's interview with Buchanan is telling. Klein pushed him pretty hard on why no one on Buchanan's watch really seemed to plan for the social disruption that would occur should some sort of AGI emerge soon. I think the answer to this is obvious: neither Buchanan nor the people he worked with really believe that a world-changing AI will actually happen. But it never hurts to make a bold prediction: fully self-driving cars in 2016. No, wait, 2019. Now... who knows.

Instead of fully self-driving cars, we got Teslas covered in cameras constantly running in "sentry mode", surveilling everyone and everything. *This* is the Luddite objection. Hype, promises, massive over-investment, extra surveillance, cars unsafe for pedestrians, and massive asset price inflation for the benefit of the few.

There is also some conceptual slippage going on: when I, a Luddite, say AI is fake, I refer to the AI industry, whose upside-down economics are premised on the magical thinking that LLMs will lead to AGI, loosely defined. Or, on another view, the AI industry is essentially a fossil fuels & data center political economy play where the actual models are sort of secondary.

It's self-evident that there is real technology there, and it can be passingly useful for people in some ways. I used it the other day to help me figure out what to wear for an event and to brainstorm about a project I am working on. Handy! I couldn't imagine paying actual money for these conveniences, still less for the ability to "vibecode" a recipe generator based on an image of my fridge. The lack of a killer consumer app, the lack of corporate uptake of OpenAI integration apart from some big consulting shops, and the complete commodification of LLMs in general make the industry pretty fake, even if people can get a bit of mileage out of LLMs in their everyday lives.

I'm not even overly frugal by nature; I pay for your content!

Kalen

I've adopted an AI mantra of sorts to smooth out the highs and lows of being, as a technically-minded person, interested in and every once in a great while aided by LLM-type things, and horrified/frustrated at the centrality of the discussion, the costs, the claims: 'it is interesting that computers can do that now!'

Because, well, it is! Making new text that is like old text in interesting ways but different in interesting ways is interesting! It was also interesting when computers were bad at chess and then were good at chess, and when pictures made on computers in movies went from looking bad and niche to looking pretty good (and then often bad again). It was interesting the first time a computer accepted a voice command (in the '60s) and a robot car drove across the country (in the '80s) and the first time someone had weird itchy feelings about a chatbot (in the '70s). It was interesting when you could push buttons and math answers came out! Computers are interesting! Sometimes they are useful!

But also, I dunno, *they're just computers*. The world filling up with computers has not radically bent its economic trajectory from the Before Times, because they mostly do dumb things, and like anything they can mostly do things that don't take a lot of new figuring and waiting for happy accidents, and maybe they do them a little better (or actually a little worse, because they're being driven to market by enormous piles of capital that can Do Things). And there are profound incentives for the bored pile of money, with its chip foundries so expensive they need to be kept in operation like Cold War shipyards, and the piles of data the wise were saying they probably shouldn't be collecting even if the reasons seemed thin, and the simple fact of being excited and blinkered by your work, to say that it's All Over, whatever that means. The fact that 'it is interesting that computers can do that now' places them firmly in a pantheon of things that have often been not super useful, or bounded, or premature, or misunderstood, or, in the end, not that interesting.

Like, the chatbots are making fewer mistakes and seeming more like a search *because they're doing search.* Summoning up a surface summary on Madison et al is interesting, but also, it can do that *because Wikipedia is sitting right there* with ten-page surface summaries.

The killer app for LLMs is text transformation, full stop. That's really neat! It's neat that my mess of poorly formatted notes, cut and pasted from ten places, is now formatted. It's neat that something can pull the topic sentences of ten papers and put them in one paper instead of looking at them all in different places on the front page of a search. But also, these things in some ways don't seem that surprising if you said 'I took all the public-facing documents in the world (and a few we stole because those are so bad) and had a computer look at them a billion times.'
