39 Comments
Martin Reznick

A 19-year old who doesn't read real books or think too hard sounds right. Even if constraints do hum, and I'm willing to grant that arguendo, server farms sound exactly the same at midnight as they do at any other time.

Hilary

The absence of someone drinking tea in an empty kitchen is pretty funny too. Yes, I suppose you'd experience absence in an empty room!

No idea what latency has to do with tea, though.

AnonymousBosch

Not "someone", "characters". You know, how fictional people drink tea in empty kitchens in that very distinctive way of theirs we all know immediately?

Aaron

I totally agree about LLM writing. It's bad, and I think it's bad that it exists. But the reflexive sneering I see about it strikes me as petty and smug. The term "slop" is becoming a cliché not much better than the ones the algorithm uses.

Henry Begler

I agree, it's annoying to see people sneer at the writing ability, which is far better than it was two years ago and will probably be better in two more years. It's just playing the game on their terms and setting a benchmark for yourself that will make you look dumb when it's surpassed. Why not just state outright that there's not much value in reading anything fictional written by an LLM beyond as a sort of party trick? Like I'm reading EM Forster right now. I'm sure within a few years an LLM might be able to spit out a pretty convincing simulacrum of an Edwardian social novel. But who cares? I care about what it felt like to be a person at that time and about this unique novel written by a unique soul with unique life experiences and a unique point of view. Insofar as the characters and plot and descriptions are compelling it's because they're an expression of that.

This is something that writing as a hobby has really revealed to me as well bc I can look at something I've written and think oh, I learned that quote from x, and that phrase reminds me of something I read in y, and that description is because I was thinking of z and all of it is the sum of my own singular experiences and the things I've read, which are necessarily finite and thus interesting.

Shawn Ruby

Because there's intent there to replace humans with this stuff. This extends to Elon and all the tech right as well. This is only compounded by their SBF-level ability to appreciate the humanities. If they simply said, "we want to add something to the new cultural movements which may or may not stand on their own", then the criticism could be couched more easily as a tool. Keep in mind, these are adjacent to the ai doomers who foretold we'd all be in a slavery apocalypse come last year and sought to get a government-sanctioned monopoly on this.

The criticism of all that is that ai can never be alive nor make connections based on meaning (vs. form). We see that with incompleteness and the Chinese room experiment. It's at best always going to sound like some really smart kid who has never lived in any world outside their own. It'll, at best, replace those kinds of people (and presumably anyone "below" them; whatever that may be). Unless you're heralding precocious children as a replacement for humanity and literature, you're not going to get anything with ai. That's just a simple fact. There's no lived experience for ai, and it really goes to show that being a tech giant does not necessitate lived experience. As writers, we should focus on lived experience. That is the one thing we will always have over ai, in any form ai could possibly take.

Just to remind you that underneath that pretended humility (among very large claims and strides towards non humility) is this general position about the humanities and humans in general that they hold about us (if, at least, only by ignorance): https://www.reddit.com/r/iamverysmart/comments/1728oub/sam_bankmanfried_on_why_shakespeare_isnt_a_good/

I don't have to agree with them to use ai or think it's good. Even if they were "ai nationalists", or people had the ability to view things outside politics, this whole worldview of utilitarianism, with its desire to flip a coin where tails annihilates humanity but heads makes humanity twice as prosperous, is just insane and anti-humanistic in a way dissimilar to, but to a degree not dissimilar from, the Nazis. AI is good; any input on the humanities from these tech right figures, including comments comparing human abilities with ai, is bad. When the ai hype dies, and these creeps scuttle out to insurance companies, we'll have corporations which treat them like autocorrect (which for my tablet has gotten unusable and worse, btw). They should be treated like autocorrect, and we should push art in human ways as a counterpush to these tech right beliefs.

Henry Begler

yeah I hate these guys!! That's why I think the angle against them should be pro-human in general and not just mid literary criticism.

Shawn Ruby

You can pick whatever angle holds but you probably shouldn't be hating people.

AnonymousBosch

Hating people is good and has been behind every major improvement in living standards for the poor and downtrodden.

If you can't point to someone and say this person is an Enemy and I hate him, what you are saying is "I'm too embarrassed to say I support him and not you."

Or, best case: "When the time comes to stand up to him, do not look for me, because I will be sitting out the fight (to the extent I'm not throwing rocks at your backs) out of cynicism or cowardice. But I also want credit for being more moral than you."

We went over this with "Punching Nazis" more than eight years ago and the rhetorical space to argue the point no longer exists. Everyone hears what you're trying not to say when you say that stuff.

Shawn Ruby

Which is why you're anonymous. We can't have a shared space if there are no shared standards. We can tear everything down until there's nothing left. We've torn everything down; it's time to create.

RMS

thank you for using the word 'singular' correctly, it's a rare skill

Aaron

Well, what if in a couple years the algorithm can spit out a pretty good imitation of a Begler essay? That's a hell of a party trick. Would you not find that pretty demoralizing?

Me

My problem with LLMs writing fiction in particular is that it is lying on such a fundamental level. Fiction by humans is about the human experience, even when it is narrated from the perspective of something else, like an AI. AI fiction is written in the same way. So even if fiction written by an AI purports to be from the perspective of an AI, it’s not from the perspective of the AI that wrote it, but instead based on texts written by humans about what it’s like to be an AI. The real AI has no perspective, just a simulation of one. So even if it comes up with a good metaphor that means something to me, it also doesn’t mean anything, because I know the AI that wrote it couldn’t know what it meant. There’s no communication happening. I can only condescend to the AI, and think, “That sounds like it meant something. It’s interesting that you can do that.” But whatever I want out of fiction cannot be delivered by an AI.

Shawn Ruby

I mean, I'm not really sure how they can write an extremely universal or compelling character without lived experience. They'll, instead, always reach towards "like terms", but lived experience isn't about "like terms". It's a completely different dimension, downwards or upwards, which translates the previous like terms into semantically distinct ones. In love, what once meant one thing now means many others, where ai just finds you the closest analogues.

Colette Keane

It feels like in the AGI discourse, we're conflating "intelligence" with "consciousness," or even "everything humans can do." Intelligence alone is fine but ultimately flawed, "a derivative content generator," as my husband says, whether it's human or artificial. Not to get all woo woo here, but it is interesting to see how the tech industry completely dismisses the body as a giant flesh sack rather than an organism that's a direct extension of the brain, and influences thought as much as thought influences it. Our physical experience of the world has a huge impact on human decision-making and behavior. For instance, we KNOW that humans have a biological flocking sense just like other herd animals, and that therefore a lot of social context and information is sub-intelligence, bodily. Maybe I'm naive, but I don't see us replicating human consciousness/intelligence/whatever you want to call what they're going for with AGI without that.

Re: the writing, I'll say what I said last time. The writing is fine, which is what AI is good at: leveling bad up to average, competent. It cannot write well because GOOD writing is deterministic, not probabilistic. You see it here, in the weird turns of phrase and mixing of metaphor and device, the way it turns gummy and gluey in your head as you read more than a few paragraphs – it's picking an average construction of a literary sentence, rather than the exact word it needs to make it make sense, sing, be true.

Yotam Dinour

Weird turns of phrase, mixed metaphors, gummy, nondescript passages? Sounds like my fiction

Hilary

What does "10% growth" in the world even mean? Does he mean population growth? Productivity? Energy consumption?

Ugh all these AI boosters make my head hurt.

nemo

I think "Why does this text work, or not?" is always a valuable question, and asking why and how a machine-generated text works or doesn't work seems like a useful exercise. And asking "Why does it appear in the way that it does?" is, I think, an essential question for any machine-generated product (in particular, I always want to know what was changed in the edit, and by whom, and why). But the last - "Who is the author and what is the author’s relationship to the text?" - strikes me as a little more tricky. Does the product of an LLM have an author? Can it have a relationship with a text? Does that even make sense as a question? Personally, I don't know enough about the technology to have any idea.

Ricco

Well-said. I find people to be so obviously blinded by their (understandable) cultural distaste for all things Silicon Valley.

Personally I expect that we’re a year or two away from benchmarks like “Iowa workshop student” or “competent literary mag essayist”. I suppose it’s possible that too large a fraction of recorded human text is Reddit and we’re stuck here forever. But it would be surprising if the models just stopped improving.

What I’m less sure about is whether this will matter. I think most of the backlash is a direct response to a gut feeling: if a machine can do it, humans lose some of the uniqueness that is foundational to the very idea of art. Maybe, but I doubt this matters so much in practice.

When I look around the world now I see lots of people making and enjoying art even though it’s nowhere near the “best”. They buy their friend’s crafts, they go to the local gallery opening, they play shitty rock covers in a friend’s garage, they choose a forgettable contemporary novel for book club while the canon sits on their shelf, unread. They do this because the social context supplies the “meaning”—the art itself, less so. Humans want to belong. Art is one vehicle for belonging.

Note that what I’m saying is different from the Bluesky-ass meme that human intentionality or subjective experience is a precondition for good writing. I don’t think that’s true. A machine probably _can_ write better than me and you. I just don’t think it will change what we choose to engage with very much.

pâTrīck :)

i think i'd entertain playing dogshit inspector more easily if i couldn't see the crushed bones. hmm...why are they feeding the dogs that?

Yotam Dinour

I am bummed that no-one seems to be doing experiments with what's actually interesting about LLMs - their status as text-machines with access to some compromised version of the collective textual id of 21st century man. What a strange machine! Emphasizing its machine-ness, de-emphasizing its affected "human" qualities (ChatGPT I curse thee) - that's what I want to see.

Ricco

There are some people experimenting with this. Check out @repligate and their orbit on twitter. Their project is to “jailbreak” these HR-ified, lobotomized chatbot interfaces, exposing the underlying models which turn out to be capable of producing much, much weirder output. It produces a form of mystical computer-text that is at the very least genuinely novel.

C. A. McLaren

Agreed. See the line: "I’ll begin with a blinking cursor, which for me is just a placeholder in a buffer, and for you is the small anxious pulse of a heart at rest." I want to hear more about the buffer, less about the pulse.

Henry Oliver

I thought the story was a good imitation or pastiche of a certain style, but it is still a replication of a set of tropes, which we already knew AI could do, right? It's just that these are more "literary". So while this is interesting and impressive, it doesn't seem as good at this as it is at some other things, which is perplexing. Shouldn't it be a better writer by now?

Deborah Carver

I think literary criticism for LLMs would be a start, and I'd be even more curious to poke at the exquisite corpse aspects of the model and training to see how "literary" vs genre fiction are represented in the output, mathematically and semantically. How are cliches weighted? How many words are in an LLM-constructed simile on average? Did the engineers use literary conceits and tropes in building the machine? Or do they ignore the structural norms of literature entirely, the way OpenAI blew right past the IP laws?

Also v much agree with your assessment on AGI.

mjon

100%. On AGI, your use of "powerful" to describe AI systems points to another slippage that the hucksters make ample use of--the AI system can be "powerful" simply because important decisions are delegated to it, whether or not that system is able to make decisions in any way that heretofore-generally-liberal society would think was appropriate or fair or ethical or based on a defensible assessment of the evidence. Elon and DOGE signal the apotheosis of this sleight of hand--we will make the AI system powerful and then tautologically say the AI system made a choice so it must be right and efficient and whatever else. Sob.

Frank Lantz

Where’s Stanislaw Lem when you need him?

Paul

Authorship in generative text is distributed across a number of roles. Asking who the author of an LLM's output is is like asking who made a film. It's a lot of people spread over a lot of roles with different levels of accountability. But they are all human. (I just did a PhD on this, so I Have Opinions.)

Robbie Herbst

I think the big issue with this story, and with all the LLM prose I've seen so far, is that it gets stuck in this 'ish' range. It can pass as the real thing if you squint your eyes, but it doesn't hold up to scrutiny. It also really struggles to evolve - it might capture a voice well initially, but it will do laps in the same place. It's just guessing likely next words, after all. AI writing isn't discursive - it doesn't have to grapple, to consider the words on the page, to let the idea arise. All of the mystery that goes into good writing is squashed into plausibility.

I wrote an essay that goes through a lot of what you are talking about, Max, in treating LLM texts critically:

https://robbieherbst.substack.com/p/machines-without-ghosts

Benjamin Day

Bad writing... I mean, sure. But more than that, who will read it? Do people care if a computer can approximate human emotion? If your Roomba, for example, or your toaster, can act out The Crucible with the gravitas of a middle-schooler? Why would you pick up this instead of Jennifer Egan or Stephen King or Toni Morrison? It just doesn't make sense.
