Discussion about this post

Samuel Garfield

> One well-rehearsed account blames A.I.’s stylistically uninteresting output on next-token prediction: Large language models, this argument goes, intrinsically cannot generate truly great writing, or truly creative writing, because they’re always following paths of less resistance, and regurgitating the most familiar and most probable formulations.

Worth pointing out that this is not necessarily true. One key parameter you can control at inference time is "temperature", i.e., how much the output may randomly diverge from the most likely next token. All output from consumer chatbots uses a non-zero temperature, so it isn't strictly the lowest common denominator.
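To make the mechanism concrete, here is a minimal sketch of temperature sampling (function name and example logits are my own illustration, not anything from the post): logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the most likely token and high temperatures flatten it.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits scaled by temperature.

    temperature -> 0 approaches greedy decoding (always the argmax);
    higher temperatures flatten the distribution, making less likely
    tokens more probable and the output more varied.
    """
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw from the resulting categorical distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At temperature 0.1 this almost always returns the highest-logit index; at temperature 2.0 the other tokens get sampled noticeably more often, which is the knob the comment is describing.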

An interesting literary experiment that, to my knowledge, no one has tried is seeing how different temperatures shape creative writing. I bet there's a sweet spot at a higher temperature than what's exposed to end users in chat interfaces (where you want to keep the likelihood of going off the rails and hallucinating low).

Nina Eichacker

I really love this piece, and think you’ve done a really good job teasing out the demand and supply side aspects of writing and AI writing. I just finished AS Hamrah’s The Algorithm of the Night, which talks a lot about the supply problems he identifies in the film industry, and this interplay of demand and supply of “schlock” vs “art” vs anything in between is front of mind for me right now. Thank you!

