I had an article in the New York Times last week under the headline “Is A.I. the Greatest Technology Ever for Making Dumb Jokes?” I encourage you to go read it on the site, because what makes it worthwhile is the elegant interactive component that the excellent designers I worked with at the Times cooked up, like this Cartesian plane on which examples of A.I. innovation are plotted against two axes:
Here’s the description from the article:
Perhaps instead we should imagine A.I. possibilities on a two-dimensional plot, where one axis runs from “machine stupidity” to “machine intelligence” and the other from “human stupidity” to “human intelligence.”
Scientific leaps — like physicists’ developing A.I. to control and shape plasma inside a nuclear reactor — would be in the upper right, since they rely on both human brain power and advanced machine learning.
The customer service A.I. that offered a milkshake recipe in lieu of a package tracking number slots squarely in the lower left, where human incompetence and machine unreadiness intersect. […]
The lower-right and upper-left quadrants cover most of what the public has found so engaging about new generative A.I. apps. These quadrants promise neither spiritual transcendence nor existential doom. They are often enlightening and impressive, but also funny, pointless and gleefully stupid.
They are what we might call — using the bowdlerized rendering of an unpublishable, extremely online idiom for “making dumb, purposeless jokes” — the Funposting Zone.
The basic idea of the piece was to sidestep the apocalyptic-messianic framework that tends to get placed around recent advances in “generative A.I.” in favor of looking at the actually existing uses of A.I. apps that have captured the public (as opposed to, say, the investor) imagination. These uses are generally not particularly lucrative, ground-breaking, or even, necessarily, impressive. They are--and we can use the proper term here on Read Max, among mature friends--shitposts.
“Shitposting” as a concept originated on the message boards of the ‘00s, where it generally described low-quality, low-effort posts that (often intentionally) derailed discussion or drowned it out. Its meaning has expanded somewhat, to include really any kind of (as I say above) stupid, purposeless joke. It is one of the core online activities, alongside shopping, making political compasses, and googling “[celebrity name] feet.”
In general, to the extent that I still have some level of affection or hope for the internet, it lies in shitposting--the extent to which even on highly professionalized and sometimes lucrative platforms there is still space for normal people to be willfully and creatively stupid, obtuse, incomprehensible, and unproductive, and to find other people who respond to that kind of open-ended, non-professional creativity. I won’t get full 1990s culture-studies professor here and tell you that skibidi toilet is actually a political act of resistance against surveillance capitalism, or algorithmic control, or whatever, but I certainly think “gmod videos of a dystopian future world controlled by toilet beings have become extremely popular” is a suggestion that there is still some level of unexpected creative life even in a stultifying era.
The strength and vibrancy of A.I. shitposting, then, is the thing that allows me to remain A.I.-curious even as Sam Altman and his peers do their best to convince me otherwise. As I note in the piece, sometimes the mood around generative A.I. recalls early days on new social networks--at least, the way those used to feel, before certain roles, behaviors, and expectations had been established and calcified--when a huge portion of the activity was experimental shitposting. This is the other reason to honor shitposting--not just that it represents nonprofessional, non-useful, non-aspirational human creativity, but that it’s a way of exploring the limits and capabilities of a system, whether a social-media platform or an A.I. app. I have loved Janelle Shane’s “A.I. Weirdness” blog for a long time for exactly this reason--it’s much more engaging (and “generative” in a few senses) to “break” large language models in funny, social ways like inventing new colors than it is to test them on difficult problems of logic and memory, even if the latter is better for scientific-method reasons.
Unfortunately, as Shane observes, her particular vein is drying up, not just as the models get “better” but as their makers get more and more nervous about blowback (which threatens their power and some presumed payday) and limit responses. This is also a familiar dynamic: As social networks become businesses, and also mini-economies of their own, the shitposting tends to become anemically professional. What was once creative and experimental becomes rote and expected when real money is suddenly at stake.
This is one of the more abstract losses of the platform era, but a real one all the same: it’s sad that professionalization slowly kills all the fun we have on the internet.