I think it’s fair to say that we’re in the early stages of an “AI bubble,” both in the sense of “financial bubble” and in the sense that “AI,” as a concept cathected with various hopes, fears, projections, anxieties, ambitions, etc., is expanding like an incapacitating aneurysm in the brains of many people in the tech and media business. In January, two reputable digital publishers announced plans to incorporate AI into their work, causing some understandable anxiety among writers I know. Was this the moment we’d look back on many years from now, standing in the decrepit waiting room of an ash-covered FEMA hydration camp on the edge of the Northeastern Arid Zone, attempting to bargain with the robot in charge of the prioritization algorithm, as the beginning of the end? Were the bosses finally replacing us with AI?
The first case of “AI journalism” was spotted at the venerable tech-news and gadget website CNET, now owned by the search engine optimization-focused media conglomerate Red Ventures, which acknowledged that since at least November it had been quietly using AI to generate SEO-driven articles answering questions like “What is Zelle and How Does It Work?” The setup, from what we can gather, is relatively simple: Presumably there is a list of trending searches that can be plugged into prompts -- “Write an article about [SEARCH_TOPIC] in the style of CNET,” or something -- from which a language model (it’s not clear which specific one) generates these articles.
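To make the presumed pipeline concrete, here is a minimal sketch of the kind of setup described above: trending search queries slotted into a prompt template and fed to a language model. Everything here is an assumption -- the template wording, the function names, and the model call (stubbed out below) are all illustrative, since CNET hasn’t disclosed its actual system.

```python
# Hypothetical sketch of a CNET-style SEO content pipeline.
# The template, names, and stubbed model call are all assumptions.

PROMPT_TEMPLATE = "Write an article about {topic} in the style of CNET."

def build_prompt(topic: str) -> str:
    """Slot a trending search topic into the prompt template."""
    return PROMPT_TEMPLATE.format(topic=topic)

def call_language_model(prompt: str) -> str:
    """Placeholder for whatever model Red Ventures actually used."""
    return f"[model output for: {prompt}]"

def generate_articles(trending_searches: list[str]) -> dict[str, str]:
    """Turn a list of search queries into draft articles, one per query."""
    return {t: call_language_model(build_prompt(t)) for t in trending_searches}

drafts = generate_articles(["What is Zelle and How Does It Work?"])
```

The point of sketching it is how little is there: the entire editorial operation reduces to a loop over search queries, which is why the output scales so easily and why nothing in the system checks for originality or accuracy.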
However, while this system can produce an enormous amount of content quite quickly, it cannot ensure that the content is original, or even correct. The articles were quickly shown to be riddled with bizarre errors and extensively plagiarized, problems that were apparently less of an issue to Red Ventures executives than the possibility that the disclosure placed on some of the articles might lead Google to rank the pages lower in its search results: “Disclosing AI content is like telling the IRS you have a cash-only business,” the company’s SEO director said in an internal message leaked to the tech outlet Futurism.
A week after CNET’s admission, The Wall Street Journal reported that Buzzfeed founder and CEO Jonah Peretti -- who laid off 12 percent of his staff in December -- had announced in a memo to staff that the company would start to use “ChatGPT creator OpenAI to enhance its quizzes and personalize some content for its audiences, becoming the latest digital publisher to embrace artificial intelligence.”1 Unlike CNET’s SEO-focused gambit, Peretti’s idea is for Buzzfeed writers to use AI to help generate personalized quizzes. At The Atlantic, Damon Beres describes Peretti’s pitch:
Instead, Peretti said, AI could be used to create “endless possibilities” for personality quizzes, a popular format that he called “a driving force on the internet.” […] Peretti offered the staff examples of these bigger, better personality quizzes: Answer 7 Simple Questions and AI Will Write a Song About Your Ideal Soulmate. Have an AI Create a Secret Society for Your BFFs in 5 Easy Questions. Create a Mythical Creature to Ride. This Quiz Will Write a RomCom About You in Less Than 30 Seconds. The rom-com, Peretti noted, would be “a great thing for an entertainment sponsor … maybe before Valentine’s Day.” He demonstrated how the quiz could play out: The user—in this example, a hypothetical person named Jess—would fill out responses to questions like “Tell us an endearing flaw you have” (Jess’s answer: “I am never on time, ever”), and the AI would spit out a story that incorporated those details. Here’s part of the 250-word result. Like a lot of AI-generated text, it may remind you of reading someone else’s completed Mad Libs:
“Cher gets out of bed and calls everyone they know to gather outside while she serenades Jess with her melodic voice singing ‘Let Me Love You.’ When the song ends everyone claps, showering them with adoration, making this moment one for the books—or one to erase.
“Things take an unexpected turn when Ron Tortellini shows up—a wealthy man who previously was betrothed to Cher. As it turns out, Ron is a broke, flailing actor trying to using [sic] Cher to further his career. With this twist, our two heroines must battle these obstacles to be together against all odds—and have a fighting chance.”
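Mechanically, the quiz format Peretti demonstrated is the same trick in personalized form: a handful of user answers get interpolated into a fixed prompt, and the model writes the Mad Libs around them. A rough sketch, with hypothetical names and a stubbed model call standing in for whatever OpenAI integration Buzzfeed would actually build:

```python
# Hypothetical sketch of the quiz flow Peretti demonstrated: quiz
# answers are interpolated into a prompt and the model writes the story.
# All names are illustrative; the model call is a stub.

def quiz_prompt(answers: dict[str, str]) -> str:
    """Build a rom-com prompt from a user's quiz answers."""
    return (
        f"Write a 250-word rom-com starring {answers['name']}, "
        f"whose endearing flaw is: {answers['flaw']}."
    )

def call_language_model(prompt: str) -> str:
    """Stand-in for the model call Buzzfeed would presumably make."""
    return f"[250-word rom-com for prompt: {prompt}]"

story = call_language_model(quiz_prompt({
    "name": "Jess",
    "flaw": "I am never on time, ever",
}))
```

Which is to say: the “personalization” is a template with blanks, and the Mad Libs feel of the output is baked into the design.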
Look: you can’t go wrong viewing the CNET and Buzzfeed stories as two examples of the same overall trend of media companies attempting to reduce labor costs through the imposition of AI tools. A good rule of thumb is to start from the assumption that any story you hear about using AI in real-world settings is, beneath everything else, a story about labor automation.
But I also think there’s a fundamental difference worth tugging at between what CNET is doing with AI and what Buzzfeed is proposing to do.2 A rough but symmetrical way of putting it might be that CNET’s goal in using AI is to replace writers wholesale and Buzzfeed’s goal (at least, so Peretti claims) is to help writers -- to give them a wider set of tools to draw on. This distinction isn’t just about strategy, but also about attitudes toward AI and paths that the introduction of AI to journalism and media might take: Is this tech best used to eliminate journalists, or augment them?3
This kind of framework is comforting, to the extent that it suggests an “ethical” and an “unethical” path forward for the use of AI. But looking at the actual application of large language models in each case does not make me feel particularly optimistic about the future of AI in media. Our example of AI being used “unethically” is an extremely successful industrial-scale SEO spam business; our example of AI used “ethically” is … expensively computer-generated Mad Libs.4
The framework also rests on the somewhat dangerous assumption that AI is now (or will be soon) capable of cost-effectively either eliminating or helping journalists. Buzzfeed’s AI strategy might seem less hellish than CNET’s, but whatever charm I can wring from the excerpt of its quiz output relies almost entirely on its garbled novelty, and it’s hard to believe that it represents a lasting contribution to the journalist’s toolkit.
As for CNET, The Verge reported that its “AI system was always faster than human writers at generating stories, the company found, but editing its work took much longer than editing a real staffer’s copy.” In other words, if you want to create writing that is useful to humans, you need to invest in a significant amount of human work somewhere along the line -- if not in the writing of the text, at least in the editing of it. Red Ventures, of course, chose to invest in neither -- but that suggests to me less that it successfully and economically eliminated journalists, and more that it shifted its business definitively from journalism into pure SEO spamming.
Apparently as a consequence of the AI announcement, Buzzfeed’s stock nearly quadrupled in two days. Granted, we’re talking about a stock that was trading at 95 cents last Wednesday and now goes for around two bucks. But its sudden rise suggests, if nothing else, that “AI” and related terms are right where “web3” or “crypto” were a year or two ago in terms of “buzzwords that, if used strategically in announcements, memos, and public interviews, will pump up your stock price.”
I’m going to keep using some awkward conditional constructions around Buzzfeed’s “AI” plan because (1) it doesn’t seem to actually exist yet beyond a strategically leaked memo that helped pump the company’s stock and earn it a lot of news coverage for a week and (2) there is absolutely no reason for anyone to trust media-company management where jobs are concerned. But for the purposes of the intellectual exercise of this newsletter, it’s useful to take Peretti at his word.
I feel like I should stress that this elimination-vs.-augmentation framework is very rough, and even though I see it used often by intelligent AI-curious people, I am not positive it stands up to scrutiny. At the same time, as I keep saying, I am also not yet convinced that AI is genuinely capable of or cost-effective at either replacing or helping journalists, so it may not really matter.
In this sense CNET and Buzzfeed may not represent two contrasting paths -- one treacherous, one noble -- so much as two concurrent tumors to biopsy and monitor. One is almost certainly cancerous, and must be eliminated; the other seems benign, but should probably be removed if it keeps growing.
Software writers have been living with an AI helper, GitHub Copilot (https://github.com/features/copilot/), for the past year or so.
This feels pretty similar to the augmentation path for AI in journalism: Nothing this thing produces could replace a human software writer, and many of its suggestions are dumb. But it can help answer questions, and it occasionally makes suggestions that let you skip a bunch of boilerplate.
This coming generative-content-for-SEO battle is interesting to me because I increasingly think that AI and search engines are part of a broader cyclical trend in which information is organized, decays through entropy, and is then reorganized in a new form.
Search engines like Google and community efforts like Wikipedia were incredibly useful at their apex because of how cleanly they organized information. But as those places became central, financially important institutions, they were corroded by all sorts of forces and gradually began to rot (I think Wikipedia is still pretty good, but it doesn’t have that Alexandria feeling anymore).
ChatGPT is impressive, but it took a ton of bespoke labor to build all those datasets (in particular the reinforcement learning component). I wonder what happens when the paint starts to chip and all that training data gets dirtied by the fingers of commerce?