What Facebook criticism can teach us about A.I. criticism
The danger of accepting an industry's terms for itself
I’ve found myself a little bit annoyed lately by some of the claims I’ve seen around the alleged “misinformation” threat posed by the latest generation of generative A.I. applications like Midjourney and ChatGPT. The A.I. researcher and critic Gary Marcus (to whom I’m generally very sympathetic) recently published in his newsletter an example of a jailbroken Bing chatbot spewing a QAnon narrative with fake “references,” and cautioned that the “potential for automatically generating misinformation at scale is only getting worse.” In the hothouse of Twitter, the claims and the warnings are even stronger, if somewhat vaguer:
This kind of response is obviously silly; we know quite well that you don’t need Midjourney or GPT-4 to create doctored photos or effective propaganda, and, even if generative A.I. might make such production slightly more efficient, as Sayash Kapoor and Arvind Narayanan write, citing Seth Lazar,1 “the cost of producing lies is not the limiting factor in influence operations.”
But beyond the basic point that generative A.I. doesn’t really change the economic or practical structures of misinformation campaigns, we know that deepfakes and “fake news”--whether or not they’re generated by LLMs--are not, in and of themselves, the cause or source of “misinformation” on a politically relevant scale, and moreover that misinformation campaigns or operations were largely inefficient failures on their own terms, and that misinformation on social media in general was far from the most important factor in the global rise of political instability, right-wing reaction, acceptable bigotry, etc. We know all this because we’ve spent a lot of the past decade thinking, observing, and arguing about Facebook (and its similar peers), and there are lessons to be gleaned from this knowledge as new and increasingly advanced generative A.I. apps are deployed across the internet.
Obviously the lesson is not “don’t sweat it.” I understand the sense of anxiety being articulated by these warnings about misinformation; I am still personally trying to figure out how I feel about “A.I.,” a task made no less difficult by what often feels like an astounding pace of change. Google has now introduced its own chat app, Bard, to contend with Microsoft’s Bing/Sydney; these megaplatforms enter the public A.I. arms race against a background of dramatic visible progress in the quality of output of the major generative A.I. apps, e.g.:
The weekly introduction of new chatbots and generative A.I. applications and the obvious recent velocity of advances in the capabilities of the large language models that power them--advances made immediately visible to the rest of us in the form of Midjourney shitposts on Twitter--have helped imbue every conversation or prediction about “A.I.” that I have read in newspapers (or on Twitter or in Discord) over the last few months with a sense of existential urgency, if not desperation, a vibe Ezra Klein articulated well in his recent Times column:
> “The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.” […]
>
> I find myself thinking back to the early days of Covid. There were weeks when it was clear that lockdowns were coming, that the world was tilting into crisis, and yet normalcy reigned, and you sounded like a loon telling your family to stock up on toilet paper. There was the difficulty of living in exponential time, the impossible task of speeding policy and social change to match the rate of viral replication. I suspect that some of the political and social damage we still carry from the pandemic reflects that impossible acceleration. There is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time.
>
> But that is the kind of moment I believe we are in now. We do not have the luxury of moving this slowly in response, at least not if the technology is going to move this fast.
I agree wholeheartedly with Klein’s conclusion, which is that “we cannot […] put these systems out of our mind, mistaking the feeling of normalcy for the fact of it.” But my immediate (and possibly very unwise!) instinct is to resist the sense of urgency, which I think is being imposed on us by the companies and people developing the “A.I.” systems to which we should be paying close attention.
“Imposed,” of course, in the obvious sense that the technology is not developing itself: It’s moving “this fast” because of decisions made by A.I. companies, in particular OpenAI, whose research and release schedule manufactures the “urgency.” But “imposed” also in the sense that this is exactly how OpenAI would like us to talk about it, i.e. as a world-historically important technology, just months away from inevitable total global transformation. A.I. doomerism is A.I. boosterism under a different name.
That this urgent A.I. millenarianism emerges from the same group of people who are developing “A.I.” is sometimes treated as a puzzle or contradiction. As Klein writes, “I often ask [A.I. doomer researchers] the same question: If you think calamity so possible, why do this at all? […] A tempting thought, at this moment, might be: These people are nuts.” But the dynamic is familiar to anyone who followed the mainstream discourse about “disinformation” and Facebook as it evolved in the years after 2016. As Joe Bernstein’s excellent 2021 Harper’s article on the subject explains, Facebook’s entire business proposition prevented it from dismissing the disinformation panic; it was better to cop to wrongdoing than to admit powerlessness:
> Compared with other, more literally toxic corporate giants, those in the tech industry have been rather quick to concede the role they played in corrupting the allegedly pure stream of American reality. […] Facebook’s basic business pitch made denial impossible. Zuckerberg’s company profits by convincing advertisers that it can standardize its audience for commercial persuasion. How could it simultaneously claim that people aren’t persuaded by its content? Ironically, it turned out that the big social-media platforms shared a foundational premise with their strongest critics in the disinformation field: that platforms have a unique power to influence users, in profound and measurable ways. Over the past five years, these critics helped shatter Silicon Valley’s myth of civic benevolence, while burnishing its image as the ultra-rational overseer of a consumerist future.
By the same token, if you are trying to sell A.I. systems (or secure funding for research), it’s better to predict total imminent A.I. apocalypse than it is to shrug your shoulders and say you don’t really know what effects A.I. will have on the world, but that those effects will probably be complicated and inconclusive, occur over a long timeline, and depend to a large degree on social, political, and economic conditions out of any one A.I. company’s control. Tweeting “It’s so over” is more likely to go viral than tweeting “It’s always already happening and will continue to do so forever.”
I recognize that to some extent this sounds like splitting hairs. Things are bad and might be getting worse! Who cares if we’re correct about the precise nature of A.I. risk and its possible outcomes, so long as we’re doing something to blunt its effects? But this is the exact problem with “urgency,” and precisely what we should learn from a decade or so of Facebook criticism: Facebook has obviously played a significant role in the political developments of the past decade, but as Bernstein documents, misapprehension about the nature of its role--a misapprehension encouraged by Facebook!--has directed an enormous amount of attention, energy, and resources away from anything like a realistic or achievable “solution” to the problems the company poses, not to mention the problems that the company simply exacerbates.
It would be good, I think, to recognize that Facebook both shaped political and economic conditions and was shaped by them itself. Would Facebook have been the same had its first decade as a public company not been marked by high unemployment, low interest rates, a soaring stock market, and a political establishment reliant on the tech and financial sectors as the key engines for economic growth?2 When we accept A.I. developers' own framing of their products as (1) inevitable and (2) politically and economically transformative, it becomes easy to elide the obvious fact that the forms A.I. takes (i.e., as chatbots! As "search engines"!) and the uses to which it is put (i.e., the jobs it will augment or replace! The tasks it will make easier or harder!) are contingent on the political and economic conditions in which it emerges.3
As an example of what I mean, take the legendary sci-fi magazine Clarkesworld, which last month suspended its open submissions after being overwhelmed with a deluge of A.I.-generated short stories. This is obviously a concerning development for independent publishers, but it’s not an inevitable consequence of widespread A.I. access: It’s a direct result of generative A.I. being deployed into a world in which popular TikTok and YouTube hustlers are touting A.I.-based get-rich-quick schemes like “generate sci-fi stories to submit to Clarkesworld.” Neil Clarke, the magazine’s editor, has since successfully reopened submissions; in his most recent editor’s note he writes that the real threat to Clarkesworld’s existence is Amazon’s capricious decision to end its Kindle subscription program.
The ongoing story of Clarkesworld suggests to me that A.I. is neither a wholly and immediately transformative technology, set to snuff science-fiction writers and publishers out of existence within months, nor an unimportant bust that will disappear when the hype dies down. Instead it’s, to put it in the most direct terms possible, another thing to deal with4 whose importance lies mostly in how it interacts with all the other things we have to deal with.
I’m open to the possibility that we rest on the edge of a precipice--that a world “unrecognizably transformed” by large language models is only a matter of months away, as Paul Christiano seems to believe. But a basic rule of thumb of this newsletter is that things change slowly and stupidly rather than quickly and dramatically, and a proper A.I. criticism needs to account for this likelihood. For now, I am filled with resentment to find myself once again in the midst of a discourse about technology in which the terms and frameworks for discussion have been more or less entirely set by the private companies that stand to profit off of its development and adoption.
I don’t agree with all of it, but at least on an analytic (and tonal) level I think the Lazar lecture from which these notes are taken is an excellent example of resisting breathless urgency in A.I. discourse without, on the other hand, dismissing the real and important risks of widely available generative A.I. and large language models.
Consider a thought experiment: If you were sent back in time to 2003 Harvard with a special, legally entirely non-threatening “mind-changing gun” and a single, explicitly non-lethal “persuasion bullet” and a mission to prevent Donald Trump from being elected president, do you go to Mark Zuckerberg’s dorm room and convince him not to start Facebook, or do you go to Harvard President Larry Summers’s office and gently, and, again, to be clear, non-violently persuade him that his preferred, inadequate and misguided response to the coming global financial crisis would hurt millions of Americans, further erode trust in political institutions, and usher into being the conditions in which Trump might be elected?
Of course, urgency and ameliorative reform have a constituency beyond A.I. investors and developers, just as anti-“disinformation” efforts provide a convenient way for elites to avoid confronting the many contradictions and weaknesses of the system from which they benefit. As Bernstein writes in Harper’s:
> Indeed, it’s possible that the Establishment needs the theater of social-media persuasion to build a political world that still makes sense, to explain Brexit and Trump and the loss of faith in the decaying institutions of the West.
Easier to propose an inadequate UBI and shrug your shoulders at the “inevitability” of A.I.-driven job loss than to address the balance of power between workers and employers or the wisdom of private ownership.
I feel comfortable with this description of A.I. (“another thing to deal with, along with all the other things”), even though it lacks the viral frisson of “shoggoth with a smiley face mask” or the other, cutesy-dramatic ways of describing large language models, because I feel confident it is not how OpenAI would like its products to be described.
Really like this take. I think one important sleight of hand that's taking place in these debates is the idea of "inevitability". There's a sense in which it's a real concern: AI research is being conducted globally and improving rapidly on many fronts. There's little we can do to stop *someone* *somewhere* from using this technology to sow discord and create misinformation. We should be ready for Russian dyeepfakenistkya and all that (though I share your skepticism about the level of actual danger there). OpenAI, though, is happy to let that inevitability be confused with the idea that this technology will inevitably upend the economy. That part's not inevitable! We could easily pass a copyright law saying that you can't sell content generated by a model that has been trained on IP you don't own. And if you actually enforced this with severe legal penalties, reputable companies wouldn't do it! Some of this stuff we could stop in its tracks, if we wanted to! (Setting aside questions of whether that's an economically, judicially, or ethically sound idea.)
Another question that I think will be important for understanding how this is all going to shake out is how "democratic" this technology will be. Right now the situation seems to be that everyone is riding an exponential wave of improvement, with large companies like OpenAI and Google at the forefront, but also open-source "my laptop and $300 in AWS credits" hackers quickly playing catch-up. I think there are really two possible futures depending on how the technology develops:
- Improvements in AI will eventually require tens of thousands of servers at a time and millions of dollars in electricity just to train one model, and the field will be controlled by a few global tech giants with the rest of the world calling their APIs, OR
- All the barriers keep coming down, and any new feat of AI is swiftly copied and distributed by smaller companies and enthusiasts and made available to the public, warts and all
OpenAI certainly hopes it's going to be option 1, and it's the more attractive option for people who want to think about AI policy and ethics because at least we would have some central points of control. But unfortunately we don't get to pick; it'll depend on how the technology behaves at new orders of scale and whether there are significant breakthroughs in data efficiency awaiting us.
Reminds me of this recent post from Mills Baker, an ex-Facebook design manager:
> Again, many thought we were able to influence electoral outcomes, and in some cases, even more fundamental phenomena, like “people’s beliefs” or “how we think about the world.” Yet there we were, presenting lame-ass designs to Zuck showing bigger composers, better post type variety, other ridiculous and pathetic ideas. Facebook, which many at the time said had “far too much power” to control discourse and warp reality, couldn’t persuade its users to post to Facebook.
(though you seem to assign more intentionality/responsibility to Facebook and the like in advancing the "Facebook influences people" discourse)
https://suckstosuck.substack.com/p/the-irrepressible-monkey-in-the-machine