13 Comments

Really like this take. I think one important sleight of hand taking place in these debates is the idea of "inevitability". There's a sense in which it's a real concern: AI research is being conducted globally and improving rapidly on many fronts. There's little we can do to stop *someone* *somewhere* from using this technology to sow discord and create misinformation. We should be ready for Russian dyeepfakenistkya and all that (though I share your skepticism about the level of actual danger there). OpenAI, though, is happy to let that inevitability be confused with the idea that this technology will inevitably upend the economy. That part's not inevitable! We could easily pass a copyright law saying that you can't sell content generated by a model that has been trained on IP you don't own. And if you actually enforced this with severe legal penalties, reputable companies wouldn't do it! Some of this stuff we could stop in its tracks, if we wanted to! (Setting aside questions of whether that's an economically, judicially, or ethically sound idea)

Another question that I think will be important for understanding how this is all going to shake out is how "democratic" this technology will be. Right now the situation seems to be that everyone is riding an exponential wave of improvement, with large companies like OpenAI and Google at the forefront, but also open source "my laptop and $300 in AWS credits" hackers quickly playing catchup. I think there are really two possible futures depending on how the technology develops:

- Improvements in AI will eventually require tens of thousands of servers at a time and millions of dollars in electricity just to train one model, and the field will be controlled by a few global tech giants with the rest of the world calling their APIs, OR

- All the barriers keep coming down, and any new feat of AI is swiftly copied and distributed by smaller companies and enthusiasts and made available to the public, warts and all

OpenAI certainly hopes it's going to be option 1, and it's the more attractive option for people who want to think about AI policy and ethics because at least we would have some central points of control. But unfortunately we don't get to pick; it'll depend on how the technology behaves at new orders of scale and whether there are significant breakthroughs in data efficiency awaiting us.


This is a great point--I often feel like "AI alignment," and the general insistence that any/all non-AGI concerns are essentially subcategories of the AGI concern, is precisely this kind of sleight of hand, remanding all these different types and scales of risk to this one particular problem. Even in the context of "economic transformation" there are multiple risks, multiple potential outcomes, some more evitable than others, and "alignment" is basically irrelevant to many of them.


Not to put too fine a point on it, but these motherfuckers all name their kids Duncan Idaho and then act like they've never heard of the Butlerian Jihad!


Reminds me of this recent post from Mills Baker, an ex-Facebook design manager:

> Again, many thought we were able to influence electoral outcomes, and in some cases, even more fundamental phenomena, like “people’s beliefs” or “how we think about the world.” Yet there we were, presenting lame-ass designs to Zuck showing bigger composers, better post type variety, other ridiculous and pathetic ideas. Facebook, which many at the time said had “far too much power” to control discourse and warp reality, couldn’t persuade its users to post to Facebook.

(though you seem to assign more intentionality/responsibility to Facebook and the like in advancing the "Facebook influences people" discourse)

https://suckstosuck.substack.com/p/the-irrepressible-monkey-in-the-machine


Yes, I read Mills's piece this week and thought it was really interesting! Re: Intentionality, I'm sure it ranges depending on department and salary grade... but as much as anything you need to advance *some* kind of narrative and "well, we're fucking huge, and absolutely raking in the dough, and, frankly, we have *no clue* what this thing we built does or how it works" is not confidence-inspiring, even if it is more accurate.


This is a very thoughtful piece, and worth reading. But I'm going to take issue with one link and how it's used not only here but in many other places these days.

https://www.nature.com/articles/s41467-022-35576-9?utm_source=substack&utm_medium=email

This report published in Nature has been touted as showing that disinfo campaigns don't really work. I can't fault the research. But the survey tests the premise that *Russian* campaigns *on Twitter* *aimed to swing voters* in 2016. That's the wrong question on three counts.

1. Twitter was not where all the fake news was happening in 2016... it was Facebook. And often in enormous closed/private groups. Originating from Russia in some cases but more often just spread from Russia - the actual stories came from Americans, or Macedonians, etc.

2. Operations on Twitter were largely used to sow doubt, gloom or fakery (the current trial of "Ricky Vaughn" being an example; the fake popularity of Calexit on election night being another one) and gain the attention of both MSM and fringe media outlets. Remember at the time people were writing stories about hashtags. I was writing stories about hashtags! That's influential for sure, but it is not the same as trying to directly influence voters through social media.

3. Lots of smart people still think that the goal was to flip votes. It was not and is not. Electoral disinformation campaigns, propaganda campaigns, and other less insidious things in the same realm aim to whip up enthusiasm on your side, and decrease the turnout on the other side. It's not about changing minds, it's about getting people to convince other people to come out and vote, or (usually more directly) discouraging other people from voting. The drip-drip nature of these campaigns means their true effect is incredibly hard to quantify. Maybe impossible.


This is a totally fair criticism, and I could have phrased the sentence with the link more carefully... but I think your argument here reinforces the larger point of the post, i.e. that the effects of big tech systems (whether Facebook/social media or "A.I.") are usually long-term, deeply enmeshed with other already existing systems, and very difficult to quantify or sometimes even to "see." This doesn't make them more or less bad, of course, but it changes how we might strategize a response.


Spot on, agreed. My issue is definitely more with the "anti-Russiagate" crowd. Not the diehard Trump fans, but the kind of people who should know better. Russians did try to influence the 2016 election. That's a fact. (so did Americans, and Macedonians.) What kind of effect that had, we'll never really know. Thanks for allowing me to vent here.


I can't articulate it very well, but my grand thesis about why I'm pessimistic about AI, not in a doomer sense but in a "how will this affect my day-to-day life" sense, is that however much shame Silicon Valley had was a low-interest-rate phenomenon. Like, call it performative, but I don't think a scandal in 2023 is going to result in a Mark Zuckerberg hugging tour of America.

So I think you end up with this perfect storm for bad outcomes, where you have a laser focus on the bottom line, relatively easy-to-implement tools with a lot of business use cases, a gutting of the class of employee that would really push back on reckless implementations (good-hearted PMs, CX, ethics teams, etc.), a herd-animal C-suite class that has gotten increasingly brain-poisoned, the horrible legacy of Ajit Pai, and so on.


We'll have to hope that the end of ZIRP also means the end of political fealty to Silicon Valley--if you can no longer plausibly claim to be the key "wealth-creation engine," and are just one huge and not particularly well liked industry among many, there might be some political will to actually regulate you. Though who am I kidding lol


"A.I. doomerism is A.I. boosterism under a different name."

This is so good. In an environment saturated with apocalyptic rhetoric, the natural response is for everyone to say "QUICK! Go do A.I. things! Use the tools! Keep up! Adapt! Or else you'll be CRUSHED by the inevitable tidal wave of change that is even now upon us!"

Of course this is the message that the for-profit proliferators of LLMs want you to hear. This is best case scenario for them. Frightened people crowding each other to buy/learn/use every single new thing they release.

The unfortunate thing is that this all becomes sort of self-fulfilling. The more convinced everyone is that something is going to change the world, the more fuel gets poured onto the market-driven fires of innovation and production, which accelerates the change everyone is shouting about.

Still, it seems like the best impulse, as ever, is to quietly watch, preparing to take action only when necessary, and only deliberately, not as some fearful reaction to the panic du jour.


The last line of this piece is absolutely perfect.
