Discussion about this post

The Vaping Trauma Surgeon:

Really like this take. I think one important sleight of hand taking place in these debates is the idea of "inevitability". There's a sense in which it's a real concern: AI research is being conducted globally and improving rapidly on many fronts. There's little we can do to stop *someone* *somewhere* from using this technology to sow discord and create misinformation. We should be ready for Russian dyeepfakenistkya and all that (though I share your skepticism about the actual level of danger there). OpenAI, though, is happy to let that inevitability be confused with the idea that this technology will inevitably upend the economy. That part's not inevitable! We could easily pass a copyright law saying that you can't sell content generated by a model trained on IP you don't own. And if you actually enforced this with severe legal penalties, reputable companies wouldn't do it! Some of this stuff we could stop in its tracks, if we wanted to! (Setting aside questions of whether that's an economically, judicially, or ethically sound idea.)

Another question that I think will be important for understanding how this is all going to shake out is how "democratic" this technology will be. Right now the situation seems to be that everyone is riding an exponential wave of improvement, with large companies like OpenAI and Google at the forefront, but also open-source "my laptop and $300 in AWS credits" hackers quickly playing catch-up. I think there are really two possible futures depending on how the technology develops:

- Improvements in AI will eventually require tens of thousands of servers at a time and millions of dollars in electricity just to train one model, and the field will be controlled by a few global tech giants with the rest of the world calling their APIs, OR

- All the barriers keep coming down, and any new feat of AI is swiftly copied and distributed by smaller companies and enthusiasts and made available to the public, warts and all
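A rough back-of-envelope sketch makes the scale of option 1 concrete. All the figures below (GPU count, power draw, run length, electricity rate) are illustrative assumptions, not numbers from the original comment:

```python
# Back-of-envelope electricity cost for one large training run.
# Every figure here is an illustrative assumption.
gpus = 10_000          # accelerators running in parallel
watts_per_gpu = 700    # draw per accelerator, ignoring cooling overhead
days = 90              # length of the training run
price_per_kwh = 0.10   # USD, a rough industrial electricity rate

hours = days * 24
energy_kwh = gpus * (watts_per_gpu / 1000) * hours
cost_usd = energy_kwh * price_per_kwh
print(f"{energy_kwh:,.0f} kWh -> about ${cost_usd:,.0f} in electricity")
```

Under these assumptions the electricity alone lands in the low millions of dollars, before counting hardware, networking, and staff, which is what puts runs of this size out of reach for the laptop-and-AWS-credits crowd.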

OpenAI certainly hopes it's going to be option 1, and it's the more attractive option for people who want to think about AI policy and ethics because at least we would have some central points of control. But unfortunately we don't get to pick; it'll depend on how the technology behaves at new orders of scale and whether there are significant breakthroughs in data efficiency awaiting us.

Jasmine Sun:

reminds me of this recent post from Mills Baker, an ex-Facebook design manager:

> Again, many thought we were able to influence electoral outcomes, and in some cases, even more fundamental phenomena, like “people’s beliefs” or “how we think about the world.” Yet there we were, presenting lame-ass designs to Zuck showing bigger composers, better post type variety, other ridiculous and pathetic ideas. Facebook, which many at the time said had “far too much power” to control discourse and warp reality, couldn’t persuade its users to post to Facebook.

(though you seem to assign more intentionality/responsibility to Facebook and the like in advancing the "Facebook influences people" discourse)

https://suckstosuck.substack.com/p/the-irrepressible-monkey-in-the-machine

