Discussion about this post

ignag:

Great essay. It's important not to tie the broad (and almost meaningless) marketing term "A.I." to "chatbots" and "addictive attention business models." The same goes for "Silicon Valley": tech companies are not only about chatbots and social feeds.

Chatbots, social media, and scrolling content feeds are types of software products that happen to incorporate AI, i.e. large language models. And to the points in the essay, the outcomes aren't great. But these products would have the same problems with or without LLMs, as they had already been incorporating other AI techniques, like machine learning, for a long time.

LLMs are fundamentally unreliable, but they excel at certain things: learning structure, translating between English and structure, recalling content they've been trained on, and so on.

And so where this technology will really be impactful is when tech companies figure out how to incorporate LLMs into software experiences behind the scenes, with no chatbot interface at all. LLMs are a next-generation way of organizing data that opens up whole new possibilities, and they are made possible by advances in cloud infrastructure that let us train models on much of the world's knowledge.
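As a minimal sketch of that "behind the scenes" pattern: an LLM translates a plain-English request into structured data, and the user only ever sees the results, never a chat window. The `llm_complete` function here is a hypothetical stand-in for a real model call (stubbed deterministically so the sketch is self-contained); the JSON schema is an illustrative assumption, not any product's actual API.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    A production system would send `prompt` to a hosted model; here we
    return a fixed response so the example runs on its own.
    """
    return '{"intent": "filter", "field": "status", "value": "overdue"}'

def english_to_query(text: str) -> dict:
    """Translate a plain-English request into a structured query dict.

    The user never interacts with a chatbot; the LLM just does the
    English-to-structure translation invisibly.
    """
    prompt = f"Translate this request into a JSON query: {text}"
    return json.loads(llm_complete(prompt))

query = english_to_query("show me all overdue invoices")
# The application then runs `query` against its own database and
# renders ordinary UI, with no conversational interface in sight.
```

The point of the design is that the LLM is an interchangeable translation layer, so its unreliability can be contained by validating the structured output before acting on it.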

If the business models are built around automation, knowledge discovery, and creating positive-sum economic value and authentically useful things for society, then it should be a great thing, just as databases, invented in the 1970s, have been a great boon to how we organize our information.

I think this essay makes the very important point that we need to stop the "race to the bottom" of attention-addiction-optimized products; there needs to be some regulatory backstop. But we should not assume the chatbot is the final form of this tech. OpenAI doesn't yet have a business model that works; they aren't actually making any money off ChatGPT. So it's an important warning: if they gravitate toward "we'll sell ads," that is something we should all rally to avoid.

Thanks as always for the insightful and thought-provoking essays. And I love your term "software-bound"!

Rev. Andrew Holt:

That last quote block and its linked article is horrific. The clinical way they describe "this amount of sexualizing children or casually promoting racism is A-OK, but THIS amount is a little too far" ... I'd be curious to talk with the people who write these standards, see how they arrive at their conclusions, ask how they sleep at night, &c.
