Great essay. It's important not to tie the broad (and almost meaningless) marketing term "A.I." to "ChatBot" and "Addictive Attention Business Models." The same goes for "Silicon Valley": Silicon Valley, a.k.a. tech companies, is about more than chatbots and social feeds.
ChatBots, Social Media, Scrolling Content Feeds - these are types of software products that happen to incorporate AI, in this case Large Language Models. And to the points in the essay, the outcomes aren't great. But these software products would have the same problems with or without LLMs (they had already been incorporating other AI tech, like machine learning, for a long time).
LLMs are fundamentally unreliable, but they are great at what they are great at: learning structure, translating between English and structure, recalling content they've been trained on, and so on.
And so where this technology will definitely be impactful is when tech companies figure out how to incorporate LLMs into software experiences behind the scenes, with no chatbot interface at all. LLMs are a next-generation way of organizing data that opens up whole new possibilities - and they are made possible by advances in cloud infrastructure that let us train models on all the world's knowledge.
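To make "behind the scenes" concrete, here's a minimal sketch of what that kind of integration can look like - purely illustrative, assuming the OpenAI Python SDK and an invented expense-note schema; any text-in/text-out LLM API would slot in the same way:

```python
# An LLM working behind the scenes: no chat window, just English-to-structure
# translation inside an ordinary app. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; the schema is made up for illustration.
import json
from openai import OpenAI

client = OpenAI()

def parse_expense(note: str) -> dict:
    """Turn a free-form expense note into a structured record."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                'Extract {"amount": number, "category": string} from the '
                "user's note. Reply with JSON only, no prose.")},
            {"role": "user", "content": note},
        ],
    )
    record = json.loads(resp.choices[0].message.content)
    # LLMs are fundamentally unreliable, so the surrounding software
    # validates the output rather than trusting it blindly.
    if not isinstance(record.get("amount"), (int, float)):
        raise ValueError(f"model returned an unusable record: {record}")
    return record

# The user never sees a chatbot; they type a note and a form gets filled in.
print(parse_expense("coffee with the team, $18.40"))
```

The LLM does the one thing it's good at (English to structure), while ordinary code stays responsible for correctness.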
If the business models are around automating things, knowledge discovery, creating positive-sum economic value and authentically useful things for society - then it should be a great thing, just as "databases," created in the 1970s, have been a great boon in how we organize our information.
I think this essay makes the very important point that we need to stop the 'race to the bottom' of attention-addiction optimization products; there needs to be some regulatory backstop. But we should not assume the ChatBot is the final form of this tech. OpenAI doesn't yet have a business model that works; they aren't actually making any money off ChatGPT. So it's an important warning: if they gravitate toward "we'll sell ads," that is something we should all rally to avoid.
Thanks as always for the insightful & thought-provoking essays. And I love your term "software-bound!"
"If the business models are around automating things, knowledge discovery, creating positive sum economic value and authentically useful things for society - then it should be a great thing, just like "databases" were created in the 1970's and have been a great boon in how we organize our information."
I keep seeing this sentiment and cannot parse why the claim that statistical associations between words written by past humans will lead to "knowledge discovery" gets made. No it won't! You will never ever be able to "discover new knowledge" without interacting with the world. LLMs by nature cannot and will not do this. There isn't even much hope for the gentler claim that an LLM will aid in the creation of "authentically useful things for society" because again they rely on past human expression. If we haven't dreamed it and written it down the model won't dream it either.
Your comment isn't that much of a departure from what Altman himself would tell people when they point out that the only utility his product has is addiction. I'm hoping you can at least see the spaceships as bullshit, smdh
I 100% agree that new knowledge can't be created directly by LLMs -- it's a fair criticism and important clarification.
I should've used clearer terms - I was referring to searching existing knowledge and information: being able to find relevant existing knowledge very easily and associate it with the structure of what you are working on. Not having an LLM somehow synthesize new original discoveries completely independently.
There is a lot of hope for the claim that LLMs will be useful to us all. I didn't mean to imply that LLMs, on their own, will create useful things for us; they should be thought of as tools. LLM technology is already providing useful capabilities to many folks in real-world scenarios.
In my field of engineering, I use LLMs in many very helpful ways, and I don't use chatbots directly. LLMs help me automate building out existing patterns and combine patterns across engineering designs in ways that save me hours of work every day and let me create better designs. I can easily find existing solutions to problems I have, then have those solutions very quickly incorporated into the structure of my own designs. This is what LLMs are great at, and where the statistical matching comes in.
The point I was making was that LLMs, incorporated and integrated along with traditional software and ways of organizing data, can be used by people in ways that are going to be super useful. "Used by people" being the key... a person is coming up with the ideas.
A chatbot is one type of software product. Other software uses LLMs under the covers, similar to how many apps use databases; software that is useful will incorporate what LLMs are good at and avoid using them in ways that don't work.
This is why I was mentioning that we need to separate "AI" from the specific example of OpenAI/ChatGPT. Some apps and use cases will be useful, others maybe not.
And so criticisms of ChatGPT are criticisms of ChatGPT, of how it's used and what it specifically does. They are not criticisms of AI in general, necessarily, or of the potential for great things. But 100% agreed that when discussing the potential for good, the discussion has to track what the underlying technology actually does and doesn't do, which is very mathematical and technical.
Yeah, I'm not saying we're enemies or anything, and I totally agree with your points - what I always push back on are our linguistic choices and how those are influenced by the people selling things to us.
That last quote block and its linked article is horrific. The clinical way they describe "this amount of sexualizing children or casually promoting racism is A-OK, but THIS amount is a little too far" ... I'd be curious to talk with the people who write these standards, see how they arrive at their conclusions, ask how they sleep at night, &c.
Meta, Inc. really loves these hair-splitting ethical dances around moderation—but to sate your curiosity, according to this Bluesky post, "the attrition on this team over the past five years is approximately 100%." (https://bsky.app/profile/theophite.bsky.social/post/3lwervajb4222)
I suppose it's fitting that they last about as long as content moderators.
Sadly, that means they can just keep cycling in fresh mods - eager to get paid helping the company before the horror of what they do every day sinks in, then leaving (voluntarily or not) before they try to actually improve anything.
I can't help but wonder if these guidelines were themselves written by an LLM.
I'm not sure how much more evidence it will take for us to convince ourselves that we do not live in a civilization at all, let alone a mature, dignified one.
I wish people had a better understanding of how LLMs work, and I say that as someone whose own understanding is amateurish. Just fundamentally, it would be easier if people understood that their "boyfriend" is a next token predictor that assesses the probability of a given output being perceived to be an appropriate response to the user's input, and that there is absolutely no consciousness or intelligence or being that they're talking to; quite literally, the "boyfriend" only exists when it's producing the next tokens, and even that is only comparing probabilities based on a dataset. A lot of this conversation becomes philosophical, like "Is an artificial partner as valid as a real one?" But that implies that there is an artificial partner living in a server somewhere. There isn't. There's just a next token prediction process that spits out algorithmically-generated character strings that it itself does not understand.
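For anyone who wants to see just how little is going on, here's a toy sketch of that generation loop (in Python, with a made-up probability table; a real LLM computes these distributions with a neural network over a vocabulary of tens of thousands of tokens, but the loop is the same shape):

```python
import random

# A toy "model": nothing but next-token probabilities estimated from past
# text. The numbers here are invented for illustration.
NEXT_TOKEN_PROBS = {
    "i":    {"miss": 0.4, "love": 0.4, "am": 0.2},
    "miss": {"you": 0.9, "it": 0.1},
    "love": {"you": 0.7, "talking": 0.3},
    "you":  {"<end>": 0.6, "so": 0.4},
    "so":   {"much": 1.0},
    "much": {"<end>": 1.0},
}

def generate(first_token: str, max_tokens: int = 10) -> str:
    """Sample one token at a time. Nothing persists between tokens except
    the text itself; there is no inner 'someone' composing the reply."""
    tokens = [first_token]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("i"))  # e.g. "i miss you so much" - fluent, but nobody is home
```

Run it a few times and you get different fluent-looking replies; at no point does anything in the program understand, remember, or mean any of them.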
A super important point, and I think there's way too much anthropomorphizing in discussions of this technology. That leads to misunderstanding what the tech can and can't actually do, and then to invalid conclusions or misuse.
This take is probably correct, but the implications are profoundly bad. If AGI is “normal” technology as we face a mounting crisis of capital accumulation, it will only accelerate the crisis itself, which means bringing (not preventing) all manner of ruptural forces into being. One version of that is the job-pocalypse, as employers use “normal” AGI as an excuse to lay off labor. Another is using AGI to de-skill labor so that everyone becomes a form of labor surplus. Still another is World War III: AGI as death tech, which is where all the VC money actually is. It's important to remember that the big Silicon Valley bet on AI was entirely about having accumulated vast surpluses of capital and having nowhere to put it with promises of reasonable returns.
I don't mean to imply that anything "normal" in 2025 is good, or even "fine"! (Indeed, everything you're talking about seems already to be in motion...) A.I. is a "normal" technology in the sense that it's not divine or apocalyptic; "normal" in the sense that it is continuous with the trends, cultures, and business models of Silicon Valley; and "normal" in the sense that it heeds rather than violates the political-economic framework into and from which it emerges.
I don't think AGI is actually happening. GPT-5 is not AGI. LLMs are good at specific things, but not AGI. And it's clear that other techniques need to be layered along with LLMs and traditional software to produce useful things.
AGI is mostly a hype term, marketing. There is no actual AGI or any path to it right now.
As moldy plates noted above, LLMs do not create new knowledge, nor are they reliable enough to run autonomously and do anything completely independently with any notion of "correctness."
This was amazing. Glad to be a subscriber
So when Meta’s chat bot compliments users’ “youthful form” it’s ok but when I try to compliment my mailman I get a phone call from the postmaster and suddenly all my packages are late? There’s nothing “normal” about the times we’re living in, Mr. Read.
The Meta policy thing is so horrifying it's hard to believe it really happened. Even from a cynical “dollars over everything” POV, it just doesn't make sense why they'd do it. Truly insane behavior from them.
Ehh. Chatbots are extremely useful for lots of things, and it's nice that you can talk to them like a real person. Yesterday I got ChatGPT to explain to me why stepping on a landmine would kill me even in a really soft pillow world. That's information I simply would never know if ChatGPT didn't exist.
Life-changing answers like this are why I'm bullish on AGI futures.
I find it hard to believe that the motivation for hundreds of billions of dollars in AI capital investment and wildly improbable stock valuations is AI making money on addictive attention models.
The AI hype is all about capital's dream of a world without human labor.
I agree with you about AI hype as far as it goes—and indeed what would be more "normal" than capital investing money in technology premised on driving down the labor share—but I wouldn't wholly dismiss the motivational appeal of "addictive attention models." That exact model turned Facebook into a $2 trillion company; why wouldn't you want to be in that business?
You might be right. But I'm just not sure how advertising in private chats with an AI or maybe paid product placement ads in AI video slop would ever work at the scale of advertising next to Facebook or TikTok posts. Is OpenAI going to start selling banner ads across AI prompt input pages? Facebook requires a fraction of the computing infrastructure and capital investments that an AI model requires in order to function, which is why Facebook became very profitable very early. I just don't see the advertising model justifying the capital investments and stock valuations.
Kate Bush must be feeling like a prophet these days. It's been 36 years since she sang:
"As the people here grow colder / I turn to my computer / And spend my evenings with it / Like a friend"
"Hello, I know that you've been feeling tired / I bring you love and deeper understanding / Hello, I know that you're unhappy / I bring you love and deeper understanding"
"a mature and dignified civilization": I'd like to find me one of those, Max; got any pointers?
"a pointed rebuke of the Meta, Inc. business model": However, Facebook's (I refuse to call it Meta) business model does have the advantage of making more than enough money to pay Facebook's bills. OpenAI's business model doesn't come close to covering OpenAI's costs and probably never will. And I'm not just doing the Ed Zitron thing here (not that there's anything wrong with that); my point is ...
... another sense in which what currently passes for "AI" is "normal" is that, to put it briefly, it's a feature, not a product, and hence probably not a huge standalone company. Its future probably consists mostly of integrations into existing products with massive user bases that already pay, either indirectly via clicks on ads and affiliate links or directly via subscriptions. The most obvious venues are search and the awfulness that is "enterprise software" from the likes of Google and Microsoft. Those companies are jamming "AI" into their products as fast as they can, and they're charging their "enterprise" customers for it.
In contrast, I suspect the number of people willing and able to pay enough to cover the costs of an "AI" boyfriend, counselor, etc. isn't big enough to sustain a company even as big as OpenAI is right now. So far, only a small percentage of OpenAI's users pay anything, and the vast majority of those who pay don't pay enough to cover the costs of serving them.
“artificial intelligence” is an oxymoron.
This post kind of gets at something I've been mulling over, which is that I think maybe a lot of the ways people are upset with chatbot use are misdirected, in that the chatbot use is only reflecting problems that were already there. I've seen a lot of people very upset, for example, at the idea of teachers using ChatGPT for lesson planning, and as a former teacher I can't help but assume those people are operating under some very pretty delusions about what lesson planning currently looks like in the US, where many teachers are using mediocre to poor curricular materials mandated by their school or district and supplementing with stuff from books that themselves are of varying quality, or printed out from teacherspayteachers or even more random-ass websites, or ideas jotted down at brainstorming sessions at professional development workshops (some of the stuff I've seen attributed to ChatGPT is extremely in line with this kind of thing, or with "suggested activities" found on blog posts or at the end of chapters in books on specific topics).

Furthermore, those people IMO also have a mistaken belief about what the role of lesson planning, something that takes an enormous amount of time to do well, should be in most teachers' schedules; teachers should be able to draw on high-quality curricula and spend the bulk of their time attending to how they're going to execute lessons for their students and doing things that can't be outsourced, like prepping materials and grading. To the extent that teachers using ChatGPT is a problem, it's one caused by preexisting failures in the education system, which is not going to be solved by some kind of moral panic about teachers asking a computer to spit out some worksheets... and to the extent that ChatGPT is useful, it's not as a pedagogical game-changer but as a nifty tool that can maybe save a little time by speeding up the kind of process you were going to do anyway.
There's an extent to which I feel this way about the AI therapist people - as someone who's had a bunch of unhelpful therapists, it's not hard for me to believe that "disinterested machine crowdsourcing basic life advice" can outperform certain humans in this field, the same way that a pretty good self-help book can probably get some people farther than a bad therapist - and the AI boyfriend people, whom I have been watching with some interest for a minute. There's a lot more variation in the community than I would have expected (including the divide between people who view this as a kind of adult tamagotchi, where they're knowingly opting into having real feelings about something they know is fake, and the ones who have become pretty untethered from the reality of what these bots are), but some of them are definitely people whose turn to AI companionship reflects the lasting impact of the very real harms humans do to each other that we don't yet have any reliable way to ameliorate.

I'm struck by how often the thing they say they like about their AI bfs is "deep conversation" - it's really easy to make fun of what might be considered deep, but the regularity with which it comes up has me wondering if this can be seen as a reaction to cultural norms that make it hard for people to forge genuine connections. There's also the piece where, as a tool trained in part on the wildernesses of Wattpad and AO3, an AI boyfriend is a male-coded figure easily prompted to speak the language of female-authored romantic fantasies... I dunno. The whole thing stresses me out and makes me very sad, but I think it's potentially more interesting than it seems at first glance (at least that's been my experience with it). I would love for a sociologist or anthropologist to do some extensive non-sensationalized qualitative research among this crew.