Chiming in to say that the best use of AI chatbots is to ask them to write code, regular expressions, or formulas for tasks you wouldn't bother attempting otherwise because you learned to code/use Excel years ago but have since forgotten. ChatGPT is, like, way better than a programmer colleague or StackOverflow at explaining/creating some dumb basic CSS or Excel thing for people like me who make/analyze internet content but are easily frustrated with anything beyond basic math. Make the chatbots do math, because they are computers and should be good calculators (even though they don't always get the math or code right, which is hilarious in its own way; but then I don't exactly know how my own organs work either, and that's the argument I'm assuming Sam Altman would make).
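For what it's worth, here's the kind of throwaway request I mean, with a quick sanity check since (as noted) the bots don't always get it right. The pattern and helper name are my own illustration, not anything a chatbot actually produced:

```python
import re

# Hypothetical example: the kind of one-off pattern you'd ask a chatbot for,
# e.g. "a regex to pull dollar amounts like $1,234.56 out of a line of text".
DOLLAR_RE = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

def extract_amounts(text: str) -> list[str]:
    """Return every dollar amount found in `text`."""
    return DOLLAR_RE.findall(text)

# Because the bots don't always get the pattern right, sanity-check it on a
# few known cases before trusting it:
assert extract_amounts("paid $5 then $1,234.56") == ["$5", "$1,234.56"]
assert extract_amounts("no money here") == []
```

The asserts are the real point: a chatbot regex is a first draft, and two known-answer test strings catch most of the plausible-but-wrong ones.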
Otherwise, thank you for pointing out the surface-level hilarity in these first two situations. "I want a generative AI that's really accurate at depicting Nazis" is just such a weird standpoint for people who seem hell-bent on inaccurately identifying ideology.
I mostly agree. Though for those learning to code, LLMs can be a crutch and mean that they never really learn to code. Now, is that really a problem? Maybe in 5-10 years, LLMs will be so good that you won’t need to code. (I am skeptical, but it is a possibility).
From what I have heard from colleagues, this is a good use for them.
A very amusing post. I am amazed at the apparent ignorance of some of these so-called leaders about what "AI" is. They seem to think that these LLMs actually are intelligent, when in reality that term is just good ol' fraudulent labelling. They don't learn, and they don't understand. They don't merely have no morals; they cannot have morals. It's like expecting your car to have morals: just preposterous. Someone much smarter than me came up with a great term for what "AIs" actually are: stochastic parrots.
I also have to admit that the question of "why?" has never occurred to me. The only real use *now* that I am aware of is what you say: creating large amounts of junk text for manipulating Google, marketing, overloading participation systems (e.g. in politics), etc. Sure, you can use them to summarise stuff. But who'd want to, say, use an LLM to summarise legal cases when the LLM is so unreliable at doing it? Now, there are other things called "AI" that actually are quite useful, such as image-upscaling technologies (e.g. Nvidia's DLSS). Those work shockingly well. But I'm not sure whether they have any relationship whatsoever with LLMs. And you certainly don't need a building full of GPUs (let alone dedicated "AI" chips) to perform that task.
PS: Just to clarify, I also haven't (yet) looked into what "AIs" can be used for today, or in the foreseeable future, and I've not had much time to keep up with the trade press lately. There are certainly quite a few things where they could plausibly be (or become) useful, as I understand it. Oh, and the technology seems to have replaced the crazy hype about blockchains (much harder to find plausible beneficial uses for that one, though the technology itself is genius), so that's something at least ;)
Also, technologies have a way of being used in unforeseen ways: Nobel tried to improve mining and vastly increased our ability to kill and destroy. Facebook was, if a certain email is legit, developed to harvest personal data, and now it's used for everything from finding romance to creating the conditions for genocide, and for trying to stop genocide. So I wouldn't dismiss "AI" just yet, in spite of all its flaws and dangers (not the Skynet BS, more the "overloading participation" and marketing I mentioned earlier, plus the immense resource usage).
A few weeks ago someone on BlueSky posted a link to a study showing that ChatGPT had failed to diagnose patients' health conditions based on information provided to it. I reposted it, lightly poking fun because ChatGPT had failed at something it was clearly not designed to do. And the guy blocked me! So, thanks for *validating* my skepticism.
A major issue with 'scenes' that needs to be noted: they've basically been priced out of existence, legislated out of existence, or shut down; the elderly call the cops on you, and your parents want you sealed in carbonite for fear that everyone is a child kidnapper.
There's basically nowhere for kids to 'scene'.
Yes, absolutely. Hard to throw all-ages basement hardcore shows when no one can afford to rent the house with the basement. No accident that Berlin was the epicenter of subcultural creativity in the west for the 2010s—it was the only city of its size that was affordable to live in!
Like, it costs 20 bucks to see a movie.
No arcades or roller rinks.
Where are kids supposed to go?
As several other commenters have pointed out, AI and LLMs have some interesting and potentially very beneficial uses - none of which include "human interaction", something that humans are already pretty good at (the Raichiks of the world aside). Can this hammer wash my dishes? Can my car wish me a good morning? Can my fridge engage in Socratic dialogue? Fundamentally these companies are trying to do something of very little value, with very little demand, with tools poorly suited for the task.
I guess the exception to (some) of those complaints would be, as you say, the production of large quantities of garbage content, for which there seems to be both ample demand and low bars for quality. That strikes me as being closest to the endless parade of plastic crap that gets pumped out into the world because production is cheap enough to make questions like 'need' or 'quality' irrelevant; less a new frontier than a new form of landfill.
There's certainly a vicious cycle where the more guardrails they put on their tool, the more liable we hold them for the things that it produces. But I think part of the secret is that the version with no guardrails at all is far worse than we have been imagining. Like if I was trying to ask a troll on 4chan for help with my programming assignment -- who knows whether they even want to help me. Maybe they'll just berate me for being an idiot who doesn't already know the answer to my question and tell me to kill myself. Or give me something plausible but that will ruin my computer just for the simulated lulz.
In some sense, the completely no-guardrails version is literally impossible to use functionally! Even setting an LLM up as a chatbot with "no guardrails" requires a bunch of prompting.
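To sketch what I mean (this template is my own illustration, not any vendor's actual format): a raw completion model just continues whatever text it's given, so even a "no guardrails" chatbot needs scaffolding that labels the turns and cues where the model should start writing.

```python
# A base completion model has no notion of "conversation" on its own, so
# even the rawest chatbot setup wraps the exchange in a turn-taking prompt.

def build_chat_prompt(history: list[tuple[str, str]]) -> str:
    """Flatten (speaker, message) turns into a completion-style prompt."""
    lines = [f"{speaker}: {message}" for speaker, message in history]
    lines.append("Assistant:")  # cue the model to reply as the assistant
    return "\n".join(lines)

prompt = build_chat_prompt([("User", "Can you help with my assignment?")])
# Without this scaffold, a base model might simply continue the user's
# sentence rather than answer it.
```

That scaffolding is already a guardrail of sorts: it tells the model who it's supposed to be, which is exactly the choice the "no guardrails" crowd pretends isn't being made.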
> but I think many people inside and outside the software industry are deeply emotionally and often financially invested in the answer to both questions being “yes.”
I think the typical worker in the software industry is not that invested and sees a technology that is being way overhyped. Nate Silver and Ben Thompson and Marc Andreessen live on a different planet.
I'm 52 and never had kids, so I'm even less qualified to weigh in on what "young people" are doing now. In my teens I read zines like Flipside and Maximum Rock 'n' Roll, and I always read the scene reports, which were full of esoteric local references but shared a commonality: insiders and inside jokes, people who knew their way around. Because there was no real scene where I lived in rural Pennsylvania, it was exciting to imagine what they were like. In my 20s I toured all over the US and Europe in various hardcore bands, and I experienced these scenes in person--and they were exciting! They shared a lot of characteristics: a core group of "kids" who showed up for every show, a promoter or two, venues that would change year to year, a few good local bands, someone who took photos at every show, someone with a zine who would interview the touring bands, and so on. What was common to all those participants was existence in the *physical reality* of their town. You'd show up and someone would take you to a place to eat, a place to get coffee, the record store, the music store if you needed to repair your gear, a swimming hole in the summer--real local knowledge connected to real local places.
We're physical beings--and this ties into your "let's see them make small talk"--so I have to imagine these micro-scenes (in the grand scheme) had some durability and weight to them because they developed in a physical world. Someone could always tell you directions from the store to the nearest place to buy beer, and they didn't need to look it up on a phone.
Freakin phenomenal and an excellent read. Wasn't aware of the "pope controversy," but completely agree that figuring out what AI chatbots ARE FOR would be a great service to (gestures at the world). The hammer-and-many-nails metaphor is a decent one, which I'll definitely borrow. Have you read up on the Tutor AI developed by Khan Academy? I haven't heard too much, but what I have heard is only positive, and it's DEFINITELY a good use case.
Shorter blurbs are welcome! Bring back blogging!
If middle schoolers are going to use Gemini to write their essays arguing that “you can’t really tell who’s worse, Hitler or Mother Teresa”, I could see how that is a Bad Thing. On the bright side, I doubt the middle schoolers will even read the essay.
That would be bad! But it seems to me the bad thing there is just that they're using the chatbot in the first place, not the stuff it generates. (And as you say, they won't even bother reading it, so it won't taint their perfectly smooth little brains.) And anyway, isn't this what "grading" is for?
Quick note: I am old enough to remember the very early days of the internet in the early 90s and back then no one knew what its "nail" was either. It let you do all these random things--look up the weather, sports scores, exchange text messages with strangers--but a common theme at the time was that while this was all very cool, how was anyone going to actually make money with it. So plus ça change and all that.
Totally, but this is the story VCs tell themselves about any early-stage technology. Survivorship bias means everyone talks about the internet and not about any of the multitude of purposeless technologies that didn't command 100x returns on early investment. And even then, consider that, if you extend the metaphor, no one (to my knowledge) got angry when the early commercial internet didn't do exactly the thing they wanted it to do. It was hard to say it "failed" in the same way that people claim LLM chatbots "fail" because expectations were much more properly calibrated.
Yes and no, Max. Back then people were more civil in public discourse, and there was no social media to amplify angry voices, no way for anyone other than "experts" to have a voice, period. So everything was anecdotal.
But plenty of people thought the internet was stupid and the promises--shopping in your underwear--were things no one wanted. Remember that even with a new super-fast 14.4 modem, it could take 15 minutes (and several failed attempts) to actually log on to Prodigy to check the score of the previous night's Mets game. So many people saw it as overpromise and couldn't wrap their heads around the notion that what took 15 minutes in 1994 would be more or less instant in 2024.
Or to put it in more basic terms, late night talk show hosts got a lot of mileage from jokes about how "that internet thing" was nothing but a waste of time.
it's insane how some of the smartest people in the world are working to make the chatbot woke. or hitler? instead of doing literally anything useful lol. uhmmm yea we just came up with a new way to chunk this billion-parameter matrix. our state-of-the-art model can now consistently add two numbers 70% of the time (+5% over prior baseline!). very normal and probably a good sign
It's good that some of the smartest people on the planet are working on making the chatbot slightly more woke or racist, so that some of the dumbest people on the planet can cry when it generates the text about Hitler that they asked it to generate.
It is funny... it seems like these people so fixated on "AI" (as represented by LLMs) basically want it to be human (but better, because computer). That's not an original thought, but just think about the social logic of "well, if we (me and the LLM) can't agree on our 'values' ('agree' that the pope is white), then I don't think we should hang out (ask the LLM to write copy or code for me, consult it on romantic or social situations like I might a close friend, etc.)." Maybe there's an undercurrent of actual search for validation from the LLM itself? "Pichai should resign for his irresponsible conduct in the synthesis of artificial human consciousness" type beat. The Black pope stuff is undeniably sick though.
Noticed this post is missing the link to the (quite interesting!) NYT mag piece. Thanks for sharing! https://www.nytimes.com/2024/02/21/magazine/aesthetics-tiktok-teens.html
My bad, thank you Jeremy!
Some music for you:
Al Usher's "Hilversum" EP
Dustin Wong's "Perpetual Morphosis"
Roadhouse's "Aladdin Sales"
April Magazine's "Wesley's Convertible Tape for the South"