Even before I understood ALL the ways in which cryptocurrencies were awful in the dumbest of ways, I had a watershed moment where I lost all interest in what had, until then, been hyped up to me as the tech of the future: the realization that all it ever was, was money. Just speculation. And anything else being touted around it was just reputation laundering.
I had a similar moment with AI, when I understood that the main driver behind our collective interest in it is just getting people fired. That is it. If you can automate a task, you don't have to pay people to do it. And that is why every startup needs to be AI-based: it means cheaper operational costs. At the risk of being overly reductive, it struck me hard that, to a large or at least non-trivial degree, getting rid of workers is probably the biggest driver behind AI research. Not to have a "brave new world", just a cheaper old & cowardly one.
Unfortunately I think some form of that is at the bottom of almost every big business, especially if there's a lot of VC or other big-investor money behind it. Someone may originally have had an idea they thought was cool or beneficial in some way, but as soon as the business guys got their hands on it, the only point was money, and in many cases that means cutting labor costs. Private equity is about nothing else at all--roll up companies, squeeze the money out of them by cutting everything you can think of, then get out before the withered husk collapses.
It's the curse of ever-increasing profits. At a certain point there's not much else to do but cut labor costs if you need to improve the bottom line: make the product or deliver the service with fewer workers, or dumb down the necessary processes enough that you can replace "skilled" with "non-skilled."
Until we start to value stable, steady companies over perpetual growth machines, this logic will continue to dominate us, and eventually put most of us out of work.
You're right about the pervasiveness of these banal motives behind every aspect of industry, but I still think that with AI there's something particularly underwhelming about the way it translates into profit.
Compare it to pharma: an industry that is driven by money and does all sorts of morally questionable things, but at the end of the day a drug that cures cancer is a tremendous advance that makes the world qualitatively different. This is not a defense of pharma; I think that without the extensive privatization of medical research the world would still be able to conduct it, maybe even better. My point, rather, is that despite the ugliness there is still something there to be excited about.
A call center that is operated entirely by software, on the other hand, is just an underwhelming, even more soulless version of an already bleak thing (the call center). This is all just about achieving the efficiency of human capacity without the friction of human interaction.
My father was an accountant before the invention of Excel and the widespread adoption of calculators (which was not that long ago). He told me that everyone thought they would be put out of work by these inventions, and he was genuinely worried he would be laid off. This, of course, did not happen, and I believe "AI" going forward will be exactly as "disruptive" as calculators and Excel: there will be terrible things, and people who want to do terrible things will have some new tools, but it will not fundamentally change anything about humanity.
One thing I think is never discussed is that we may actually be at something of a high-tide point for even this type of generative AI.
In order to keep improving, these large models need to continuously ingest data. That's all well and good while the models are gated inside the Googles and OpenAIs of the world. For the last several years (decades, even), companies have had access to an expanding trove of human-generated content to feed into the models, with the only limitation being the effort required for humans to tag and categorize that data.
Except, once more tools come online for consumer use, more of the content being published will itself have been generated by a model, and that content will in turn be absorbed into newer models. Think of people who put code repos up on GitHub or submit Stack Overflow answers with code created by Microsoft's Copilot, or who use GPT-3/4 for SEO hacks that push scammy sites to the top of search-engine results. It's highly likely we will reach a point where the output stops improving and in fact starts to regress, thanks to the garbage-in, garbage-out principle.
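Here's a toy sketch of that feedback loop in Python, purely my own illustration: a Gaussian stands in for the generative model, and each "generation" is trained only on the previous generation's output.

```python
# Toy "model collapse" loop: each generation fits a model (here just a
# Gaussian) to its corpus, then the next generation's corpus is sampled
# purely from that fitted model. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(loc=0.0, scale=1.0, size=500)    # gen 0: "human" data

for generation in range(10):
    mu, sigma = corpus.mean(), corpus.std()          # "train" on the current corpus
    print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")
    corpus = rng.normal(mu, sigma, size=500)         # next corpus: model output only
```

Run it a few times and the fitted sigma drifts, and in expectation slowly shrinks: the distribution's tails, i.e. the rare and interesting stuff, are the first thing to go.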
It seems to me that the best way to ensure some sort of "quality" for trained AI is to keep it far, far away from social media. If AI is being trained on the vast ill-informed fever swamp of social media, it will be utterly useless.
I assume you already read this but just in case:
https://dynomight.net/llms/
I haven't read all the comments, so maybe I'm rehashing another critique, but it's important to bring up the common-sense knowledge problem, and the fact that knowledge isn't just found in a mind (or an AI mind) but in the world, to be experienced.
Being in the world is the most important mode of being. Knowledge shows up in the familiarity of things, and when we learn something about the world, things look different. We as humans don't just learn new facts about the world; rather, the world keeps changing the way it looks for us. I'm not sure AI has solved this.
It's essentially another question to add to your handful: how is AI actually getting closer to the human, beyond being able to think in a Cartesian way? How about actually being? Without being, it won't replace humans as well as is desired.
Intelligence can't be reduced to symbol manipulation that represents reality, as AI does in order to "learn"; it also needs the unconscious processes that humans rely on.
I may be behind the times, but it’s a good question to ask if AI has solved this yet.
I feel like the distinction missing here is between AI as a concept and AI as it currently exists, via ChatGPT and so forth. By the former, I mean the idea that there's absolutely no reason a sufficiently powerful computer can't do everything a human brain can do, and by extension no reason a MORE powerful computer can't do significantly MORE than a human brain can. If we ever achieve AI like that, orders of magnitude smarter than any human, it will be genuinely transformative. Think about every scientific discovery humans have made that has made life better and easier, and imagine a machine that could duplicate all of them in an afternoon and then start making more. The world will turn upside down, hopefully in a good way, maybe in a bad way. I believe that will exist some day; the question is when.
Currently existing AI obviously isn't there yet, but the question is whether the current method of neural networks and training algorithms will get us there, just with more processing and larger data sets, or whether it's a dead end and we need a totally different approach to get true superintelligence. I don't know, and neither does anyone else. Some people like to denigrate current AI by saying it's just pattern-matching, which is true on some level, but it's surprisingly difficult to rigorously point out ways in which human intelligence differs from pattern-matching at a high enough degree of abstraction. It's totally possible that the future of AI is a scaled-up version of what currently exists. I think that what currently exists is basically party tricks, but it could scale up into something way bigger really fast.
As for the "why is this big now": I think it's partly because we finally have the technological means to train models big enough that the magic happens. The basic technique of neural networks with backpropagation was invented by a Canadian in the '80s; no one thought it was interesting then, but the same method is being used today for all these fun applications simply because we have big enough GPUs now.
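For the curious, here's roughly what that '80s-era technique looks like stripped to the studs, as a sketch in plain NumPy (the XOR task, layer sizes, and learning rate are my arbitrary illustration choices, not anything canonical). The same forward-pass / backward-pass / update loop, scaled up enormously on modern GPUs, is what runs today.

```python
# A tiny neural network trained with backpropagation on XOR, in plain NumPy.
# Sketch only: task, layer sizes, and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: chain rule, layer by layer (squared-error loss)
    dz2 = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # gradient-descent update
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(p.ravel(), 3))   # converges toward [0, 1, 1, 0]
```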
The other piece is that they've hit on a business model that's structurally good at turning AI advances into things developers can easily use and build businesses on. In other words, they've learned how to make platforms, which let specific advances be useful to many. The developer experience of using AI has come a very long way: now you just take a model that someone else has made and try to turn it into something profitable. That means you have legions more people trying to find the killer app, so more killer apps get made. It's kinda like a specialization thing? Big platforms fund the work of making big models, so developers can focus on finding a way to build a good customer-facing piece on top.
Also, a decade ago I don't know how many people understood just how much better a model trained on ginormous amounts of data would be. I think we all thought it would be good, but not magical. Turns out there's some weird threshold after which this shit is really spooky and cool and good, not just weird (e.g. DeepDream, that reverse-dream thing that made all the images with eyeballs).
Also also, re: what’s the scene like... machine learning peeps are the only tech folks I ever liked hanging out with. They’re smart and interesting and weird, and if you ever need a specific source to talk to, lmk! I still keep in touch with some. Part of it is that it requires really big brain math and stats skills, and also some small changes can lead to enormous improvements, and so you get some interesting types of people. Nice weirdos who are risk-seeking smart buds.
> Am I going to have to (gulp) read LessWrong?
For God's sake, don't. Yudkowsky doesn't know what the hell he's talking about and never has, and the culture he's built up around himself reflects that.
I think the best thing I've read that gets into some of the nitty-gritty of how AI works is Gideon Lewis-Kraus's piece from 2016 in the NYT:
https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html
That predates the latest wave of generative models that have gotten all the buzz, but it gets a lot of the behind-the-scenes stuff right.
My personal feeling is that the tech here is much more than a party trick and will have real consequences, but the economic impact isn't going to be as drastic as the maximalists predict. I think an area that's underdiscussed is what these generative models are going to do to hobbies and the social communities around them. Right now, creative pursuits are sort of a bright spot in the era of loneliness where amateurs can do something fulfilling and find a modest audience or community online. Unfortunately, I think that stuff is going to take a huge hit. (Digital artists have freaked out about this a little bit, but I think it's going to be a real shitshow when it happens for music. And it's basically a certainty that it's gonna happen in the next five years.) I hope I'm wrong! And of course hobbies with no audience can still be quite personally meaningful. But I think it's going to be painful to have so many artistic skills suddenly become anachronistic.
Oh, also, what happens when someone finally unleashes one of these generative models at full power on porn? Jesus christ.
Re: music. Some scattered thoughts.
-I play bluegrass music. It's a musical tradition; the whole point of the music is to play it. I really don't see traditional music ever being automated. Like, what would be the point? It's about as appealing as sending an AI to go experience being in a national park for us.
-As for pop music, I wouldn't be surprised if the "hits" of the future were generated by AI. (Or maybe they already are today--I'm not sure I could tell the difference anyway.) But you'll always need a human to be the narcissistic focal point absorbing adoration from fans. I don't see AI disrupting this, maybe just streamlining it a bit.
-There will always be a market for live performances by humans. Even during the pandemic, there wasn't really a good substitute for live music.
Overall, the people who are in trouble here are the ones writing music for mega pop stars and annoying jingles for carpet stores.
The cynical view of why AI is suddenly hot would be that "crypto" is finally dead and Silicon Valley VCs need something new to hype up. Certainly the generative models producing photorealistic images helped break through to the mainstream too -- everyone loves an eye-catching image.
But I think the real change in recent years is that the tech for training very large models (that is, cloud providers with large GPU instances, PyTorch, etc) finally became accessible to people outside Google. And now they are building lots of interesting things with it.
And yes, "AI" is mostly party tricks and toys. The real applications will be far more mundane. Think thing like: the automated chatbot you have to talk to when you want to cancel your insurance. It might get slightly more convincing and natural-sounding. But it's still just going to be a convoluted interface over a shitty CRM database, with an army of humans tasked with handling all its mistakes. (See also: https://www.theguardian.com/technology/2022/dec/13/becoming-a-chatbot-my-life-as-a-real-estate-ais-human-backup)
Or a more positive but equally mundane example is the Pixel Camera's Magic Eraser feature, which is basically Stable Diffusion at your literal fingertips, except its job is just to edit out the dog shit on the lawn in the background of the photo of your smiling kid.
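Magic Eraser's internals are proprietary, so this is an analogy rather than Google's actual pipeline, but the same erase-by-inpainting trick can be sketched with the open-source diffusers library; the model ID, file names, and prompt below are assumptions for illustration.

```python
# Erase-by-inpainting sketch with the open-source diffusers library.
# NOT Google's Magic Eraser code; model ID, file names, and prompt are
# illustrative assumptions. Requires: pip install torch diffusers pillow
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # a commonly used inpainting model
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("kid_on_lawn.png").convert("RGB")   # hypothetical photo
mask = Image.open("dog_mess_mask.png").convert("RGB")  # white = region to erase

# The model repaints only the masked region to match the prompt and context.
result = pipe(prompt="empty grass lawn", image=image, mask_image=mask).images[0]
result.save("kid_on_lawn_clean.png")
```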
I've researched AI and data for a long time! I have lots of thoughts! But I'm also trying to get a baby to bed! Probably a good start is The Gradient, a very informed but somewhat academic Substack. They have a year-in-review, as good a start as any, and their archives are worth browsing. https://open.substack.com/pub/thegradientpub/p/update-40-ais-year-in-review-and
My two biggest concerns about the AI jamboree are (1) its misuse to create "realistic" audio and video content with the intention of misleading people, to the point where audio and video become useless for recording actual history, and (2) the usurpation of human creativity, resulting in a dumbing down and homogenization of art and music, because AI works a lot cheaper than actual creative people do but lacks the imagination, human experience, and emotion that generate truly great art.
AI is currently hot because AI tools are finally in the hands of consumers in a form that is simple to use and relatively inexpensive, if not free.
Mainstream humanity has been interacting with AI (e.g. Google search, social media feeds, online ads) for nearly two decades, but where companies used to abstract the AI away from the end-user experience, the balance has now shifted significantly, and we are freer to explore how AI can help us.
We are now closer to AI (or at least feel we are). The democratisation and accessibility of AI is everything.
I haven't finished it yet, but Atlas of AI is a great book for AI context.
Re: the "magical," "black box" qualities people have started to ascribe to AI systems, I keep going back to this paper from last year, which proves that all neural networks are reducible, in the end, to decision trees. There's nothing fundamentally mystical going on. https://arxiv.org/abs/2210.05189
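The formal construction is in the paper; here's my own toy restatement of the intuition in NumPy. Each ReLU either fires or doesn't, so every input takes a "branch" through a sequence of threshold tests, and within any one branch the whole network collapses into a plain affine function, i.e. a leaf of a decision tree.

```python
# Toy restatement of the paper's idea: in a ReLU net, the pattern of which
# units fire is a path of threshold "decisions", and inside any one pattern
# the network is a plain affine function (a leaf model). Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)   # hidden layer
w2, b2 = rng.normal(size=4), rng.normal()              # output layer

def forward(x):
    pre = x @ W1 + b1
    gates = (pre > 0).astype(float)        # the "decisions": which ReLUs fire
    return (pre * gates) @ w2 + b2, gates

x = np.array([0.3, -1.2])
y, gates = forward(x)

# Within this branch the network IS affine -- recompute y as a leaf model:
W_eff = (W1 * gates) @ w2                  # effective weights for this region
b_eff = (b1 * gates) @ w2 + b2
print(gates, y, x @ W_eff + b_eff)         # both outputs match
```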
This ChatGPT explainer gets into some of the technical details, like what does this thing run on? https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698