24 Comments

Even before I understood ALL the ways in which cryptocurrencies were awful in the dumbest of ways, I had this watershed moment where I lost all interest in what had, before that, been hyped up to me as the tech of the future: the realization that all it ever was was money. Just speculation. And anything else being touted around it was just reputation laundering.

I had a similar moment with AI, when I understood that the main driver behind our collective interest in it is just getting people fired. That is it. If you can automate a task, you don't have to pay people to do it. That is why every startup needs to be AI-based: it means cheaper operational costs. At the risk of being overly reductive, it struck me really hard that, to a large (or at least definitely non-trivial) degree, getting rid of workers is probably the biggest driver behind AI research. Not to have a "brave new world", just a cheaper old & cowardly one.

My father was an accountant before the invention of Excel and the widespread adoption of calculators (which was not that long ago). He told me that everyone thought they would be put out of work by these inventions, and he was genuinely worried he would be laid off. This, of course, did not happen, and I believe "AI" going forward will be exactly as "disruptive" as calculators and Excel: there will be terrible things, and people who want to do terrible things will have some new tools, but it will not fundamentally change anything about humanity.

One thing I think is never discussed is how we may actually be at something of a high-water mark even for this type of generative AI.

In order to continue to improve, these large models need to continuously ingest data. That’s all well and good while the models are gated inside the Googles and OpenAIs of the world. For the last several years (decades, even), companies have had access to an expanding trove of human-generated content to feed into the models, with the only limitation being the effort required for humans to tag and categorize that data.

Except, once more tools come online for consumer use, more of the content being published will itself have been generated by a model, and that content will in turn be absorbed into newer models. Think of people who put code repos up on GitHub, or who submit answers to Stack Overflow with code created by GitHub Copilot. Or who use GPT-3/4 for SEO hacks that push scammy sites to the top of search engine results. It’s highly likely we will reach a point where the output stops improving and in fact starts to regress, on the garbage-in, garbage-out principle.
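
The regression that comment predicts can be sketched as a toy experiment (purely illustrative, not a simulation of any real model): repeatedly fit a simple model (here just a Gaussian) to some data, then let the next "generation" train only on samples the previous model generated. The spread of the data collapses over generations, a cartoon of the garbage-in, garbage-out loop.

```python
import random
import statistics

def fit_and_resample(data, n):
    """'Train' on the data by fitting a normal distribution to it,
    then publish n fresh samples drawn from the fitted model."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # original human-made data
spreads = [statistics.stdev(data)]
for _ in range(300):
    # Each generation trains only on the previous generation's output,
    # and only a small sample of that output survives to be scraped.
    data = fit_and_resample(data, 10)
    spreads.append(statistics.stdev(data))

print(f"spread of the original data: {spreads[0]:.3f}")
print(f"spread after 300 model-on-model generations: {spreads[-1]:.3f}")
```

Real model-collapse dynamics are more subtle, but the direction of the toy effect is the point: a distribution repeatedly refit to its own output tends to lose its tails.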

I assume you already read this but just in case:

https://dynomight.net/llms/

I haven’t read all the comments, so maybe I’m rehashing another critique, but it’s important to bring up the common-sense knowledge problem: knowledge isn’t just found in a mind (or an AI mind), but in the world, to be experienced.

Being in the world is the most important mode of being. Knowledge shows up in the familiarity of things, and when we learn something about the world, things look different. We as humans don’t just accumulate new facts about the world; the world keeps changing the way it looks for us. I’m not sure AI has solved this.

It’s essentially another question to add to your handful: how is AI actually becoming closer to human, beyond being able to think in a Cartesian way? How about actually being? Without being, it won’t replace humans as well as is desired.

Intelligence can’t be reduced to the symbol manipulation AI uses to represent reality and “learn”; it also needs the unconscious processes that humans rely on.

I may be behind the times, but it’s a good question to ask if AI has solved this yet.

I feel like the distinction missing here is between AI as a concept, and AI as currently exists, via ChatGPT and so forth. By the former, I mean the idea that there's absolutely no reason a sufficiently powerful computer can't do everything a human brain can do, and by extension no reason a MORE powerful computer can't do significantly MORE than a human brain can do. If we ever achieve AI like that, orders of magnitude smarter than any human, it will be genuinely transformative. Think about every scientific discovery humans have made that has made life better and easier, and imagine a machine that could duplicate all of those in an afternoon, and then start making more discoveries. The world will turn upside down, hopefully in a good way, maybe in a bad way. I believe that will exist some day, but the question is when.

Currently existing AI obviously isn't there yet, but the question is whether the current method of neural networks and training algorithms will get us there, just with more processing and larger data sets, or whether it's a dead end and we need a totally different approach to get true superintelligence. I don't know, and neither does anyone else. Some people like to denigrate current AI by saying it's just pattern-matching, which is true on some level, but it's surprisingly difficult to rigorously point out ways in which human intelligence differs from pattern-matching at a high enough degree of abstraction. It's totally possible that the future of AI is a scaled-up version of what currently exists. I think that what currently exists is basically party tricks, but it could scale up into something way bigger really fast.

As for the “why is this big now”: I think it’s partly because we now have the technological means to train models big enough that the magic happens. The basic technique of neural networks with backpropagation was popularized in the 80s (by Rumelhart, Hinton, and Williams, among others); few thought it was interesting then, but the same method is being used today for all these fun applications simply because we have big enough GPUs now.
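
That 80s-era technique is compact enough to demonstrate directly. Here is a minimal sketch (NumPy, all hyperparameters arbitrary) of a two-layer network learning XOR by backpropagation; today's systems run essentially this loop, scaled up by many orders of magnitude.

```python
import numpy as np

# A tiny two-layer network learning XOR via backpropagation: the same
# 1980s-era method, just microscopic next to today's GPU-scale models.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> 8 hidden units
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.1
for _ in range(20000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule applied layer by layer.
    d_out = out - y                      # sigmoid + cross-entropy gradient
    d_W2, d_b2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh derivative
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("learned XOR:", preds.ravel())
```

With these (arbitrary) settings the network typically recovers XOR exactly; the interesting part is that nothing in the loop changes as you add layers and parameters, only the hardware bill.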

The other piece, too, is that they’ve hit on a structurally helpful business model for creating advances in AI that can actually be used by developers easily, and on which anyone can build a business. In other words, they’ve learned how to make platforms, which let specific advances be useful to many. The developer experience of using AI has come a very long way: now you can just take a model that someone else has made and try to turn it into something profitable. That means you have legions more people trying to find the killer app, so more killer apps get made. It’s kinda like a specialization thing? Big platforms fund the work of making big models, so developers can focus on building a good customer-facing piece on top.

> Am I going to have to (gulp) read LessWrong?

For God's sake, don't. Yudkowsky doesn't know what the hell he's talking about and never has, and the culture he's built up around himself reflects that.

I think the best thing I've read that gets into some of the nitty-gritty of how AI works is Gideon Lewis-Kraus's piece from 2016 in the NYT:

https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html

That predates the latest trend of generative models, which have gotten a lot of buzz, but it gets a lot of the behind-the-scenes stuff right.

My personal feeling is that the tech here is much more than a party trick and will have real consequences, but the economic impact isn't going to be as drastic as the maximalists predict. I think an area that's underdiscussed is what these generative models are going to do to hobbies and the social communities around them. Right now, creative pursuits are sort of a bright spot in the era of loneliness where amateurs can do something fulfilling and find a modest audience or community online. Unfortunately, I think that stuff is going to take a huge hit. (Digital artists have freaked out about this a little bit, but I think it's going to be a real shitshow when it happens for music. And it's basically a certainty that it's gonna happen in the next five years.) I hope I'm wrong! And of course hobbies with no audience can still be quite personally meaningful. But I think it's going to be painful to have so many artistic skills suddenly become anachronistic.

Oh, also, what happens when someone finally unleashes one of these generative models at full power on porn? Jesus christ.

The cynical view of why AI is suddenly hot would be that "crypto" is finally dead and Silicon Valley VCs need something new to hype up. Certainly the generative models producing photorealistic images helped break through to the mainstream too -- everyone loves an eye-catching image.

But I think the real change in recent years is that the tech for training very large models (that is, cloud providers with large GPU instances, PyTorch, etc) finally became accessible to people outside Google. And now they are building lots of interesting things with it.

And yes, "AI" is mostly party tricks and toys. The real applications will be far more mundane. Think of things like the automated chatbot you have to talk to when you want to cancel your insurance. It might get slightly more convincing and natural-sounding. But it's still just going to be a convoluted interface over a shitty CRM database, with an army of humans tasked with handling all its mistakes. (See also: https://www.theguardian.com/technology/2022/dec/13/becoming-a-chatbot-my-life-as-a-real-estate-ais-human-backup)
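
That architecture is easy to caricature. A minimal sketch, in which every name, policy ID, and intent is made up: a keyword matcher in front of a stand-in CRM, with everything off-script pushed onto a queue for the human backup.

```python
# Hypothetical sketch of the "convoluted interface over a CRM" pattern:
# a keyword-matching front end, a fake CRM record store, and a human
# fallback queue for everything the bot can't handle.
CRM = {"policy-1234": {"holder": "Pat", "status": "active"}}
HUMAN_QUEUE = []

def chatbot(message: str, policy_id: str) -> str:
    text = message.lower()
    record = CRM.get(policy_id)
    if record is not None and "cancel" in text:
        record["status"] = "pending cancellation"
        return f"Okay {record['holder']}, your cancellation request is filed."
    if record is not None and "status" in text:
        return f"Your policy is {record['status']}."
    # Anything off-script goes to the army of humans behind the curtain.
    HUMAN_QUEUE.append(message)
    return "Let me transfer you to an agent."

print(chatbot("What's my status?", "policy-1234"))
print(chatbot("My dog ate the policy documents", "policy-1234"))
print(len(HUMAN_QUEUE), "message(s) waiting for a human")
```

A language model can make the front end sound less robotic, but the shape of the system (rigid record store in the back, humans catching the misses) stays the same.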

Or a more positive but equally mundane example would be the Pixel Camera Magic Eraser feature, which is basically Stable Diffusion at your literal fingertip, except its job is to just edit out the dog shit on the lawn in the background of the photo of your smiling kid.

I’ve researched AI and data a long time! I have lots of thoughts! But I’m also trying to get a baby to bed! Probably a good start is very-informed-but-somewhat-academic-Substack The Gradient. They have a year in review. As good a start as any. Browse their archives. https://open.substack.com/pub/thegradientpub/p/update-40-ais-year-in-review-and

AI is currently hot because AI tools are finally in the hands of consumers in a form that is simple to use and relatively inexpensive, if not free.

Mainstream humanity has been interacting with AI (e.g. Google search, social media feeds, online ads) for nearly two decades, but companies used to abstract the AI away from the end-user experience. That balance has significantly shifted, and now we are freer to explore how AI can help us.

We are now closer to AI (or at least feel we are). The democratisation and accessibility of AI is everything.

I haven’t finished it yet, but Atlas of AI is a great book for AI context.

Re: the “magical,” “black box” qualities people have started to ascribe to AI systems, I keep going back to this paper from last year that proves all neural networks are reducible, in the end, to decision trees. There’s nothing fundamentally mystical going on there. https://arxiv.org/abs/2210.05189
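
The paper's observation is easy to demonstrate on a toy network: a ReLU layer splits the input space with one binary test per neuron (the tree's branches), and inside each region the network is just an affine map (the leaves). A small sketch with arbitrary random weights:

```python
import numpy as np

# A ReLU network computes a piecewise-linear function, so it can be read as
# a decision tree: branches test which side of each neuron's hinge the input
# falls on; leaves apply a plain affine function for that region.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=3)   # 2 inputs -> 3 ReLU units
W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)   # 3 units -> 1 output

def network(x):
    return (np.maximum(x @ W1 + b1, 0) @ W2 + b2).item()

def as_tree(x):
    # Branch: one binary test per neuron (active or not) picks the leaf.
    pattern = (x @ W1 + b1 > 0).astype(float)
    # Leaf: the affine function the network applies inside this region,
    # obtained by masking out the inactive units.
    W_leaf = (W1 * pattern) @ W2
    b_leaf = (b1 * pattern) @ W2 + b2
    return (x @ W_leaf + b_leaf).item()

for x in rng.normal(size=(5, 2)):
    assert abs(network(x) - as_tree(x)) < 1e-9
print("network and tree agree on all sampled points")
```

The tree view doesn't make the network small (the number of regions can grow exponentially with the number of neurons), which is why the reduction demystifies without trivializing.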

This ChatGPT explainer gets into some of the technical details, like what does this thing run on? https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698

AI, along with audio/video deepfakes, will improve and mimic humans to the point where, as far as trustworthiness goes, we will all go back to the way things were 100 years ago: people trusted only someone they knew, and what was said face to face, and maybe trusted the few other people that their trusted ones trusted.
