15 Comments
Joshua Hughes

What I think is missing from this AI discourse is a dose of reality. The typical workflow at most (if not all) businesses that use any kind of transactional system (ERPs, warehouse management systems, marketing tech) is to get emailed a spreadsheet, filter and sort it, probably override a few numbers, maybe VLOOKUP some data emailed by someone else, then re-format it for whatever other system the data needs to go into, with different names for every field you got out of the previous systems--or maybe you even have to hand-key it in because it takes too long to mass upload and your IT team is too busy. I don't get the impression most of the developers or fanboys and fangirls have worked in a regular office in a long time. It's pretty much the same as it was in the 90s, but now you have to maintain data across even more systems that do not talk to each other, despite what the consultants tell you. AGI will not solve this problem.
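
For anyone who wants it concrete, here is a minimal pandas sketch of the workflow described above; every file name, order ID, and column mapping is hypothetical:

```python
import pandas as pd

# 1. Open the spreadsheet someone emailed you
orders = pd.read_excel("weekly_orders.xlsx")

# 2. Filter and sort it
orders = orders[orders["Status"] == "Open"].sort_values("Ship Date")

# 3. Override a few numbers by hand
orders.loc[orders["Order ID"] == 10452, "Qty"] = 12

# 4. The VLOOKUP step: merge in a price list someone else emailed
prices = pd.read_excel("price_list.xlsx")
orders = orders.merge(prices, on="SKU", how="left")

# 5. Rename every field to match the next system's import template
orders = orders.rename(columns={
    "Order ID": "ORD_NBR",
    "Ship Date": "SHP_DT",
    "Qty": "ORD_QTY",
})
orders.to_csv("wms_upload.csv", index=False)
```

The point isn't that any single step is hard; it's that every pair of systems that don't talk to each other needs its own version of this glue, maintained by hand, forever.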

Deborah Carver

Glad I'm not the only one who has been singing "Scaring the Hoes" recently.

For me it's not the LLM tech itself—we have more than enough LLM tech available to "revolutionize" how a lot of business is done and make people plenty of money as is—but it's the accompanying business practices that need deep examination. Tech for the past 20 years has operated on false advertising, accountability dodges ("we don't know how it works") and, above all, marketing innovations to children to encourage early adoption and habit-forming behaviors that carry through to adulthood. We don't report on the harms of new tech until something catastrophic happens because these patterns now feel inevitable and not preventable. Discounting Adobe Creative Suite for college students turned into "put tablets in every elementary school!" turned into "hey kids! this computer chats with you and helps you make fan art and is also your social guide and best friend!"

The real market-shifting opportunity in today's LLMs (accelerated business process optimization) is not particularly ripe for engaging storytelling or jacking up valuations. It's easier for mass media to highlight the academic and investor "possibilities" of AI because that's relatively simple to source. It'd be helpful for tech media to focus less on the product and more on the long tail of culture surrounding tech adoption, but that's no good for the mass media business model because hey! media is aiming for habit-forming behaviors too. Scaring the hoes remains as lucrative as ever.

Martin

I agree there is more substance to LLMs than to crypto and we shouldn't overgeneralize.

However, it is a very vivid demonstration of the extent to which the tech CEO/VC class and many journalists will pump up the most obvious dire bullshit, and the extent to which the broader management/LinkedIn class will follow along. That's definitely happening today, even though there is some substance.

Your friend Roose's writing about Helium is a prime example: he was either cynically promoting a Ponzi scheme or unbelievably credulous and blinkered, and either way it reflects badly on him and on the NYT. He's embarrassed he was caught out. Is he actually going to do any better? I haven't seen any signs of it.

The Vaping Trauma Surgeon

That Roose quote is interesting. I would have thought the web3 "trauma" for tech journalists had come from recognizing the whole thing as a fraud, using their writing to unmask it, and seeing that... not matter at all.

I also have to say I'm a little bemused by how much people lean on the crypto phenomenon to interpret AI/LLMs. I don't think it's wrong to do some meta-analysis of hype cycles, but (at least to my eyes) five minutes with the actual technology is enough to see that LLMs are "real" in a way crypto never was.

Martin

I'm sure some journalists had that dispiriting experience of trying to be critical and being ignored, but I don't think Roose was one of them.

He seemed perfectly happy to fluff up the most obvious bullshit: https://reutersinstitute.politics.ox.ac.uk/crucial-questions-every-journalist-should-ask-when-covering-crypto.

BetweenAtlanticCoasts

One thing you highlight is the frequent shifts in vibes/consensus. I find it strange how often and how quickly these vibes seem to change 180 degrees (e.g., Europe/America/China/Harris/Trump is doomed; no, they are unbeatable).

It helps put things in perspective. For example, picture an economic collapse this September/October (crashes often seem to happen in the fall), causing widespread anger with Silicon Valley and Republicans. Dems win a large House majority in ‘26, Trump becomes a lame duck, and Congressional power is ascendant over presidential power. Very possible, and quite the 180-degree shift from the as-is.

Buzz Andersen

I think the crypto and AI bubbles have a lot in common. Unlike, say, the advent of the iPhone, both involve abstruse, slightly weird technologies whose underpinnings are difficult enough for the general public to conceptualize that gurus and charlatans are able to make all sorts of absurd claims about them—claims that are simply accepted and repeated by credulous elites with zero skepticism. This gap in understanding creates plenty of opportunities for unscrupulous boosters to flood the zone with overheated, sci-fi-inflected speculation before the cooler heads that should be calling bullshit (mainstream journalists, regulators) can even wrap their heads around what’s going on. The elite consensus that forms in the absence of any apparent pushback allows a narrative to coalesce around technology X as “the inevitable future,” an unstoppable force that must be embraced if one wishes to avoid irrelevance—and woe betide you if you question the increasingly prodigious resources the technology inherently demands! And all of this continues until the frauds are exposed and the trough of disillusionment finally sets in. But, of course, by that time the infrastructure is built, the damage to society is done, the erosion of any kind of regulatory regime has set in, and most of the early boosters have made their money.

Jan Jęcz

You rightfully highlight how fast the pendulum started to swing, as illustrated by Schmidt's pivot. I feel like it ties well with theories such as Anna Kornbluh's "immediacy" or the "incumbent's disadvantage," widely discussed last year in the context of various elections. What I mean is, there seems to be a tacit crisis of patience: change needs to come quickly or it is assumed it will never come; the consumer economy has conditioned people to believe that they can get things as soon as they want them; etc. The rise of GenAI contributed greatly to this, with sales pitches like "why labor over an essay when you can get it done in seconds." But maybe at least some parts of this economy are now falling prey to the same problem, not delivering on their promises quickly enough. I know I'm gesturing toward something vague, but I feel like this is not just a case of investors getting cold feet and companies struggling to manage expectations.

Aryeh Cohen-Wade

Appreciate the neighborhood beer update

Teddy (T.M.) Brown

This is all great, but I can confirm as a former bartender (not at that bar, but generally) that the draft lines at Alibi are petri dishes. Bottles/cans only there.

JMG

We use G Suite at our organization, and I asked Gemini to put up an out-of-office for me last month ahead of some time off. It told me it wasn’t able to do that. Until it can complete functions like that, I’m going to be a little skeptical that it’s the world-changing technology it’s been hyped up to be.
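
For what it's worth, the plumbing for this already exists: Gmail's public API exposes a vacation-responder setting. A minimal sketch of the call an assistant would have to make on your behalf; the OAuth setup (a `creds` object carrying the gmail.settings.basic scope) is assumed and omitted:

```python
from googleapiclient.discovery import build

def set_out_of_office(creds, subject: str, message: str,
                      start_ms: str, end_ms: str) -> None:
    """Turn on Gmail's vacation auto-reply via the API."""
    service = build("gmail", "v1", credentials=creds)
    service.users().settings().updateVacation(
        userId="me",
        body={
            "enableAutoReply": True,
            "responseSubject": subject,
            "responseBodyPlainText": message,
            "restrictToContacts": False,
            "startTime": start_ms,  # epoch milliseconds, as a string
            "endTime": end_ms,
        },
    ).execute()
```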

Hilary

I think the Schmidt piece actually heralds what's coming next in the space and, spoiler alert, it's not that LLMs or the AI field is going to go away.

Schmidt made a really persuasive argument that public perception of the value of "AI" in China vs. the U.S. is almost the polar opposite: companies in China have focused on finding ways to apply the technology to actual use cases, while most of the Valley in the U.S. has been single-mindedly focused on underlying model development as a means of creating "AGI" or "ASI."

The reality is, the chatbots created on top of generalist models are not actual specialized products. They're the epitome of the "jack of all trades, master of none" concept. The closest thing we've seen to real application is actually in the SWE space, where legitimately useful tooling is being developed that speeds up workflows. (Ignore what any Bluesky haters tell you about this: it's not a magic wand, but these tools are actual time-savers for SWEs who care to learn how to use them, and no, that one study of 16 open-source developers does not prove otherwise.)

By contrast, development of "agentic" products in other spaces has been rather slapdash to date -- which is why most of them either deliver unimpressive results, are too costly, or both. My prediction is that this is going to change over the coming years. Now, we may also see some sort of financial turbulence, but they aren't mutually exclusive outcomes.

JMG

This is a great point about how AI is being implemented in the US. It feels like rent-seeking to me - all of these companies are saying "we've embedded AI in our enterprise products and services, and it's going to cost more." Doesn't matter that it doesn't work - it's a way of juicing more revenue out of stable customers.

Hilary

My instinct is that part of the reason for this, other than the myopic focus on AGI/ASI from the frontier model labs in the Valley, is that within the software world there's long been a bit of a divide between ML/data science people and SWE people. The former are often good at processing and analyzing data using Python and other tooling (like R and stats packages). The latter are better at building out full products end-to-end. (Note I'm lumping product managers, technical project managers, and designers in with the engineers.) For the most part, until now, if you wanted to build any kind of AI integration you'd focus on hiring from the first bucket more than the second.

That's fine if all you want is a data-processing pipeline or a fine-tuned model. But just having a pipeline and a non-deterministic model, even one that has been fine-tuned for a specific purpose, isn't enough to build reliable products. And the ML-specific people often don't have experience with the kind of deterministic software engineering that wraps a model into something dependable. In other words, the ML/DS work is necessary but not sufficient.
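
To make that concrete, here is a minimal sketch of the kind of deterministic scaffolding meant here: validate the model's output against a schema, retry on garbage, and fail loudly instead of trusting the raw completion. `call_model` and the ticket schema are hypothetical stand-ins:

```python
import json

REQUIRED_KEYS = {"customer_id", "intent", "priority"}

def call_model(prompt: str) -> str:
    """Stand-in for whatever (fine-tuned) model client is in play."""
    raise NotImplementedError("plug in your model client here")

def classify_ticket(prompt: str, max_retries: int = 3) -> dict:
    """Retry until the model returns schema-conforming JSON."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry, don't trust it
        if isinstance(parsed, dict) and REQUIRED_KEYS <= parsed.keys():
            return parsed
    # A deterministic failure beats passing junk downstream
    raise ValueError(f"no valid model answer in {max_retries} tries")
```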

I think what will happen over the coming years is that more people in the second bucket will become used to not only using AI tooling but also creating it. Smart companies will hire from that group as well as the first bucket to create their actual integrations, and those integrations will work the way people expect when they're asked to pay a premium for a feature.

Kyle Kukshtel

You sort of touch on this, but "AI skepticism posting" has also become a cottage industry in and of itself. Bluesky skeptics and haters are also (maybe subconsciously) seeing that posting negative AI stuff is a great way to farm likes and RTs, which is sort of important work if you're trying to build a new audience on a nascent platform.
