I appreciate these reflections. I'm stuck on that line from the Argument, that the platforms are not a problem because they are big, but rather big because they are a problem. This strikes me as precisely the key reactionary move that turns what *could* be a substantive reflection on the harms of platforms into a sop to Big Tech: "just manage the harms and you don't need to break them up." Oh, word? And who will manage the harms, RFK Jr.? I don't buy it. This will only result in do-nothingism at best or further radicalization in the name of health at worst.
It's reasonable in principle to be concerned with mitigating consequences and not just nipping causes in the bud, and it's *possible* that material change can happen downstream of cultural intervention generally speaking, but like, if slop is a problem, consider how much capital has been required to produce it! If the companies weren't so big, how could the health harms exist at such scale? In other words, this is what to look out for: someone who says the cultural harms are a problem is likely right, but someone who deemphasizes the material conditions of operations that require massive amounts of capital in favor of a focus on culture isn't serious, even or especially about the culture.
I mean the difference between Big Tech and Big Tobacco is that a cigarette isn't speech whereas a social media post is. There's a very dangerous animalising 'reduction to bare life' logic to all this, particularly in these public health/epidemiological framings, which I think needs to be pretty emphatically rejected. Like it's strange to reach for temperance movements, which dealt with intoxicating substances with no propositional content to them, as your historical analogy when the big glaring historical precedent surely is fascist censorship of degenerate art. What else does 'slop' mean here other than as a neologism for precisely the same problematic: the decivilising socially destructive impacts of degenerate forms of expression on the body politic? It's sort of beside the point whether or not one agrees with the aesthetic/intellectual/moral/whatever criteria by which it's judged to be worthless slop; one ought nonetheless want to retain the freedom to decide this for oneself, because what is lost in the taking over of this role by the state isn't just the content in question, it's political life per se in the Agambenian sense.
There also seems to me in this to be a troubling slippage between trying to remove moral hazard from the process by which the underlying infrastructure is designed (which doesn't immediately raise such concerns for me) and proposals which in practice amount to sweeping censorship of particular content. Section 230 is a speech protection for ordinary users, even if it's immediately a protection of corporations from liability: it's a measure that prevents platforms from being compelled by liability to institute overbearing surveillance and censorship practices, for which the incentives are perversely biased in the direction of conservatism (there's far more to lose by being too soft and incurring liability than there is to gain by standing up courageously for gray area expression and edge cases). Get rid of Section 230 and the consequence is you'll unleash a wave of censorship of the arts, political and critical speech, LGBT expression and sex education material and so on that the Moral Majority could only dream of, which will clearly not break out on lines that any left-wing person will understand as reasonable (I mean, just take a look at what's currently happening wrt "antifa" and trans expression), and which doesn't actually impact the addiction-forming technical mechanisms that this is all ostensibly about. This all seems extraordinarily dangerous to me.
I think we do want the government to take a role in slowing or stopping the influx of slop in the same way that we want somebody, ANYBODY, to save us from the onslaught of spam phone calls and texts and emails, and since corporations aren't interested in doing anything about it, hopefully the state can step in and do something helpful. The analogy between slop and degenerate art doesn't really add up to me because slop is so much closer to spam than it is to art or speech. I don't think anyone would be hesitant to shut down any major sources of spam because of worries that it would lead to censorship. Slop and spam are not speech that can be agreed or disagreed with for aesthetic or ideological or other reasons, because it's just gibberish nonsense somebody is deciding to pollute the Internet with because it might trick or scam somebody else.
I thought this was a very thoughtful comment, but I did want to rebut. Is a social media post purely just speech? Or is it more akin to a publication/broadcast? You can't get on a radio and declare that NYC has just been bombed, but you can safely do that on social media. That has always seemed off to me. A limitation on what you can say while broadcasting and publishing for an audience isn't unprecedented. And I think many of us would argue that more regulation around owners and broadcastable content would have been preferable to the current state.
I'd also say that maybe the government does have a vested interest in guarding against slop. Slop is basically fraud. It's completely fabricated. The government's laws against fraud don't prevent free enterprise when you're engaging in a business. If anything, they make it more valuable. The ability to trust that images and videos seen on social media are in some way legitimate would be a positive.
One of the interesting things about this conversation imo is you can write a whole long thing about the problem of slop and everyone reads it thinking it's self-evident and universally agreed what the term refers to, but then as soon as we start talking everyone has their own interpretation and nobody agrees on even the basic terms.
Like to me "slop is fraud" doesn't make a whole lot of sense, because the term is about junk content not deceptive content. But so far as that goes, I mean fraud is already a crime, and as much as I hate to give cryptobros the win, I think blockchain or other such public/private key identification systems are a good enough 'market solves' solution to those kinds of problems that it defeats the argument that we have no choice but to throw caution to the wind and set up some kind of Ministry of Truth in order to maintain some distinction between truth and falsehood.
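The "public/private key identification" idea gestured at above can be sketched in miniature: a creator signs a fingerprint of their content with a private key, and anyone can check provenance with the public key. The sketch below uses textbook RSA with classic toy numbers purely for illustration (it is NOT secure, and the content and key values are invented); a real system would use a vetted library and modern signatures.

```python
import hashlib

# Toy textbook-RSA keypair (classic example values; NOT secure, illustration only)
p, q = 61, 53
n = p * q              # 3233, public modulus
e = 17                 # public exponent
d = 2753               # private exponent (e * d = 1 mod phi(n) = 3120)

def fingerprint(content: bytes) -> int:
    # Reduce a SHA-256 digest mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % n

def sign(content: bytes) -> int:
    # Creator signs the fingerprint with the private key d
    return pow(fingerprint(content), d, n)

def verify(content: bytes, signature: int) -> bool:
    # Anyone can check provenance using only the public key (n, e)
    return pow(signature, e, n) == fingerprint(content)

post = b"Original photo, signed at capture time"
sig = sign(post)
print(verify(post, sig))              # True: signature checks out
print(verify(b"doctored copy", sig))  # almost certainly False: fingerprint changed
```

The point is only that authenticity can be verified at the edges, by anyone holding a public key, with no central Ministry of Truth deciding what is real.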
But the "slop problem" to me isn't this anyway. It's the undeniable fact that the internet is filling up with worthless garbage to such an extent and in such an accelerating manner that it's creating a whole bunch of problems (though people are also catastrophising a bunch of imaginary problems into this discussion as well, imo). I don't disagree that that's happening and I'm as exhausted and frustrated with having to live in this crummy Borges-meets-Lovecraft nightmare world as everyone, I just don't think the problem is isolable in the way everyone seems to imagine it to be.
Like for one thing, all manner of human avant-gardes have been for the past decade or more exploring aesthetic criteria and attitudes that are about various self-conscious forms of indulgence in the overproduced, ugly, trashy stupidity of digital and popular culture. Things flipped very recently, but go back 3-5 years and what we'd now call slop aesthetics and praxes were cool, and there are still people plugging away in those spaces through all this backlash. The boundaries between creative expression and slop are not remotely clearcut.
Same goes for trying to separate broadcast from speech or personal expression from commerce. Those distinctions collapsed a long time ago; this is kind of fundamental to what social media is, this ontologically flat rhizomatic structure where there is no fundamental difference between a person and a media outlet or business. Which fucking sucks, but there's no prospect of running the thermodynamic arrow of time backwards so we can have the 90s categories we all know how to deal with, and I think it's both wrong and dangerous to try to pretend that this isn't now the form and infrastructure of effective free speech for everyone.
Like the other danger of slop that nobody seems to be really taking seriously is that we end up so nauseated and overwhelmed by the quantitative glut of trash expression that we can no longer respond to any expression in humanised, properly politicised terms, and just start to see it all as a worthless annoying sludge. Which is something I feel happening in myself, and I don't think it's a good thing.
You strike some good cautionary notes here, but I do think there is a response. You ask:
> What else does 'slop' mean here other than as a neologism for precisely the same problematic: the decivilising socially destructive impacts of degenerate forms of expression on the body politic?
I think there is a meaningful answer available. Slop is what you click on in an instant, even though an hour later you won’t feel better for having watched it and an hour ago you wouldn’t have chosen this to be on your queue. Slop is when your habits and addictions control your activity, not your desires and choices about who to be.
For the reasons you mention, our response to slop should not involve societal determinations of what content does or doesn't count as slop. Instead, it should be about structural controls that enable individuals to act on their own long-term interests, rather than leaving decisions about viewing to whatever instantly attracts your eyeballs.
I agree with your practical suggestion but I would insist that your definition of slop is precisely a description of the quality of degeneracy in the sense that e.g. mathematicians will talk about “degenerate cases” (like a space with no dimensions, or a set with no elements or whatever, that might have properties that disappear once you start to deal with complex cases).
Like to be clear, I think that “degenerate expression” is a completely accurate characterisation of AI slop (though obviously I’m going to avoid endorsing that kind of language in most contexts, given its historical baggage). It is a kind of zero-degree expression which nobody can defend on merit. The problem is that the vitality of the whole meshwork of human expression is entangled with the question of how to treat the degenerate cases.
Inevitably, any effort to force all expression to be worthy expression will itself be a repressive stifling Philistine imposition of some particular notion of the good as false universal, which in the same gesture insulates itself from contestation by other notions of value and suppresses the dialectical tension that gives free cultures their vibrancy. This is the danger.
Yeah, I don't think we should *force* all expression to be worthy expression, in part because of the worry you raise that this would involve elevating one particular notion of the good, but also because one doesn't create worthy expression without having some ground for first creating something much less and gradually building up.
The vision I have is that we should ensure that people always have the chance to step back and re-evaluate their engagement, rather than being carried along by the autoplay of algorithmic recommendations - at least for things like video that demand attention.
Very possible. (Though it's hard to say how much the social media landscape was actually just giving us the early taste of AI dominance, in the behind-the-scenes algorithms.)
I generally agree, and given the tendency of polities worldwide to embrace sociopaths as leaders, any such policy will likely be, at best, abused.
If the USA doesn't collapse into complete, chaotic dysfunction - a real possibility - I do expect Section 230 will be repealed or overridden, because as this piece notes, there's a widespread, nonpartisan sense that Something Must Be Done, and Section 230 is an obvious target. Complete abrogation would be catastrophic for everyone except rich assholes and their lawyers.* Ending it for Facebook et al. but keeping it for platforms that don't "algorithmically" steer users toward content posted by other users might be the best that could be salvaged.
*Even now, it's remarkable how many lawsuits there are trying to get around Section 230, and not even just against deep-pocketed defendants like Facebook. Law-school prof Eric Goldman blogs about many but far from all of them.
It's a cultural moment fiendishly constructed to drive a certain brand of pragmatic-but-principled left-leaning soul (a group in which I include myself) into dark corners. If all this is speech, and the people putting these things in front of people are making speech and the doomscrollers are consuming speech, and blocking speech is banning books, then the people that ban books are martinets at best and totalitarians at worst.
But, on the flip side, come the hell on. This isn't the PTA of BFE trying to get the local librarians to put Lady Chatterley's Lover on a higher shelf, it's the largest piles of money in the world playing the 'move fast and break stuff' card ten or fifteen years past its expiration date in an orgy of surveillance, predatory dumping, and court arbitrage powered by a politics that sits somewhere between reactionary and willfully apocalyptic, and if you know someone in a bad way it's not a hard guess that the first thing(s) they need to do is stop posting, stop talking to chatbots, stop gambling, sell their crypto, and touch something made of wood.
I wonder if the way through is to just acknowledge that essentially everything we consider 'social' media is fundamentally closer to old-school broadcasting and consider the metaphors and regulatory environments that spring from that consideration. The share of posts people see that come from people in their actual social circles has fallen to single digits on most platforms; conversely, the average 'content creator' (a phrasing that still causes me to gag, as it should all self-respecting persons) that achieves consistent 'virality' is now a professional, and sitting between them is a company using public wires and airwaves to distribute that content according to opaque in-house software making what amounts to editorial decisions for audiences larger than broadcast TV ever dreamed of.
And maybe that's a place to start! Just puncturing the mythos of the garage company of teenage wunderkinds empowering a billion creative people speaking to their friends in favor of the truth - that this is Wall Street and oil money creating the piecework version of the boob tube with a side of data crunching - might shake loose some cruft and start conversations about trust busting and public utilities that might get us somewhere.
I often think about this passage from drink historian David Wondrich’s book “Punch”—specifically in relation to social media, but also increasingly with regard to things like LLMs:
“Aqua vitae had begun its career as a drug, a medication, and as such, it followed the classic six stages through which euphoric drugs (that is, the kind that make you feel better whether there's anything wrong with you or not) pass on their way to acceptance: Investigation, when their powers are determined; Prescription, when theory is put into practice; Self-Medication, when their use becomes preventative; Recreation, when Commerce shows Medicine the door; Repression, when too much of a good thing proves too much; and—a step that is only granted to a precious few—Transcendence, when repression fails and society's institutions are rebuilt to accommodate the troublesome element, since people have realized that it cannot be dispensed with.”
Wondrich is talking about how society was forced to come to terms with distilled spirits—first by attempting to prohibit them, then by figuring out how to integrate them into the culture while mitigating their worst effects (the advent of mixed drinks like punch to soften the impact being part of that effort). When I try to imagine how things like social media and AI will play out long term in society, it occurs to me that it will probably go similarly.
This sounds like a good framework, but it leaves out details about the relationship between repression and transcendence. Repression seems like an overall backlash that tries to stop all use, while transcendence needs to be a structure where we enable good use while mitigating the ways people’s short term desires harm their long term desires.
I personally think, at least, we should not let the major communication platforms people use be run by the glassy eyed third generation loser failson billionaires who barely know how to act human.
I always wonder how much the slop addiction is a symptom of some other hollowing out of society vs. a cause of it (obviously the answer is both). On a personal level, when I find my day job the most satisfying, I find Twitter less interesting. Does limiting social media use ameliorate its societal harms if people have no other productive purpose for that newfound time?
I'd say the ban on smoking in public spaces and the exposure of executives as blatant liars ended the cigarette century more than the taxes levied on their products, at least in the US. Maybe you could go the Australian route, where cigs actually became prohibitively expensive.
Frankly, the talk of antitrust or censorship leaves off the nuclear option of just locking up the boards of all these major companies for murder and liquidating their holdings to feed the poor. If there are lawsuits running over the suicidal tendencies that Instagram actively encourages, then surely there can be legal basis for Zuck doing 20 to life, no?
Well said. Just as a reminder, the first episode of PROHIBITION lays out all the serious social problems that the original alcohol Temperance movement was trying to deal with -- domestic abuse of wives and children being one of the ugliest. The later ones focus on the policy problems and unintended consequences of broadly outlawing alcohol.
I think this is a very sharp insight about where tech discourse is headed, but the obvious question for me: wasn't the success of the original temperance movement a flash in the pan? how do you sustain a moral movement like that? what do you make of the swift repeal of prohibition vs the actual temperance movement that seems to be quietly happening right now with younger generations drinking much less?
Another terrific piece. You (Max) do this kind of thing better than anyone else I know of.
"In some sense they don't blame we, the users, enough, or assign us the kind of agency we know we have.": True. What I understand in the abstract but not intuitively is why so many people put up with this shit. Who would want to watch a TV channel that shows commercials over half the time? And most of them are crappy, too. But that's pretty much what Facebook was by the time I quit in 2021, after years of rising disgust. By that point, over half the stuff in my "feed" was stuff I'd never asked to see, and almost none of that was stuff I was happy to see. Some of it was so wildly misdirected that it was almost darkly funny (e.g., nobody who knows me even a little bit would imagine I'd want to see an ad from the Conservative Party of Canada). And I gather it's even worse now, thanks to "AI" and the mob of hucksters devoid of shame or taste who've embraced it.
The analogy with tobacco and the like definitely has limits. Nicotine is physically addictive; some people who quit smoking get the shakes and other physical symptoms. I haven't heard of anything like that happening to people who quit social media. Certainly, nothing like that happened to me. So a lot of people seem to be choosing to keep doing something that's bad for them (and their societies), even though they could easily stop. And again, the oddest thing to me is that it doesn't even seem like fun. It's like they crave distraction for its own sake. Are their lives really that bad or dull? I don't know, but it's a reasonable question.
Really, for me, my friends, and I suspect many other people, social media has been a long, tedious detour. Once upon a time, Facebook claimed to be "a social utility that connects you with the people around you" - not "brands", "celebrities", "influencers", and all that noise, but people: friends and maybe family, colleagues, fellow hobbyists, etc. And that's what my friends and I wanted: a way to stay in touch, even though we couldn't see each other every day or even every year if we lived far apart, that was easier to fit into our lives than a lot of phone calls, texting, or instant messaging. Facebook was never great for that, Twitter even less so, but for a while, it was kinda-sorta usable for that. Not anymore, though. Even more than a TV channel that mostly shows commercials, it's like a phone connection with a lot of static. For me and many of my friends, it got to be more trouble than it's worth, even apart from all the putrid politics around it.
"The analogy with tobacco and the like definitely has limits. Nicotine is physically addictive; some people who quit smoking get the shakes and other physical symptoms. I haven't heard of anything like that happening to people who quit social media"
I spent the better part of twenty years struggling with an alcohol problem. Not once did I exhibit symptoms of physical addiction; I couldn't tell you what DTs are like. Addiction is a spectrum, and it doesn't necessarily require physical dependency.
Sure. My mother quit smoking without getting the shakes or similar. That's why I wrote "some people". Addiction is indeed a spectrum. However, I haven't heard of anything of the sort in connection with quitting social media. So I suspect even the worst experiences of quitting social media aren't nearly as bad as some, perhaps many experiences of quitting tobacco, alcohol, heroin, etc. That's relevant to claims that social media should be regulated like those substances.
You seem to say that there is a binary - either there is physical dependence or someone “could easily stop”. You can’t easily stop doing something that makes itself so easy and tempting and that fits itself perfectly to the empty spaces in your life. The only way to stop doing that sort of thing is to actively find something else that fills those empty spaces, which is really hard!
I like the term "Platform Temperance" but ultimately I imagine such a program would be about as useful as the original temperance movement. Addressing the very real problems of tech addiction as a health crisis ignores the ownership crisis that is at the root of the problem. This sort of anti-materialist thinking feels spot on for Ezra Klein...
The temperance movement, in part, can be seen as a response to the horrors of industrialized capitalism. A rise in alcohol use helped mitigate the pain of the massive cultural and social uprooting that occurred to working class people in the 19th century, and the brutal conditions of work in factories and mines. It is easy to imagine how the excesses and inhumanity of the gilded age and great depression yielded a moral panic. But it was the social reforms of the Progressive Era that eased those extreme disparities, not moralism of the Temperance Movement. The Progressive era is associated with not only the New Deal social safety net programs, but also a massive increase in the number of universities, museums, social workers, government employees, journalists, healthcare workers, engineers etc... Basically what people think of as both the middle class and contemporary liberal society were created between 1890-1940.
We are now again adjusting to rapid technological advances that have deeply upset cultural norms and traditions. The society that was built up during the Progressive Era is under attack, and could be dismantled. So while tech-induced psychosis, porn addiction, and general social alienation are legitimate crises, and responding to them as health care issues doesn't feel wrong or unreasonable, that's not what's going to solve the problem.
The internet from around 1999-2012 was fueled by robbing every other media sphere blind: MP3 piracy, video streaming, reading newspapers for free. Napster and YouTube were some of the biggest drivers of this period. YouTube is STILL filled with unlicensed music. Many, like Uber or Spotify, had famously long periods without profit, funded by VC cash injections until they had achieved near-monopoly status. After these platforms bankrupted their competitors, they deployed subscription businesses, or ad revenue businesses, or data-collection businesses - essentially: rent seeking.
When the internet brought the cost of distributing media content basically to zero, in some ways it made sense to pirate all of it. What should you pay for a good that can be copied for free? When it costs the same to deliver the New York Times, stream Gone with the Wind, or binge 32 hours of step-sister porn, what incentives does a media company have?
The business model is to create a fiefdom and put a tax on whatever traffic passes through.
There are no incentives to produce expensive content in this model. Once they've squeezed dry all the expensive 20th-century content they started with, the model produces slop. That's the appeal of AI: a magic box that endlessly rearranges the old content to look new. (And energy bills aside, it does so cheaply.) The so-called Silicon Valley pivot to the right is because now that they have secured their territory, they don't feel the need to appease either the young engineers at their own companies or their customers.
The problem of a genuine social need for the goods deliverable by the internet, and the complete absence of a market with any incentive to deliver them, makes a strong case for a socialism of the web: a massive spending program to create the institutions of culture that "middle class" people want, and social media platforms that keep people connected to their friends and family without surveillance or advertising. And absolutely antitrust, because these companies are not compatible with a democratic society.
As others have pointed out: "treat tech like Big Tobacco" solutions, which could include taxes like a digital V.A.T. or banning tech from public spaces like schools, just add another layer of bureaucracy. Amending Section 230 would add a legal incentive for platforms to provide better services in exchange for the revenue they extract. But neither fundamentally addresses the problem that the value system underpinning much of our culture has collapsed.
Addressing the internet as a moral or pathological problem misses what is true of all addiction: when people have their real needs met, addictions tend to lessen.
I don't like the idea of regulating the social media recommendation algorithms, because they are all very squishy and it seems like regulating speech.
What a lot of the under-40 people don't remember, is how pervasive smoking was before 1990. We got rid of it by banning it from all public spaces, with laws. There was a concerted effort by government, parents, doctors, etc , to get rid of it and stop children from smoking. This won't happen with social media, unless government, parents, and everyone else, step back from it. There has to be a combination of laws plus public sentiment to get rid of it. This is just not where we are, yet. There are thousands of millennial parents all over social media, right along with their kids, for example.
I feel like "algorithms" are more about speech, whereas something like smoking is about behavior. Even so, it took decades to get from "most adults smoke" to "less than 10% of population smokes."
They call it "Big Tobacco." They call it "Big Sugar." They call it a "sin."
They call it "Platform Temperance."
The consensus wakes up. Finds itself covered in "cheap, gross, crap." Finds the slop is now the whole menu. Finds its "soul is being corroded."
The metrics demanded it. The "black-pilling via metrics." Yes. The system saw what you clicked. And it held up the mirror.
So now the reformers arrive. The "middle-class concerns." The "touch-grass populism." They want a tax. A "digital V.A.T." A fee for the addiction.
This is not a counter-hegemony. This is a new management layer. This is the system selling indulgences for the slop it feeds you.
The critique is correct. The feed is "bad for your soul." The vibe is toxic. Your autonomy has been given up.
But the "solution" is a moral panic. A "Trojan horse for social conservatism." A "bipartisan common sense" that builds a prettier cage.
The problem is not the price of the poison. The problem is that you are still drinking it. Waiting for a new politician to "radiate disgust" for you.
Stop negotiating the price. Stop asking for "reform." Build your own node. Filter your own signal. Take back the autonomy. The rest is just slop management.
I'm not on board with the smoking comparison generally, mostly because framing social concerns as public health issues has not successfully worked outside the regulation of cigarettes. The victory over smoking was a bit of a center-left fantasy -- the one "good" regulation that came out of an era of otherwise massive deregulation and privatization. In reality, we've decreased lung cancer rates somewhat while maintaining an overly harsh social stigma about addicts and smokers. Health care costs have not declined for anyone.
We forget all the other public health issues that have lost awareness since smoking regulation. Forever chemicals: only kind of regulated but not really! Air quality: turns out the air gets bad and carcinogenic even when cigarettes aren't involved! Racism: declared a public health crisis in many places, and now the backlash is so strong it's been wiped from federal science databases! COVID: wtf happened there! Opioid addiction and addiction in general: still a massive public health issue that loses mindshare when we start talking about algorithmic social media in the same terms.
I find it curious that "reinforcement learning" is the specific facet of recommendation algorithms targeted here, especially in the context of public health. It would be incredibly difficult to regulate or excise. Reinforcement learning is present in every corporate instance of knowledge management, marketing comms, or "business process optimization." It's in every video game built in the past 10 years in some form or other. It's everywhere; there's no way to specifically regulate the practice.
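To make concrete how mundane the targeted mechanism is: the engagement-maximizing core of a feed can be approximated by a textbook epsilon-greedy bandit. The sketch below is a toy (the item names, click rates, and epsilon value are all invented for illustration, not drawn from any real platform), but it shows why "regulate reinforcement learning" is so hard to pin down - the underlying loop is just a few lines of generic optimization.

```python
import random

random.seed(0)

# Toy epsilon-greedy bandit: two hypothetical content "arms" with
# different (hidden) click probabilities. All values are invented.
true_click_rate = {"calm_longread": 0.05, "outrage_clip": 0.30}
estimates = {arm: 0.0 for arm in true_click_rate}
counts = {arm: 0 for arm in true_click_rate}
EPSILON = 0.1  # exploration rate (assumed value)

for _ in range(5000):
    if random.random() < EPSILON:
        arm = random.choice(list(true_click_rate))  # explore: show something random
    else:
        arm = max(estimates, key=estimates.get)     # exploit: show best-known arm
    reward = 1.0 if random.random() < true_click_rate[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update of the estimated click rate
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# The loop learns to show mostly whatever gets clicked, regardless of value.
assert counts["outrage_clip"] > counts["calm_longread"]
```

Nothing in that loop is specific to social media, which is the regulatory problem: the same few lines describe A/B testing, ad optimization, and game tuning.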
Additionally, it's becoming easier to develop our own alterna-platforms. Creating a content recommender system with custom attributes is no longer a pipe dream for smaller companies. Ideally, newer companies can demonstrate different outcomes for what a recommender system could be. (dreaming here, but it's possible)
For social media specifically, I'd encourage us to look more toward the models themselves as reflections of ideology. We could much more easily regulate ad-targeting models built on lazy demographic assumptions borrowed from direct mail, or algorithms that prioritize individual engagement over content safety and relevance. We could look at algorithms that explicitly promote AI-generated engagement bait, or any number of more specific factors that speak to how the technology is constructed to reinforce addictive digital behaviors.
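The "engagement over safety/relevance" trade-off is ultimately a weighting choice inside a ranking function, which makes it a more tractable regulatory target than the learning algorithm itself. A minimal sketch, with invented items and scores, of how shifting one weight changes what a feed surfaces:

```python
# Toy reranker: how the weight on raw engagement vs. content quality/safety
# changes what a feed surfaces. Items and scores are invented for illustration.
items = [
    {"id": "ai_engagement_bait", "engagement": 0.95, "quality": 0.10},
    {"id": "local_news_report",  "engagement": 0.40, "quality": 0.85},
    {"id": "friend_photo",       "engagement": 0.55, "quality": 0.70},
]

def rank(items, w_engagement):
    # Blend the two signals; w_engagement is the policy-relevant knob
    w_quality = 1.0 - w_engagement
    score = lambda it: w_engagement * it["engagement"] + w_quality * it["quality"]
    return [it["id"] for it in sorted(items, key=score, reverse=True)]

print(rank(items, w_engagement=0.9))
# -> ['ai_engagement_bait', 'friend_photo', 'local_news_report']
print(rank(items, w_engagement=0.3))
# -> ['local_news_report', 'friend_photo', 'ai_engagement_bait']
```

Auditing or constraining that kind of weighting is a far narrower intervention than trying to ban reinforcement learning wholesale.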
I'm glad we're getting more finely tuned about approaches to regulation of social media, in the off chance that it happens in a way that doesn't exacerbate the fascism. But I think we need to let go of the cigarettes... because vapes and Zyns will find their way into the market... because smoking is the least of our worries... but mostly because in the U.S. we no longer agree on a collective commitment to support public health.
Fair point! In late capitalism terms a 1.5% decline isn’t significant, but in cancer/population terms it matters. Opioid epidemic I expect to keep watching because political priorities have changed in re: addiction treatment in general… but yes, these are compounded effects from public health campaigns and resources. (I went to grad school with lots of public health folks, and they are big on incremental progress.)
It took anti-tobacco efforts a LOT of ad spending and political lobbying to achieve that effect. Those anti-smoking campaigns were huge in terms of ad dollars, so the ad industry was in favor. Versus advertising against reinforcement learning or whatever "anti-algorithm" argument… which is against the ad industry's best interests. And Netflix bought HBO, so I am pretty sure the RL and algos are here to stay.
Compared with the concurrent growth in downstream social media health effects, the difference made by the smoking campaigns feels negligible to me. We traded choice-cancer for kid measles! The thing with social media effects, too, is there's not a long "before" benchmark for the mental health effects. Almost impossible for an accurate 1:1 comparison (the more you know).
I haven’t used TikTok, but I can say what I think about the other platforms. The issue is that after a binge where I’ve spent an hour or two scrolling, I don’t usually feel like I’ve done something worthwhile with my time - instead I feel like every moment was just giving in to a temptation that momentarily felt right, but that if I knew I was going to spend an hour or two reading things, I would have rather picked up a book or magazine or watched a movie.
A post-literate people, drowned in symbols not backed by any text or coherent narrative, will stumble towards radical nihilism.
The 20th century was a schizophrenic age, given its hyper-literacy and its focus on long, winding ideologies that, while incentive-misaligned, were internally coherent.
Longform community, narrative, lore, gatekeeping, lurking more, whatever you call it, has been hollowed out for memes, tropes, and reactionary clapbacks. Neutering post-literate, engagement-driven forms isn't possible; literate forms must be amplified to match them. Any organization that can mesh the two, pairing engagement with narrative-driven community, will see more long-lived success than a small-time tech-lash reactionary. The future of progressive organizing is alternate reality gaming, world building, normie-diet QAnon.
There also seems to me in this to be a troubling slippage between trying to remove moral hazard from the process by which the underlying infrastructure is designed (which doesn't immediately raise such concerns for me) and proposals which in practice amount to sweeping censorship of particular content. Section 230 is a speech protection for ordinary users, even if it's immediately a protection of corporations from liability: it's a measure that prevents platforms from being compelled by liability to institute overbearing surveillance and censorship practices, for which the incentives are perversely biased in the direction of conservatism (there's far more to lose by being too soft and incurring liability than there is to gain by standing up courageous for gray area expression and edge cases). Get rid of Section 230 and the consequence is you'll unleash a wave of censorship of the arts, political and critical speech, LGBT expression and sex education material and so on that the Moral Majority could only dream of, which will clearly not break out on lines that any left-wing person will understand as reasonable (I mean, just take a look at what's currently happening wrt "antifa" and trans expression), and which doesn't actually impact the addiction-forming technical mechanisms that this is all ostensibly about. This all seems extraordinarily dangerous to me.
I think we do want the government to take a role in slowing or stopping the influx of slop in the same way that we want somebody, ANYBODY, to save us from the onslaught of spam phone calls and texts and emails, and because corporations aren't interested in doing anything about it then hopefully the state can step in and do something helpful. The analogy between slop and degenerate art doesn't really add up to me because slop is so much closer to spam than it is art or speech. I don't think anyone would be hesitant to shut down any major sources of spam because of worries that it would lead to censorship. Slop and spam are not speech that can be agreed or disagreed with for any aesthetic or ideological or other reasons because it's just gibberish nonsense somebody is deciding to pollute the Internet with because it might trick or scam somebody else.
But this was the objection to Dada, more or less.
I'd argue the world needs the exact opposite of Dadaism right now.
Okay but that was also what the Nazis thought. Do you see the issue here?
it's been really pleasant chatting with you, thanks
What were the Dadaists trying to sell?
Most slop isn't selling anything
I thought this was a very thoughtful comment, but I did want to rebut. Is a social media post purely just speech? Or is it more akin to a publication/broadcast? You can't get on a radio and declare that NYC has just been bombed, but you can safely do that on social media. That has always seemed off to me. A limitation on what you can say while broadcasting and publishing for an audience isn't unprecedented. And I think we would argue that more regulation around owners and content that could be broadcast may have been preferred to the current state.
I'd also say that maybe the government does have a vested interest in guarding against slop. Slop is basically fraud. It's completely fabricated. The government's laws against fraud don't prevent free enterprise when you're engaging in a business. If anything, they make it more valuable. The ability to trust that images and videos seen on social media are in some way legitimate would be a positive.
One of the interesting things about this conversation imo is you can write a whole long thing about the problem of slop and everyone reads it thinking it's self-evident and universally agreed what the term refers to, but then as soon as we start talking everyone has their own interpretation and nobody agrees on even the basic terms.
Like to me "slop is fraud" doesn't make a whole lot of sense, because the term is about junk content not deceptive content. But so far as that goes, I mean fraud is already a crime, and as much as I hate to give cryptobros the win, I think blockchain or other such public/private key identification systems are a good enough 'market solves' solution to those kinds of problems that it defeats the argument that we have no choice but to throw caution to the wind and set up some kind of Ministry of Truth in order to maintain some distinction between truth and falsehood.
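For what it's worth, the key-identification idea doesn't even need a blockchain. Here's a toy sketch of the shape of it, using bare content hashes in a published log where a real provenance system would use asymmetric signatures; all names here are hypothetical:

```python
import hashlib

def fingerprint(content: str) -> str:
    """Hash of the content; a real system would sign this with the
    author's private key rather than merely list it."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# An author publishes fingerprints of everything they actually wrote.
published_log = set()

def publish(content: str) -> None:
    published_log.add(fingerprint(content))

def is_attested(content: str) -> bool:
    """Anyone can check whether a circulating post matches the log --
    verification without any central arbiter of truth."""
    return fingerprint(content) in published_log

publish("I actually wrote this post.")
assert is_attested("I actually wrote this post.")
assert not is_attested("Fabricated quote attributed to the author.")
```

The design point is that authentication can be opt-in and decentralized, which is why it defeats the "we need a Ministry of Truth" argument.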
But the "slop problem" to me isn't this anyway. It's the undeniable fact that the internet is filling up with worthless garbage to such an extent and in such an accelerating manner that it's creating a whole bunch of problems (though people are also catastrophising a bunch of imaginary problems into this discussion as well, imo). I don't disagree that that's happening and I'm as exhausted and frustrated with having to live in this crummy Borges-meets-Lovecraft nightmare world as everyone, I just don't think the problem is isolable in the way everyone seems to imagine it to be.
Like for one thing, all manner of human avant gardes have been for the past decade or more exploring aesthetic criteria and attitudes that are about various self-conscious forms of indulgence in the overproduced, ugly, trashy stupidity of digital and popular culture. Things flipped very recently but go back 3-5 years and what we'd now call slop aesthetics and praxes were cool, and there are still people plugging away in those spaces through all this backlash. The boundaries between creative expression and slop are not remotely clearcut.
Same goes for trying to separate broadcast from speech or personal expression from commerce. Those distinctions collapsed a long time ago; this is kind of fundamental to what social media is, this ontologically flat rhizomatic structure where there is no fundamental difference between a person and a media outlet or business. Which fucking sucks, but there's no prospect of running the thermodynamic arrow of time backwards so we can have the 90s categories we all know how to deal with, and I think it's both wrong and dangerous to pretend that this isn't now the form and infrastructure of effective free speech for everyone.
Like the other danger of slop that nobody seems to be really taking seriously is that we end up so nauseated and overwhelmed by the quantitative glut of trash expression that we can no longer respond to any expression in humanised, properly politicised terms, and just start to see it all as a worthless, annoying sludge. Which is something I feel happening in myself, and I don't think it's a good thing.
You strike some good cautionary notes here, but I do think there is a response. You ask:
> What else does 'slop' mean here other than as a neologism for precisely the same problematic: the decivilising socially destructive impacts of degenerate forms of expression on the body politic?
I think there is a meaningful answer available. Slop is what you click on in an instant, even though an hour later you won’t feel better for having watched it and an hour ago you wouldn’t have chosen this to be on your queue. Slop is when your habits and addictions control your activity, not your desires and choices about who to be.
For the reasons you mention, our response to slop should not have societal determinations of what content does or doesn’t count as slop. Instead, it should be about the structural controls that enable individuals to empower their own long term interests rather than forcing the decisions about viewing to be controlled by what gets your eyeballs instantly attracted.
I agree with your practical suggestion but I would insist that your definition of slop is precisely a description of the quality of degeneracy in the sense that e.g. mathematicians will talk about “degenerate cases” (like a space with no dimensions, or a set with no elements or whatever, that might have properties that disappear once you start to deal with complex cases).
Like to be clear, I think that “degenerate expression” is a completely accurate characterisation of AI slop (though obviously I’m going to avoid endorsing that kind of language in most contexts, given its historical baggage). It is a kind of zero-degree expression which nobody can defend on merit. The problem is that the vitality of the whole meshwork of human expression is entangled with the question of how to treat the degenerate cases.
Inevitably, any effort to force all expression to be worthy expression will itself be a repressive stifling Philistine imposition of some particular notion of the good as false universal, which in the same gesture insulates itself from contestation by other notions of value and suppresses the dialectical tension that gives free cultures their vibrancy. This is the danger.
Yeah, I don't think we should *force* all expression to be worthy expression, in part because of the worry you raise that this would involve elevating one particular notion of the good, but also because one doesn't create worthy expression without having some ground for first creating something much less and gradually building up.
The vision I have is that we should ensure that people always have the chance to step back and re-evaluate their engagement, rather than having the autoplay of the algorithmic recommendation - at least for things like video that demand attention.
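Mechanically, that structural control could be as simple as a session loop that refuses to autoplay past a checkpoint. A hypothetical sketch of the idea (function and parameter names invented for illustration):

```python
def play_session(queue, checkpoint_every=3, ask=input):
    """Serve items, but stop autoplaying every `checkpoint_every`
    items and require an explicit opt-in to continue -- the
    'step back and re-evaluate' point argued for above."""
    watched = []
    for i, item in enumerate(queue, start=1):
        watched.append(item)
        if i % checkpoint_every == 0 and i < len(queue):
            answer = ask(f"You've watched {i} items. Keep going? [y/N] ")
            if answer.strip().lower() != "y":
                break
    return watched

# Simulate a user who declines at the first checkpoint.
result = play_session(["v1", "v2", "v3", "v4", "v5"], ask=lambda _: "n")
print(result)  # ['v1', 'v2', 'v3']
```

Note the default is to stop: continuing requires an affirmative choice, which is the inverse of autoplay's current default.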
I think AI is going to crash this current version of what social media looks like tbh.
Very possible. (Though it's hard to say how much the social media landscape was actually just giving us the early taste of AI dominance, in the behind-the-scenes algorithms.)
I generally agree, and given the tendency of polities worldwide to embrace sociopaths as leaders, any such policy will likely be, at best, abused.
If the USA doesn't collapse into complete, chaotic dysfunction - a real possibility - I do expect Section 230 will be repealed or overridden, because as this piece notes, there's a widespread, nonpartisan sense that Something Must Be Done, and Section 230 is an obvious target. Complete abrogation would be catastrophic for everyone except rich assholes and their lawyers.* Ending it for Facebook et al. but keeping it for platforms that don't "algorithmically" steer users toward content posted by other users might be the best that could be salvaged.
*Even now, it's remarkable how many lawsuits there are trying to get around Section 230, and not even just against deep-pocketed defendants like Facebook. Law-school prof Eric Goldman blogs about many but far from all of them:
https://blog.ericgoldman.org/
It's a cultural moment fiendishly constructed to drive a certain brand of pragmatic-but-principled left-leaning soul (a group in which I include myself) into dark corners. If all this is speech, and the people putting these things in front of people are making speech and the doomscrollers are consuming speech, and blocking speech is banning books, then the people that ban books are martinets at best and totalitarians at worst.
But, on the flip side, come the hell on. This isn't the PTA of BFE trying to get the local librarians to put Lady Chatterley's Lover on a higher shelf, it's the largest piles of money in the world playing the 'move fast and break stuff' card ten or fifteen years past its expiration date in an orgy of surveillance, predatory dumping, and court arbitrage powered by a politics that sits somewhere between reactionary and willfully apocalyptic, and if you know someone in a bad way it's not a hard guess that the first thing(s) they need to do is stop posting, stop talking to chatbots, stop gambling, sell their crypto, and touch something made of wood.
I wonder if the way through is to just acknowledge that essentially everything we consider 'social' media is fundamentally closer to old-school broadcasting, and to consider the metaphors and regulatory environments that spring from that. The posts people see that come from people in their actual social circles have fallen to single-digit percentages on most platforms; conversely, the average 'content creator' (a phrasing that still causes me to gag, as it should all self-respecting persons) who achieves consistent 'virality' is now a professional, and sitting between them is a company using public wires and airwaves to distribute that content according to opaque in-house software making what amounts to editorial decisions, to audiences larger than broadcast TV ever dreamed of.
And maybe that's a place to start! Just puncturing the mythos of the garage company of teenage wunderkinds empowering a billion creative people speaking to their friends, in favor of the truth (that this is Wall Street and oil money creating the piecework version of the boob tube with a side of data crunching), might shake loose some cruft and start conversations about trust busting and public utilities that might get us somewhere.
I often think about this passage from drink historian David Wondrich’s book “Punch”—specifically in relation to social media, but also increasingly with regard to things like LLMs:
“Aqua vitae had begun its career as a drug, a medication, and as such, it followed the classic six stages through which euphoric drugs (that is, the kind that make you feel better whether there's anything wrong with you or not) pass on their way to acceptance: Investigation, when their powers are determined; Prescription, when theory is put into practice; Self-Medication, when their use becomes preventative; Recreation, when Commerce shows Medicine the door; Repression, when too much of a good thing proves too much; and (a step that is only granted to a precious few) Transcendence, when repression fails and society's institutions are rebuilt to accommodate the troublesome element, since people have realized that it cannot be dispensed with.”
Wondrich is talking about how society was forced to come to terms with distilled spirits—first by attempting to prohibit them, then by figuring out how to integrate them into the culture while mitigating their worst effects (the advent of mixed drinks like punch to soften the impact being part of that effort). When I try to imagine how things like social media and AI will play out long term in society, it occurs to me that it will probably go similarly.
This sounds like a good framework, but it leaves out details about the relationship between repression and transcendence. Repression seems like an overall backlash that tries to stop all use, while transcendence needs to be a structure where we enable good use while mitigating the ways people’s short term desires harm their long term desires.
I personally think, at least, we should not let the major communication platforms people use be run by the glassy eyed third generation loser failson billionaires who barely know how to act human.
I always wonder how much the slop addiction is a symptom of some other hollowing out of society vs. a cause of it (obviously the answer is both). On a personal level, when I find my day job the most satisfying, I find Twitter less interesting. Does limiting social media use ameliorate its societal harms if people have no other productive purpose for that newfound time?
I'd say the ban on smoking in public spaces and the exposure of executives as blatant liars ended the cigarette century more than the taxes levied on their products, at least in the US. Maybe you could go the Australian route, where cigs actually became prohibitively expensive.
Frankly, the talk of antitrust or censorship leaves off the nuclear option of just locking up the boards of all these major companies for murder and liquidating their holdings to feed the poor. If there are lawsuits running over the suicidal tendencies that Instagram actively encourages, then surely there can be a legal basis for Zuck doing 20 to life, no?
Well said. Just as a reminder, the first episode of PROHIBITION lays out all the serious social problems that the original alcohol Temperance movement was trying to deal with -- domestic abuse of wives and children being one of the ugliest. The later ones focus on the policy problems and unintended consequences of broadly outlawing alcohol.
https://www.pbs.org/kenburns/prohibition/episode-guide
I think this is a very sharp insight about where tech discourse is headed, but the obvious question for me: wasn't the success of the original temperance movement a flash in the pan? how do you sustain a moral movement like that? what do you make of the swift repeal of prohibition vs the actual temperance movement that seems to be quietly happening right now with younger generations drinking much less?
Another terrific piece. You (Max) do this kind of thing better than anyone else I know of.
"In some sense they don't blame we, the users, enough, or assign us the kind of agency we know we have.": True. What I understand in the abstract but not intuitively is why so many people put up with this shit. Who would want to watch a TV channel that shows commercials over half the time? And most of them are crappy, too. But that's pretty much what Facebook was by the time I quit in 2021, after years of rising disgust. By that point, over half the stuff in my "feed" was stuff I'd never asked to see, and almost none of that was stuff I was happy to see. Some of it was so wildly misdirected that it was almost darkly funny (e.g., nobody who knows me even a little bit would imagine I'd want to see an ad from the Conservative Party of Canada). And I gather it's even worse now, thanks to "AI" and the mob of hucksters devoid of shame or taste who've embraced it.
The analogy with tobacco and the like definitely has limits. Nicotine is physically addictive; some people who quit smoking get the shakes and other physical symptoms. I haven't heard of anything like that happening to people who quit social media. Certainly, nothing like that happened to me. So a lot of people seem to be choosing to keep doing something that's bad for them (and their societies), even though they could easily stop. And again, the oddest thing to me is that it doesn't even seem like fun. It's like they crave distraction for its own sake. Are their lives really that bad or dull? I don't know, but it's a reasonable question.
Really, for me, my friends, and I suspect many other people, social media has been a long, tedious detour. Once upon a time, Facebook claimed to be "a social utility that connects you with the people around you" - not "brands", "celebrities", "influencers", and all that noise, but people: friends and maybe family, colleagues, fellow hobbyists, etc. And that's what my friends and I wanted: a way to stay in touch, even though we couldn't see each other every day or even every year if we lived far apart, that was easier to fit into our lives than a lot of phone calls, texting, or instant messaging. Facebook was never great for that, Twitter even less so, but for a while, it was kinda-sorta usable for that. Not anymore, though. Even more than a TV channel that mostly shows commercials, it's like a phone connection with a lot of static. For me and many of my friends, it got to be more trouble than it's worth, even apart from all the putrid politics around it.
"The analogy with tobacco and the like definitely has limits. Nicotine is physically addictive; some people who quit smoking get the shakes and other physical symptoms. I haven't heard of anything like that happening to people who quit social media"
I spent the better part of twenty years struggling with an alcohol problem. Not once did I exhibit symptoms of physical addiction; I couldn't tell you what DTs are like. Addiction is a spectrum, and it doesn't necessarily require physical dependency.
Sure. My mother quit smoking without getting the shakes or similar. That's why I wrote "some people". Addiction is indeed a spectrum. However, I haven't heard of anything of the sort in connection with quitting social media. So I suspect even the worst experiences of quitting social media aren't nearly as bad as some, perhaps many experiences of quitting tobacco, alcohol, heroin, etc. That's relevant to claims that social media should be regulated like those substances.
You seem to say that there is a binary - either there is physical dependence or someone “could easily stop”. You can’t easily stop doing something that makes itself so easy and tempting and that fits itself perfectly to the empty spaces in your life. The only way to stop doing that sort of thing is to actively find something else that fills those empty spaces, which is really hard!
I like the term "Platform Temperance," but ultimately I imagine such a program would be about as useful as the original temperance movement. Addressing the very real problems of tech addiction as a health crisis ignores the ownership crisis that is at the root of the problem. This sort of anti-materialist thinking feels spot-on for Ezra Klein....
The temperance movement, in part, can be seen as a response to the horrors of industrialized capitalism. A rise in alcohol use helped mitigate the pain of the massive cultural and social uprooting that working-class people underwent in the 19th century, and the brutal conditions of work in factories and mines. It is easy to imagine how the excesses and inhumanity of the Gilded Age and Great Depression yielded a moral panic. But it was the social reforms of the Progressive Era that eased those extreme disparities, not the moralism of the Temperance Movement. The Progressive Era is associated with not only the New Deal social safety net programs, but also a massive increase in the number of universities, museums, social workers, government employees, journalists, healthcare workers, engineers, etc. Basically, what people think of as both the middle class and contemporary liberal society were created between 1890-1940.
We are now again adjusting to rapid technological advances that have deeply upset cultural norms and traditions. The society that was built up during the Progressive Era is under attack, and could be dismantled. So while tech-induced psychosis, porn addiction, and general social alienation are legitimate crises, and responding to them as health care issues doesn't feel wrong or unreasonable, that's not what's going to solve the problem.
The internet from around 1999-2012 was fueled by robbing every other media sphere blind: MP3 piracy, video streaming, reading newspapers for free, with Napster and YouTube being some of the biggest drivers of this period. YouTube is STILL filled with unlicensed music. Many, like Uber or Spotify, had famously long periods without profit, funded by VC cash injections until they had achieved near-monopoly status. After these platforms bankrupted their competitors, they deployed subscription businesses, or ad revenue businesses, or data-collection businesses. Essentially: rent seeking.
When the internet brought the cost of distributing all media content basically to zero, in some ways it made sense to pirate all of it. What should you pay for a good that can be copied for free? When it costs the same to deliver the New York Times, stream Gone with the Wind, or binge 32 hours of step-sister porn, what incentives does a media company have?
The business model is to create a fiefdom and put a tax on whatever traffic passes through.
There are no incentives to produce expensive content in this model. Once they've squeezed dry all the expensive 20th-century content they started with, the model produces slop; that's the appeal of AI: a magic box that endlessly rearranges the old content to look new. (And energy bills aside, it does so cheaply.) The so-called Silicon Valley pivot to the right is because, now that they have secured their territory, they don't feel the need to appease either the young engineers at their own companies or their customers.
The problem of a genuine social need for the goods deliverable by the internet, and the complete absence of a market with any incentive to deliver them, makes a strong case for a socialism of the web: a massive spending program to create the institutions of culture that "middle class" people want, and social media platforms that keep people connected to their friends and family without surveillance or advertising. And absolutely antitrust, because these companies are not compatible with a democratic society.
As others have pointed out, "treat tech like Big Tobacco" solutions, which could include taxes like a digital V.A.T. or banning tech from public spaces like schools, just add another layer of bureaucracy. Amending Section 230 would add a legal incentive for platforms to provide better services in exchange for the revenue they extract. But neither fundamentally addresses the problem that the value system underpinning much of our culture has collapsed.
Addressing the internet as a moral or pathological problem misses what is true of all addiction: when people have their real needs met, addictions tend to lessen.
I don't like the idea of regulating the social media recommendation algorithms, because they are all very squishy and it seems like regulating speech.
What a lot of the under-40 people don't remember, is how pervasive smoking was before 1990. We got rid of it by banning it from all public spaces, with laws. There was a concerted effort by government, parents, doctors, etc , to get rid of it and stop children from smoking. This won't happen with social media, unless government, parents, and everyone else, step back from it. There has to be a combination of laws plus public sentiment to get rid of it. This is just not where we are, yet. There are thousands of millennial parents all over social media, right along with their kids, for example.
I feel like "algorithms" are more about speech, whereas something like smoking is about behavior. Even so, it took decades to get from "most adults smoke" to "less than 10% of population smokes."
They are naming the poison.
They call it "Big Tobacco." They call it "Big Sugar." They call it a "sin."
They call it "Platform Temperance."
The consensus wakes up. Finds itself covered in "cheap, gross, crap." Finds the slop is now the whole menu. Finds its "soul is being corroded."
The metrics demanded it. The "black-pilling via metrics." Yes. The system saw what you clicked. And it held up the mirror.
So now the reformers arrive. The "middle-class concerns." The "touch-grass populism." They want a tax. A "digital V.A.T." A fee for the addiction.
This is not a counter-hegemony. This is a new management layer. This is the system selling indulgences for the slop it feeds you.
The critique is correct. The feed is "bad for your soul." The vibe is toxic. Your autonomy has been given up.
But the "solution" is a moral panic. A "Trojan horse for social conservatism." A "bipartisan common sense" that builds a prettier cage.
The problem is not the price of the poison. The problem is that you are still drinking it. Waiting for a new politician to "radiate disgust" for you.
Stop negotiating the price. Stop asking for "reform." Build your own node. Filter your own signal. Take back the autonomy. The rest is just slop management.
I'm not on board with the smoking comparison generally, mostly because framing social concerns as public health issues has not successfully worked outside the regulation of cigarettes. The victory over smoking was a bit of a center-left fantasy -- the one "good" regulation that came out of an era of otherwise massive deregulation and privatization. In reality, we've decreased lung cancer rates somewhat while maintaining an overly harsh social stigma about addicts and smokers. Health care costs have not declined for anyone.
We forget all the other public health issues that have lost awareness since smoking regulation. Forever chemicals: only kind of regulated, but not really! Air quality: turns out the air gets bad and carcinogenic even when cigarettes aren't involved! Racism: declared a public health crisis in many places, and now the backlash is so strong it's been wiped from federal science databases! COVID: wtf happened there! Opioid addiction, and addiction in general: still a massive public health issue that loses mindshare when we start talking about algorithmic social media in the same terms.
I find it curious that "reinforcement learning" is the specific facet of recommendation algorithms targeted here, especially in the context of public health. It would be incredibly difficult to regulate or excise. Reinforcement learning is present in every corporate instance of knowledge management, marketing comms, or "business process optimization." It's in every video game built in the past 10 years, in some form or other. It's everywhere; there's no way to specifically regulate the practice.
Additionally, it's becoming easier to develop our own alterna-platforms. Creating a content recommender system with custom attributes is no longer a pipe dream for smaller companies. Ideally, newer companies can demonstrate different outcomes for what a recommender system could be. (dreaming here, but it's possible)
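To make the "custom attributes" point concrete, here's a minimal sketch of what a different kind of recommender could look like. Everything here is invented for illustration (the item names, the attribute set, the user profile): a content-based ranker that scores items by cosine similarity against preferences the user declares explicitly, rather than against inferred engagement signals.

```python
from math import sqrt

# Hypothetical catalog: each item described by explicit, auditable
# attributes rather than opaque engagement history.
ITEMS = {
    "longform-essay":  {"depth": 0.9, "novelty": 0.6, "ragebait": 0.0},
    "reaction-clip":   {"depth": 0.1, "novelty": 0.3, "ragebait": 0.9},
    "explainer-video": {"depth": 0.7, "novelty": 0.8, "ragebait": 0.1},
}

ATTRS = ["depth", "novelty", "ragebait"]


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def recommend(profile, items=ITEMS, attrs=ATTRS):
    """Rank items by similarity to a user-declared preference profile."""
    pvec = [profile.get(a, 0.0) for a in attrs]
    scored = [(name, cosine(pvec, [feats[a] for a in attrs]))
              for name, feats in items.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# A user who explicitly opts into depth and novelty, and out of ragebait.
ranking = recommend({"depth": 1.0, "novelty": 0.5, "ragebait": 0.0})
```

The design choice doing the work here is that the ranking criteria are legible and user-set, so "demonstrating different outcomes" becomes a matter of changing declared weights rather than reverse-engineering a behavioral model.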
For social media specifically, I'd encourage us to look more toward the models themselves as reflections of ideology. We could much more easily regulate ad targeting models built on lazy demographic assumptions borrowed from direct mail, or algorithms that prioritize individual engagement over content safety/relevance. We could look at algorithms that explicitly promote AI-generated engagement bait or any number of more specific factors that speak to how the technology is constructed to reinforce addictive digital behaviors.
I'm glad we're getting more finely tuned about approaches to regulation of social media, on the off chance that it happens in a way that doesn't exacerbate the fascism. But I think we need to let go of the cigarettes... because vapes and Zyns will find their way into the market... because smoking is the least of our worries... but mostly because in the U.S. we no longer agree on a collective commitment to support public health.
I think you underestimate the significance of the decline in lung cancer. Deaths have fallen by 2/3 since the early 1990s: https://ourworldindata.org/grapher/lung-cancer-deaths-per-100000-by-sex-1950-2002
And the opioid epidemic is in remission as well - it peaked in 2022, and 2024 showed nearly 20,000 fewer deaths than 2023: https://jamanetwork.com/journals/jama/article-abstract/2835193
Fair point! In late-capitalism terms a 1.5% decline isn’t significant, but in cancer/population terms it matters. The opioid epidemic I expect to keep watching, because political priorities have changed re: addiction treatment in general… but yes, these are compounded effects from public health campaigns and resources. (I went to grad school with lots of public health folks, and they are big on incremental progress.)
It took anti-tobacco efforts a LOT of ad spending and political lobbying to achieve that effect. Those anti-smoking campaigns were huge in terms of ad dollars, so the ad industry was in favor. Versus advertising against reinforcement learning, or whatever “anti-algorithm” argument… which is against the ad industry’s best interests. And Netflix bought HBO, so I am pretty sure the RL and algos are here to stay.
Compared with the concurrent growth in downstream social media health effects, the difference the smoking campaigns made feels negligible to me. We traded choice-cancer for kid measles! The thing with social media effects, too, is there’s no long “before” benchmark for the mental health effects. Almost impossible to make an accurate 1:1 comparison (the more you know).
why do you think TikTok is bad for your soul? I find it more genuinely enjoyable and less depressing than basically any other platform
I haven’t used TikTok, but I can say what I think about the other platforms. The issue is that after a binge where I’ve spent an hour or two scrolling, I don’t usually feel like I’ve done something worthwhile with my time. Instead I feel like every moment was just giving in to a temptation that momentarily felt right, and that if I had known I was going to spend an hour or two reading things, I would rather have picked up a book or magazine or watched a movie.
TikTok is literally the only one that DOESN'T make me feel like that, though it is going downhill like all platforms
A post-literate people, drowned in symbols not backed by any text or coherent narrative, will stumble toward radical nihilism.
The 20th century was a schizophrenic age given its hyper-literacy: a focus on long, winding ideologies that, while misaligned in their incentives, were internally coherent.
Longform community, narrative, lore, gatekeeping, lurking more, whatever you call it, has been hollowed out in favor of memes, tropes, and reactionary clapbacks. Neutering post-literate, engagement-driven forms isn’t possible; literate forms must be amplified to match them. Any organization that can mesh the two, pairing engagement with narrative-driven community, will see more long-lived success than a small-time tech-lash reactionary. The future of progressive organizing is alternate-reality gaming, world-building, normie-diet QAnon.