Platform Temperance
Notes on a new wave of the Techlash
Greetings from Read Max HQ! This week, a collection of thoughts about a new trend in tech criticism masquerading as a lumpy and overstuffed essay.
A reminder: This piece, and all the pieces you read on the Read Max newsletter, is funded almost entirely by paying subscribers of this newsletter. I am able to write these columns and record the podcasts and videos thanks to the support of nearly 4,000 people who value what I do, for whatever strange reason. Unfortunately, because of the reality of subscription businesses, I need to keep growing in order to not shrink, which means every week I need to beg more people to sign up. Do you like Read Max? Do you find it entertaining, educational, distracting, fascinating, or otherwise valuable, such that you would buy me a cheap-ish beer at a bar every month? If so, consider signing up for the low price of $5/month or $50/year.
A new wave of techlash
The new moderate-liberal Substack publication The Argument ran a fascinating piece by civil rights attorney (and Tottenham blogger) Joel Wertheimer last week arguing that policymakers should “Treat Big Tech like Big Tobacco”:
The problem with Big Tobacco was not that it could charge excess prices because of its market power. The problem with Big Tobacco was that cigarettes were too cheap. Cigarettes caused both externalities to society and also internalities between the higher-level self that wanted to quit smoking and the primary self that could not quit an addictive substance. So, we taxed and regulated their use.
The fight regarding social media platforms has centered around antitrust and the sheer size of Big Tech companies. But these platforms are not so much a problem because they are big; they are big because they are a problem. Policy solutions need to actually address the main problem with the brain-cooking internet.
Wertheimer argues that the famous Section 230 of the Communications Decency Act--which protects companies from liability for content posted by users to their websites--needs to be reinterpreted to exclude “platforms that actively promote content using reinforcement learning-based recommendation algorithms.” I’m not exactly qualified to weigh in on the legal questions, but I find the logic of the argument persuasive in its broad strokes: The idea is that while message boards and blog comment sections--which host third-person speech but do nothing active to promote it--deserve Section 230 protection, platforms that use algorithmic recommendations (i.e. Facebook, Instagram, TikTok, and X.com) are not simply “passively hosting content but actively recommending” it, an act that should be considered “first-person speech” and therefore subject to liability claims.
But what really strikes me about Wertheimer’s piece is the public-health metaphor he uses to explain the particular harms of social-media platforms (and that, in turn, justify his remedy). The contemporary web is bad for us, the argument goes, in the way cigarettes are bad for us: Cheap, readily available, highly addictive, and making us incredibly sick at unbelievably high cost.
In this, Wertheimer is following a line of argument increasingly prominent among both pundits and politicians. In April, David Grimes made a less policy-focused version of the same argument in Scientific American; just last month, speaking with Ezra Klein on his podcast, Utah Governor Spencer Cox drew on both Big Tobacco and the opioid industry:
The social graphs that they use, which know us better than we know ourselves, that allow us, as you so eloquently stated and better than I could, to understand what makes us emotional and what keeps our eyeballs on there — so that when a kid is somehow, even if they don’t want to be, on TikTok at 3 a.m., just going from video to video, and they’ve given up their free will — that is unbelievably dangerous.
When tobacco companies addicted us, we figured out a way out of that. When opioid companies did that to us — we’re figuring our way out of that. And I’m just here to say that I believe these tech companies, with trillion-dollar market caps combined, are doing the same thing — the same thing that tobacco companies did, the same thing that the opioid companies did. And I think we have a moral responsibility to stand up, to hold them accountable and to take back our free will.
A few days after Wertheimer’s piece, Abundance author Derek Thompson posted a podcast interview with Massachusetts Representative Jake Auchincloss, who has proposed a digital value-added tax designed, like Wertheimer’s proposal around Section 230, to internalize the costs of social media. In his introduction, Thompson directly compared the digital V.A.T. to “sugar taxes and cigarette taxes”:
Massachusetts Congressman Jake Auchincloss has a proposal that he calls a digital sin tax, a way to push back on the business model of social media platforms that profit from hijacking our attention, especially our kids’ attention.
You’ve heard of sugar taxes and cigarette taxes. Well, this would be an attempt to price the harms of the attention economy and route the proceeds to public goods. I think it’s an interesting idea.
Comparing Big Social to Big Tobacco (or Big Opioid or Big Sugar) is in some sense a no-brainer, and certainly such analogies have been drawn many times over the last few decades. But the increasing popularity of this conceit is less a coincidence, I’d argue, than a function of the gathering power of a new wave of the now decade-old “techlash.”
This burgeoning movement seeks to root criticism of (and response to) Big Tech in ideas of health (public, social, intellectual, and spiritual) and morality rather than size and power, positioning the rise of social media and the platform giants as something between a public-health scare and a spiritual threat, rather than (solely) a problem of political economy or market design. I see versions of this school of thought not just in speeches and op-eds from Auchincloss and Cox or blog posts from Thompson, but in Chris Hayes’ book The Sirens’ Call and in the inescapable work of Jonathan Haidt. (You might think of Hayes and Haidt as representing “left” and “right” tendencies of the broader movement.) Notably, all of the above-mentioned have found platforms on Klein’s podcast. Back in a January interview with Hayes, Klein offered up a kind of political vision or prediction rooted in this tendency:
I think that the next really successful Democrat, although it could be a Republican, is going to be oppositional to [the tech industry]. In the way that when Barack Obama ran in ’08 — and I really think people forget this part of his appeal — he ran against cable news, against 24-hour news cycles, against political consultants.
People didn’t like the structure and feeling of political attention then. And I don’t think there was anywhere near the level of disgust and concern and feeling that we were being corroded in our souls as there is now.
And I think that, at some point, you are going to see a candidate come up who is going to weaponize this feeling. They are going to run not against Facebook or Meta as a big company that needs to be broken up. They’re going to run against all of it — that society and modernity and politics shouldn’t feel like this.
And some of that will be banning phones in schools. It’ll have a dimension that is policy. But some of it is going to be absolutely radiating a disgust for what it is doing to us and to ourselves. I mean, your book has a lot of this in it. I think that political space is weirdly open, but it seems very clear to me somebody is going to grab it.
Thompson dubs this loose movement, or at least the version touted by Auchincloss, “touch-grass populism,” but I think this is wrong: The framework in question is distinctly not “populist” (unlike, say, the neo-Brandeisian “new antitrust” movement that has been a major focus of the “techlash” to date) so much as progressive in the original sense, a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change. At its broadest you could maybe call this budding program of restriction, restraint, and regulation “Platform Temperance.” But that name describes a big tent, and like the progressive movements that emerged in the late 19th century, it can produce both grounded liberal-technocratic visions and paternalistic, pseudoscientific, and ultimately harmful moral panics.
Why Platform Temperance now?
One way of thinking about the past half-decade or so of life on the internet is that we’ve all become test subjects in a grand experiment to see just how bad “good enough” can be. Since Elon Musk’s purchase of Twitter in 2022, and the subsequent industry-wide cutbacks to “trust and safety” teams meant to moderate content, most of the major social platforms have been flooded with fraud, bait, and spam--a process exponentially accelerated by the arrival of ChatGPT and its generative-A.I. peers.
What I want to emphasize here is not the increasingly corrosive politics, complaints about which are well-worn, but the ever-increasing volume of cheap, gross crap. Take, e.g., these YouTube ads discovered by Bluesky user Ken Plume:
As I tweeted at the time, I’ve been covering tech companies for years and I still find myself taken aback at how completely they’ve abdicated any kind of oversight or moderation. Facebook and Instagram and YouTube are utterly awash in depressing low-rent non-political slop, and no one who owns, runs, or even works at these platforms seems even to be embarrassed, let alone appalled.
But why would they be? People working at tech giants are watching the metrics and seeing that the depressing low-rent slop is getting engagement--probably even to a greater extent than whatever expensive, substantive, wholesome content it’s being placed next to on the feed. Their sense, backed up by unprecedentedly large data sets, is that slop of various kinds is what people want, because it’s what they click on, watch, and engage with. (I would even go so far as to suggest that some portion of Silicon Valley’s reactionary turn since 2020 can be chalked up to what I think of as “black-pilling via metrics”: The industry’s longtime condescension toward its users finally curdling into outright contempt.)
For much of the past decade, this revealed preference for fake news, engagement bait, sexualized content, and other types of feedslop has been blamed on “platform manipulation”: Bad, possibly foreign actors were “manipulating” the platforms, or, worse, the platforms themselves were “manipulating” their users, deploying “dopamine feedback loops” and “exploiting a vulnerability in human psychology,” as Sean Parker said back in 2017.
But these accounts have never been wholly satisfying: Too technical, too determinist, too reliant on the idea that there is “authentic” or “innocent” desire being “manipulated” with malicious intent. In some sense they don’t blame us, the users, enough, or assign us the kind of agency we know we have: Most of us are aware from everyday experience that we desire all kinds of things that are “bad” for us, and that we give in to or refrain from temptation based on any number of factors, without directly manipulative actors ever needing to enter the picture.
Maybe worst of all, the “platform manipulation” arguments often seem to imply there is some non-manipulated “good” or “healthy” version of the platform feed--an assumption increasingly untenable to people who’ve spent the last 15 years doomscrolling.
This, I think, is the basic dynamic from which Platform Temperance evolves: a general, non-partisan, somewhat moralistic disgust at even the non-political outcomes of unregulated and unmoderated platforms and a dissatisfaction with both the “revealed preference” framework that justifies their persistence in Silicon Valley and the more rigidly behavioralist explanations that seem to exonerate the business model and blame externalities on bad actors. To this I would add environmental factors like:
a perceived accelerating downward spiral with the advent of generative A.I., which both further debases the platforms and provides a possibly even more intractable and dangerous user experience itself;
a generation of (middle-class) millennials, now raising children in a world dominated by slop and screens, coming into political power;
a vaguely moralized sense of unease and distaste over the fast creep of gambling and gambling-like dynamics in financial and prediction markets;
a recognition, in the wake of the reactionary turn of prominent tech executives, that Silicon Valley is not going to “fix” any of this itself; and
a growing sense among liberals, Democrats, and anti-Trump Republicans that social platforms are inevitably corrosive to the liberal-democratic project.
In response, Platform Temperance offers a focus on health, social welfare, and the idea of discipline and restraint in the face of unmoderated consumption--that is, temperance.
The politics of Platform Temperance
There’s another aspect to mention here. Platform Temperance as it has evolved recently seems to be largely a school of thought from the (broad) center--adjacent, in its membership and institutional affiliations, to the “Abundance” faction of elite politics.
In some sense this is a function of the final bullet point elaborated above. An ambient fear that TikTok or YouTube “help Trump” in some direct or indirect capacity--matched to a lesser extent by a belief on the other side that these same platforms enable “wokeness”--has helped cement the sense among moderates that these platforms have externalities that must be addressed.
There’s also an obvious cynical reading: As Klein says, this “political space is weirdly open,” and “somebody is going to grab it.” Thompson’s podcast interview with Auchincloss is framed around the idea of Platform Temperance (or “Touch-Grass Populism”) as a “big idea” around which moderates can rally:
Winning elections has to be more about articulating a mission than arriving at the midpoint of every possible debate… The center has not been nearly as successful [as the Trumpian right] at pouring itself into truly bold visions of a better future. So today we’re going to talk about one very big idea from a politician who doesn’t maybe neatly fit into any particular Twitter tribe. […]
I think [the digital V.A.T. is] a somewhat problematic idea. But more than interesting and problematic, I think it’s a big idea. And I’m excited that someone in the political center is offering it.
It’s easy, given this kind of positioning, to read Platform Temperance as a new front in an ongoing factional war within the Democratic party--and, indeed, the tendency is often pointedly positioned against the more populist and anti-establishment New Antitrust movement that has been among the most prominent strains of the Techlash thus far. (There is a lot more to be written about the “balance sheet” of New Antitrust as a tool to deal with the problems of the tech industry.)
But while the political valence is important for context, I don’t know that Platform Temperance is wholly (or even mostly) a cynical Trojan horse for intra-Democratic political battles. Nor do I think it’s strictly a moralized establishment reaction to a political spectrum expanded by the democratizing effects of tech platforms--though I think elements of that impulse exist.
Versions of the ideas, remedies, and rhetoric emerging from the big tent that I’m calling Platform Temperance have been circulating in the Techlash for many years, usually from left-wing critics and academics. I’m thinking here of, e.g., James Bridle’s New Dark Age, Jenny Odell’s How to Do Nothing, and Richard Seymour’s The Twittering Machine, three excellent books that take psychosocial and psychoanalytic approaches to the problems posed by platforms. In the early days of this newsletter I myself made a kind of proto-Platform Temperance argument under the headline “Maybe we need a moral panic about Facebook”:
I’ve written a lot about Facebook and its peers over the last half decade or so, and I can say pretty conclusively that people are, in general, much less interested in structural accounts, no matter how rigorous and explanatory, than they are in writing about the affective dimension of tech power. “Structure” is abstract and difficult to touch; feelings are immediate and intuitive. Readers want help articulating how life on the platforms makes them feel, and why.
I think the change-vs.-continuity debate, focused as it is on Facebook’s role in the political and media spheres, can miss entirely that the affective critique of Facebook — Facebook is bad because it makes people feel bad — is the most powerful. That’s not to say it’s empirically rigorous, or that it’s wholly new, or that it’s not a way to blame the fundamental dynamic of capitalist alienation on a new technology. But if the point is to transform Facebook from something that works on us to something that works for us — and, barring that, to shut it down — it’s useful to remember what people hate about it.
I still believe--as Klein does--that there is a lot of political power in harnessing people’s deep ambivalence about (or outright disgust with) a platform-mediated social, cultural, and political life. And I often find myself eager to reach for public-health metaphors when discussing the experience of life under the thumb of the software industry, if not outright spiritual ones. (I don’t believe in the “soul,” but I am hard-pressed to think of a better way to succinctly describe the effects of, say, TikTok than to say it’s bad for your soul.)
But I also want to be conscious of how easy it is for this kind of rhetoric to slip into reactionary moral panics. As David Sessions recently wrote:
My working hypothesis is that neo-atomization discourses have formed a knot of moral panic about technology that is being used as a Trojan horse for social conservatism. In some quarters, like the drafters of Project 2025 or Michigan’s outrageous proposed ban on porn and VPNs—that revived social conservatism is simply the usual Christian-right suspects doing their thing. But even among Christian social conservatives, explicit appeals to God and natural order have been replaced by a pseudo-public health language of “epidemics,” “addiction,” and social breakdown. What’s perhaps even worse is the extent to which essentially the same ideas have conquered liberal discourse in places like the New York Times op-ed page; a liminal figure like Christine Emba stands as something of a bridge between the two.
Emba’s book Rethinking Sex is a perfect example of this: you don’t even have to say that porn is immoral and casual sex is damaging (though she does that, too); you can say technology is alienating; apps are disrupting our natural, healthy forms of relation and association, harming women and children; our sex and relationships are dehumanizing and socially corrosive because we basically treat them like ordering DoorDash. You can even throw in some superficial anti-capitalism to give it a progressive spin. I’m not sure people even realize how dominant these views have become among liberals and how, in the absence of true ethnographic curiosity about the forms of life the social web has created, authoritarian policy responses are starting to sound like bipartisan common sense. […]
Here are just a few reasons contemporary tech panic needs to be pressed further than it often is: social scientific research about the “loneliness epidemic” is in fact highly contested, as are linkages of depression and poor mental health to phones and social media. “Porn addiction” is a made-up, faux-medical rebrand of classic right-wing evangelical ideology, and the evidence that porn is all somehow “increasingly” violent and misogynistic is flimsy enough to basically constitute a folk myth, no matter how many times it’s repeated in the New York Times.
Take, as a recent example of the dangers of a loose and misdirected Platform Temperance, the proposed ban or forced sale of TikTok. Originally spurred on by an alliance of China hawks and Israel supporters concerned about unprovable malign influence and pro-Palestine discourse enabled by the platform, the idea of a ban was nonetheless received warmly by Democrats and Platform Temperance-sympathetic liberals. Now, under Trump, the app is apparently set to be sold on the cheap to a consortium of right-wing businessmen. Platform Temperance as a political framework has a lot to offer tech skeptics, critics, and “the left” more broadly. But we should be careful and rigorous about how we develop political solutions to the affective misery so many of us feel online.




I appreciate these reflections. I'm stuck on that line from The Argument, that the platforms are not a problem because they are big, but rather big because they are a problem. This strikes me as precisely the key reactionary move that turns what *could* be a substantive reflection on the harms of platforms into a sop to Big Tech: "just manage the harms and you don't need to break them up." Oh, word? And who will manage the harms, RFK Jr.? I don't buy it. This will only result in do-nothingism at best or further radicalization in the name of health at worst.
It's reasonable in principle to be concerned with mitigating consequences and not just nipping causes in the bud, and it's *possible* that material change can happen downstream of cultural intervention generally speaking, but like, if slop is a problem, consider how much capital has been required to produce it! If the companies weren't so big, how could the health harms exist at such scale? In other words, this is what to look out for: someone who says the cultural harms are a problem is likely right, but someone who deemphasizes the material conditions of operations that require massive amounts of capital in favor of a focus on culture isn't serious, even or especially about the culture.
I mean the difference between Big Tech and Big Tobacco is that a cigarette isn't speech whereas a social media post is. There's a very dangerous animalising 'reduction to bare life' logic to all this, particularly in these public health/epidemiological framings, which I think needs to be pretty emphatically rejected. Like it's strange to reach for temperance movements, which dealt with intoxicating substances with no propositional content to them, as your historical analogy when the big glaring historical precedent surely is fascist censorship of degenerate art. What else does 'slop' mean here other than as a neologism for precisely the same problematic: the decivilising, socially destructive impacts of degenerate forms of expression on the body politic? It's sort of beside the point whether or not one agrees with the aesthetic/intellectual/moral/whatever criteria by which it's judged to be worthless slop; one ought nonetheless want to retain the freedom to decide this for oneself, because what is lost in the taking over of this role by the state isn't just the content in question, it's political life per se in the Agambenian sense.
There also seems to me to be a troubling slippage here between trying to remove moral hazard from the process by which the underlying infrastructure is designed (which doesn't immediately raise such concerns for me) and proposals which in practice amount to sweeping censorship of particular content. Section 230 is a speech protection for ordinary users, even if it's immediately a protection of corporations from liability: it's a measure that prevents platforms from being compelled by liability to institute overbearing surveillance and censorship practices, for which the incentives are perversely biased in the direction of conservatism (there's far more to lose by being too soft and incurring liability than there is to gain by standing up courageously for gray-area expression and edge cases). Get rid of Section 230 and the consequence is that you'll unleash a wave of censorship of the arts, political and critical speech, LGBT expression, sex education material, and so on that the Moral Majority could only dream of, which will clearly not break out on lines that any left-wing person will understand as reasonable (I mean, just take a look at what's currently happening wrt "antifa" and trans expression), and which doesn't actually impact the addiction-forming technical mechanisms that this is all ostensibly about. This all seems extraordinarily dangerous to me.