Greetings from Read Max HQ! In this week’s issue: Public-option tech, and especially A.I.!
A reminder: Notwithstanding the support for this newsletter from the Economic Security Project, the vast majority of this newsletter’s income (which is to say my income) is derived from paid subscriptions. If you’ve been reading this newsletter and enjoying it without paying, I’m really glad to hear that! But perhaps you are also thinking to yourself, “I would buy Max a beer and/or coffee to thank him for the entertainment (?) he has provided through his writing.” Here is the good news: You can effectively do so by upgrading your subscription, for the price of about one beer a month or ten a year. (That is: $5/mo or $50/year.) Some of that money actually will go to beer, though most of it will go to normal stuff like food, housing, an absurd amount of clay for my son to make “cotton candy” with, etc.
Another reminder: I am participating in the “Creator’s Division” for Vulture’s Movies Fantasy League this year. If you love movies and/or fantasy sports, or even if you are just competitive by nature, I encourage you to join my mini-league and draft a “team” (of movies) that you think will win a lot of awards and/or do well at the box office. To join the mini-league, just type “Read Max” in the appropriate data-entry box when you draft your ballot. Rules for the fantasy league are located here.
I recently joined the Economic Futures Cohort at the Economic Security Project, a think tank dedicated to advancing “ideas that build economic power for all Americans.” In exchange for a stipend, I’ll be attending three E.S.P. briefings and writing about what I’ve learned and how I’m thinking about it. (E.S.P. has no editorial oversight; I have complete control over what’s published in this newsletter.) The most recent briefing was about public options.
Back in the 2010s, when my main beat was “bad ways that Facebook is changing the world” and my pieces were supposed to end with at least a vaguely constructive suggestion for remedies, I talked a lot about the idea of “public-option” social networks: Publicly owned and democratically controlled platforms that would “compete” with Facebook and its many peers and rivals the same way the U.S.P.S. “competes” with U.P.S. or FedEx, or that public universities “compete” with private ones. Admittedly, to at least some extent, this was less a serious proposal than an indulgent exercise in science-fictional counter-histories: What would a “social network” look like if it were designed, from the ground up, to benefit and serve users rather than advertisers? What do we want out of a “Facebook,” or a “Twitter,” anyway, and how might sites like that be better designed to accommodate those desires?
But most good public policy has a science-fictional aspect to begin with, and my sense was that even beyond its attractiveness as an intellectual exercise, a public-option social network would genuinely help depressurize the fraught atmosphere of that micro-era of internet expansion. Imagine, as an example, how different a conversation about “deplatforming,” “censorship,” and “free speech” might be in a world where at least one prominent social network was obligated by its very nature to adhere to First Amendment principles, or how debates about privacy and surveillance might unfold when the data and activity of users of one large social network were explicitly protected by the Fourth Amendment. I’m far from the only person to have had this thought: The artist Josh Citarella wrote a wonderful essay making the case for a public-option social network run by (who else?) the Post Office:
Each day, users are given a capped total of 24 ‘Likes’. This number is reset once daily. If you use them up before the time elapses, you may continue to browse further content but no longer assign new ‘Likes’. This limited number of daily ‘Likes’ will encourage users to respond to more meaningful content and carefully consider how they spend their limited up-votes. Unless ‘Likes’ are scarce, news feeds will inevitably tip towards clickbait. To be sure, some users will carelessly assign their ‘Likes’ to pop stars or cute cats, and run out before they scroll further to see their cousin’s baby announcement. This will certainly result in awkward conversations the following day and provide ample opportunity for users to reconsider their values and priorities. […]
Content moderators work in two shifts, from 5:00 am to 1:30 pm and from 2:30 pm to 10:00 pm because StateBook is closed for lunch. Outside of open hours, users will be able to browse content but they will need to wait until the site opens the following day to send new messages. While this was initially perceived as a mild inconvenience, users were ultimately grateful for the new rules because (let’s be honest) anything posted after 10 pm is usually regretted the next day. Twitter, Meta and other inferior private couriers may graciously accept a small bump in their traffic during these off hours.
Of course, at the time, and still to a large extent today, the idea of any kind of public-option anything was too far-fetched to hope for. Despite the internet’s own origins in public research and infrastructure, the culture of Silicon Valley has always been too ostensibly “libertarian” to countenance “government-owned” anything, and in this, the software industry is hardly alone--the standard-issue stand-up-comic conventional wisdom of the ‘80s, ‘90s, and ‘00s consistently held that government agencies were wastelands of incompetence and inefficiency, and that only a fool would suggest that the U.S. government provide or administer any services at all. (How many times have you heard someone reference “the D.M.V.” in the abstract as an object lesson in why the government shouldn’t be allowed to run anything?)
This haughty Reaganite dismissal of public managerial competence didn’t make sense at the time, especially with respect to Silicon Valley: The U.S. Mail is an unbelievable feat of management, logistics, and networking that should make software entrepreneurs weep in admiration, and intermittent rudeness at the post office is not exactly a refutation of its impressiveness! And I suspect it makes even less sense now, especially in the “Enshittification Era” of the internet: No longer do Google or Facebook or even Apple impress us with their smooth efficiency, consistency, it-just-works-ness, or even good design. (Neither do FedEx or U.P.S., for that matter.) If we’re going to deal with huge bureaucracies trying to operate on the cheap, they might as well be publicly owned, operated, and oriented.
Renewed curiosity about public provision in the post-Obama era has led to some interest in thinking through how “the internet” and its many layers (from telecom infrastructure to platform services) might be thought of as public goods and services or even “natural monopolies” that obligate some level of public administration. Even so, it’s still hard to imagine fighting entrenched Silicon Valley oligarchs (not to mention Cold War-hangover biases against social ownership) to build a social network--or email provider, or search engine--that could really function as an “option” against your Facebooks or Googles.
But what if, instead of trying to intervene within a well-established and already socially and economically embedded technology like the internet, we were talking about a nascent and still somewhat inchoate tech--like, say, the many different layers and technologies that make up what we all call “A.I.”? The dream of a fully and truly “public” internet may still seem distant, but to the extent that A.I. is as potentially transformative a technology as the web, we are at the opportune moment to ensure some kind of genuinely public stake in its development, deployment, administration, and benefits.
On a practical level, “public A.I.” can mean many different things, including various kinds of public-private partnerships, some frameworks of which are outlined in this interesting white paper from the Vanderbilt Policy Initiative. But I find myself most interested in “public-option” A.I., by which I mean building out publicly owned, accessible, and directed tech at every level of the A.I. “stack”:
Public-option infrastructure (i.e., datacenters available to government organizations, academic researchers, nonprofits, ordinary citizens, and anyone else who isn’t among the half-dozen leading private-A.I. companies)
Public-option data (i.e. freely accessible, freely assembled datasets, databases, and training corpuses for use by anyone, under limited restrictions)
Public-option models (i.e. true open-source models trained and refined under public supervision toward democratically determined goals and made available with transparent documentation)
Public-option apps (i.e. top-layer apps created to be useful rather than profitable)
Obviously this is, in the science-fictional spirit, a bit hand-wavey: What does “democratically determined” or “freely available” mean in practice? But pursuing the answers to those questions--asking ourselves how “A.I.” should be designed, for what uses and to what ends--seems to me of paramount importance, and is precisely why the “public option” is so compelling.
We’re in the midst of an era of investment in and deployment of A.I. tech--at every level of the stack listed above--at an astonishing scale and pace, backed by unimaginable amounts of capital. But that investment, which is directed by a very small handful of people, is more or less explicitly geared toward reducing the profit share and power of labor by replacing skilled workers with (I guess) chatbots, rather than any of the multiple potential uses or outcomes that might be suggested by the idea of large language models. Even the idea that the main or best way to interact with these models is as an obsequious “chatbot” has only taken hold because of decisions being made at the upper levels of companies like OpenAI. If L.L.M.s could be anything, with no requirement that they scale up to billions of users immediately, or generate income from addicted users or eager managers--and if we could collectively determine the direction of their development and research--what else might they be?
There are obvious objections to the idea that democratically or publicly directed technological development would be ipso facto better than whatever the market is coming up with, but the beauty of the public option as a framework (and precisely why public options are so difficult to implement and maintain as institutions!) is that this Public A.I. exists alongside “the market” as a tool or framework for discovering uses and advancing research. If you truly believe Sam Altman is the best person to decide what A.I. “is,” a public option doesn’t prevent him from doing so--it just holds the door open for other possibilities.
And there are other possibilities! Here in New York, seven public and private universities have joined with the state government under the banner of “Empire AI” to build out computing infrastructure that can be used for responsible research. As Ganesh Sitaraman and Karun Parek explain, there are knock-on benefits to the public approach:
Empire AI in New York State exemplifies this model. The state functions as a financial backer, leaning on scientists and researchers at partner universities to guide technical decision making. This approach allows researchers to benefit from financial support and provides governments an opportunity to pursue public policy goals. In this case, New York has placed the compute center at the University of Buffalo where massive amounts of hydropower are readily available, in line with the state’s clean energy goals. This siting decision also helps distribute potential economic benefits of AI across the state and away from its economic core, New York City.
Closer even to a true “public option” are some recent developments from the Public AI Network, a nonprofit dedicated to advancing public A.I., which built a proof-of-concept public A.I. for Switzerland (to be specific, a public model and public app trained on public data), as Gideon Lichfield wrote about recently in the Financial Times:
Apertus was built by the Swiss government and two public universities. Like Humain’s chatbot, it is tailored to local languages and cultural references; it should be able to distinguish between regional dialects of Swiss-German, for example. But unlike Humain, Apertus (“open” in Latin) is a rare example of fully fledged “public AI”: not only built and controlled by the public sector but open-source and free to use. It was trained on publicly available data, not copyrighted material. Data sources and underlying code are all public, too.
Although it is notionally limited to Swiss users, there is, at least temporarily, an international portal — the publicai.co site — that was built with support from various government and corporate donors. This also lets you try out a public AI model created by the Singaporean government. Set it to Singaporean English and ask for “the best curry noodles in the city”, and it will reply: “Wah lau eh, best curry noodles issit? Depends lah, you prefer the rich, lemak kind or the more dry, spicy version?”
Apertus, Lichfield writes, “is not intended to compete with ChatGPT”--it’s much too small--which makes it not much of an option, even if it is public. But it’s a neat proof of concept, and an idea to be built upon. I doubt I’ll ever get a U.S. Posting Service social network in my lifetime. But I can dream of a Library of Congress-administered large language model.