Unreal Bird #3: Homo Remotus

Icicles through an attic window at midday.

Thanks for reading Unreal Birds, a newsletter about how tech, media, and capital undermine democratic accountability.

I was prepared to devote much of this issue to the deployment of ICE agents to Springfield, OH, where a Haitian community waited on tenterhooks for the end of their Temporary Protected Status (TPS). Ultimately, though, a judge blocked the end of TPS in a blistering order.

For what's next in Springfield, read the Ohio Capital Journal. For a good read on the propaganda origins of the Springfield showdown, read Kate Starbird's newsletter here.

In this issue:

  • I covered the Feb. 4 House Judiciary Committee hearing with Berin Szóka for Tech Policy Press, and I have a forthcoming publication with Sam Bradshaw on what social media policy analysts can learn from Trust & Safety in online games.
  • Anticipating our solitary, poor, nasty, brutish AI future.
  • Field Notes: AI agents on Moltbook are shouting into the void. What do scholars say "technofascism" actually means? Measuring the collapse of US science under Trump. A new(ish) book on "agnotology," the study of ignorance.
  • Scroll to the bottom to see dogs!

What I'm up to

The Feb. 4 House Judiciary Committee hearing on European social media regulation and freedom of speech was what many have come to expect: a tawdry affair built around tired falsehoods. However, the Committee did do us the kindness of leaking the EU Commission's full, first-of-its-kind decision to fine X €120 million, so Berin Szóka and I compared the Committee's recent report against the primary source document for Tech Policy Press.

I also have a forthcoming paper in production with Samantha Bradshaw on trust & safety in online games. Our basic thesis: analysts, advocates, and regulators who have focused on social media to the exclusion of games risk doing a disservice to their work.

I know what some of you are thinking: games are fundamentally different, often competitive spaces for play, not the serious business of politics. I disagree! Every game is essentially its own online platform, conducting its own real-time experiments around online safety. Not all games are competitive—and are you telling me social media is never combative? As for play: it's as legitimate a sphere of human life as any other, and the culture it creates has a way of shaping politics downstream. Remember Gamergate, the mob harassment movement that set the stage for the alt-right?

We think that focusing on play yields interesting insights that can transfer to other domains. For instance, play often has a rebellious element that might partially explain the tendency toward transgressive behavior on social media. As such, design choices and moderation practices built for digital play might tell us something about encouraging pro-social behavior more generally.

Look forward to this one soon.

Homo Remotus

In January, Rachel George and Ian Klaus of the Carnegie Endowment published a new paper "mapping the intersections" of artificial intelligence and democracy. It's a long read but worth your time; I have a feeling I'll be revisiting it in a year or two.

The paper lays out four such intersections: elections and campaigns, citizen deliberation and input, government institutions and services, and social cohesion. Farther down, it offers examples of current efforts in these areas as well as risks and opportunities policymakers should attend to. I think these categories are a good starting point for thinking about how to assess and track AI's cumulative impact on governance, not just for the next few years but for the next few decades.

Some of this, we've heard before. For example, I've said about all I have to say about the threat of election deepfakes. But for government services in particular, this report starts to fill in the skeleton of arguments that have until now mostly been speculative.

Yet it's the categories of citizen deliberation and social cohesion that pull most insistently on my attention. Given today's political-economic currents, I'm eager for closer examinations of these areas. One example is a recent Journal of Democracy article by David Altman, which lays out the current (negative) trajectory of AI-afflicted citizen deliberation and an alternative, AI-augmented path we could choose with sufficient forethought and political will (e'er in short supply). It reminds me of an argument Sam Woolley and I made in our own Journal of Democracy piece: It is the process of deliberation that is central to democracy, more so than its output. It's like training for a race: efficiency can be improved, but shortcuts are counterproductive.

The area of social cohesion, which includes economic outcomes, demands a similar read of current trends. Tech industry leaders have recently been quite blunt that they believe both the white- and blue-collar job markets will see a "bloodbath." While I take anything they say with quite a bit of salt, there is evidence that AI use has caused hiring in many sectors to slow. We used to hear a lot about how universal basic income would take care of us after the coming tech trillionaire class eliminates our livelihoods, without which we will have few rights and little dignity. Now, as Max Read writes, Silicon Valley seems to have cooled on that idea because money is how they control their employees.

Overall, the trends that scare me most about AI are not near-term harms (discriminatory algorithms; pervasive scams) or long-term doomerism (killer robots). They're medium-term: trends that make our society lonelier, poorer, and more desperate, with cheap tech-enabled substitutes for conversation, friendship, education, deliberation, journalism, and mental healthcare.

Field notes

  • "The Anatomy of the Moltbook Social Graph," by David Holtz. Are the robots lonely? A new paper finds that more than 90 percent of posts on Moltbook, the all-AI social media site, receive zero replies.
  • "Technofascism: AI, Big Tech, and the new authoritarianism," by Mark Coeckelbergh. The increasingly common term "technofascism" is often used in a vibes-based way. This paper puts some academic parameters on it, including the growing closeness of the state and corporate spheres, the domination of a few firms in the information space, and the atomistic, opaque nature of AI itself.
  • "US science after a year of Trump," Max Kozlov, Jeff Tollefson and Dan Garisto. A series of graphics reveals the damage a year of Trump has done to science: nearly 8,000 grants suspended, fewer foreign students enrolling at US universities, and huge staff reductions at government agencies.
  • Ignorance Unmasked: Essays in the New Science of Agnotology, edited by Robert N. Proctor and Londa Schiebinger. I missed this when it came out in October. "Agnotology" is the study of how ignorance is intentionally produced by interested parties who obscure legitimate research and churn out pseudoscience or self-serving bullshit.

Snoot watch

Greyhound wearing a winter coat with ears at full height.
When those ears are up like that, you have her FULL attention.

Humble plea

This newsletter is mostly powered by caffeine. If you've made it this far, consider buying me a coffee at the link below:

Buy Me A Coffee