Unreal Bird #6: Premortem

The frozen coast of Lake Erie.

Thanks for reading Unreal Birds, a newsletter about how tech, media, and capital undermine democratic accountability.

Forwarded this newsletter? Consider subscribing.

In this issue:

  • Watch my remarks to the Northeast Ohio Indivisible chapter on "Citizenship in the AI era."
  • Last issue, I wrote about how we cannot replace education with AI. This issue I ask: What if we try anyway?
  • Field notes on congressional reform, AI-enabled terrorism, and megadonor politics.
  • Scroll to the bottom to see dogs!

What I'm up to

I have some academic publications in the hopper, and a couple of other pieces are taking longer to get through editorial review than I'd hoped. I'm also pushing through two big Tech Policy Press drafts—one on the transatlantic far-right, and one on the future of free speech. Writing that all out makes me feel more productive than I'd realized!

In lieu of those publications-to-come, here is a video of me delivering the remarks on "citizenship in the AI era" I mentioned last newsletter.

Premortem

In my previous newsletter, I wrote about signs that major institutions are replacing sustained learning and thinking with AI shortcuts. Almost immediately after I hit publish, I became aware of a new Brookings study on AI in schools. With that in mind, I decided to write this week's newsletter on the practical consequences of rushed AI integration in the education sector.

At more than 200 pages, the full Brookings study is a heavyweight. It draws on "focus groups, interviews, and expert consultations" with more than 500 people in fifty countries, as well as a literature review of more than 400 articles to deliver a "premortem" on AI in education.

Now, you might be thinking that a "premortem" still implies death at some future point, and so be tempted to write this off as yet another fatalist, overly skeptical take. It's not! Don't do that!

On page 20, for instance, the authors distinguish between "AI-diminished learning," which disrupts the social and emotional processes involved in education, fosters dependence, and deepens inequity, and "AI-enriched learning," which increases access for poor, marginalized, or neurodivergent students; saves time and eases capacity constraints; personalizes instruction; and improves assessment.

Some of the ways AI can address gaps in education are rooted in deep global inequity: for instance, women and girls in Afghanistan are using generative AI in place of the formal schooling from which they are banned. AI tools also relieve capacity constraints, providing alternative sources of personalized attention when human instructors are preoccupied. An Indian teacher, for instance, told the authors that for curious, high-initiative students, generative AI can be a "goldmine," allowing them to dive deeper than classroom time allows.

There are also risks, and the report is quite blunt that the risks currently outweigh the benefits—and that our current trajectory is the wrong one.

A fascinating chart on page 54, for example, shows risks identified by students, parents, teachers, and experts. Students worry disproportionately about the development of cognitive skills (e.g. "cognitive offloading"). Put differently, students themselves are most fearful that they are replacing skill development with AI prompting. The tradeoff here is similar to what I wrote in my last issue:

For many students in this study, AI demonstrably improves their work and grades. It provides seemingly correct answers, simplifies and accelerates completion of tasks that students perceive as difficult, and enables them to fulfill what many view as education’s transactional nature—completing assignments for grades. Given this positive feedback loop and their developmental stage, many teenage students lack the executive functioning, metacognition, and self-regulation skills to recognize that learning involves friction and effort and that cognitive offloading poses both immediate and long-term developmental risks. (Page 58, emphasis mine.)

Parents, meanwhile, were more likely to worry about dependence on technology, and teachers were more likely to worry about social development and trust. Experts, for their part, were most likely to flag safety concerns.

The report describes these risks as implementation challenges: they are not unavoidable consequences of AI in the classroom. But quality and cost are ever at war. What happens if we go full steam ahead?

The most recent issue of Brian Merchant's newsletter series, AI Killed My Job, has some answers. A community college writing tutor explains how students no longer write their own essays and no longer learn anything of value from their time in the tutoring center. (Ultimately, their morale tanked so low that they quit.) A professor writes that unless a student confesses, their university is loath to punish AI-based academic misconduct, so instructors are "expected to accept work that is clearly not the student’s as if it were." Other professionals describe being forced to use AI tools under threat of losing their jobs. A computer science tutor writes that "the majority of students learn nothing."

What's saddest to me is that many of these notes come from professionals at institutions that serve marginalized, underprivileged, and at-risk students: community colleges, open enrollment state schools, tutoring centers for recent immigrants who struggle to write in English. My senior year of college, I was a writing tutor for English language learners at an open-access state university; it was one of the more rewarding jobs I've worked. I'm not sure I'd do it now.

Something tells me that in the future, it won't be college prep schools or prestigious private universities that replace pedagogy with artificial intelligence. Rich kids will get the real thing. The rest of us will make do with synthetics.

Field notes

  • "Four changes in the day-to-day work of Congress that could meaningfully improve governance," NOTUS Perspectives.
    • Panelists submitted a few paragraphs each on ideas including helping Congress rein in SCOTUS, increasing Congressional pay (and staffer pay!), liberalizing the legislative amendment process, and requiring "talking filibusters."
  • "AI and the New Blueprint of Terrorism," Brian Fishman.
    • Terrorists have long been forced to work with imprecise weapons. The democratization of AI and drone technology might change that. I found this more compelling than fears of AI-assisted bioweaponry, for example.
  • "The Billionaire's War," Paul Krugman.
    • We live in the age of unconstrained political megadonors. They made a dramatic swing rightward in the 2024 elections. The Iran war has been an unnecessary fiasco. Were competent plutocrats unavailable for purchase, or do donors just not care?

Snoot watch

A greyhound lying in a pile of leaves, holding a green ball.
"Ball is life."

Humble plea

This newsletter is mostly powered by caffeine. If you've made it this far, consider buying me a coffee at the link below:

Buy Me A Coffee