
On Bullshit in AI

20:40
July 31, 2025
New Books in Public Policy
https://feeds.megaphone.fm/LIT5619479076

Understanding AI Snake Oil: spot hype, hallucinations, and broken promises

Arvind Narayanan unpacks how much of contemporary AI hype functions like snake oil: persuasive, superficially impressive, but often unreliable. This episode contrasts generative models (chatbots, image synthesis) with predictive algorithms used behind the scenes in hiring, criminal justice, and healthcare — and explains why understanding that distinction matters for users, institutions, and policymakers.

Why generative AI can mislead: hallucinatory text and persuasive fluency

Generative models are trained to mimic patterns in massive text and image corpora. They are optimized for plausibility, not truthfulness, so they can produce fluent but factually incorrect "hallucinations." Hands-on testing and source verification are essential when using chatbots for professional work. Try the tools yourself; domain-specific experimentation reveals their limitations faster than punditry does.

Predictive AI: marginal gains with major consequences

Predictive algorithms use historical data to forecast human outcomes such as job success, recidivism, or medical needs. Their accuracy is often only modestly better than chance. When they drive high-stakes decisions (pretrial detention, hiring, clinical triage), even small accuracy shortfalls translate into real harm. Narayanan calls attention to systems that function like elaborate random-number generators posing as objective decision-makers.

Two concrete forms of AI snake oil

  • Cheating detectors: Many academic AI-writing detectors perform barely above chance and disproportionately misclassify non-native English writers, producing false accusations in an already adversarial environment.
  • Automated hiring tools: Video-based screening often relies on spurious correlations; a candidate's backdrop or appearance can skew scores without measuring actual competence.

Practical takeaways: testing, transparency, and prioritizing present harms

Rather than accepting sweeping claims, Narayanan recommends concrete actions: test tools in your context, demand vendor validation and audits, and focus policy on present, measurable harms rather than speculative extinction narratives. Institutional problems are often what make AI snake oil appealing; fixing processes and incentives reduces reliance on dubious automation.

Bottom line: AI has useful capabilities and real limitations. Recognize persuasive design, insist on accountability, and treat grand predictions — from utopia to apocalypse — with measured skepticism.

More from New Books in Public Policy

New Books in Public Policy
Kit W. Myers, "The Violence of Love: Race, Family, and Adoption in the United States" (U California Press, 2025)
Discover how 'love' hides adoption's harms—rethink family, kinship, and care.
1:17:50
Aug 7, 2025
New Books in Public Policy
Timothy W. Kneeland, "Declaring Disaster: Buffalo's Blizzard of '77 and the Creation of FEMA" (Syracuse UP, 2021)
Discover how Buffalo’s 1977 blizzard reshaped FEMA and modern disaster policy.
1:15:17
Aug 5, 2025
New Books in Public Policy
Irvin Weathersby Jr., "In Open Contempt: Confronting White Supremacy in Art and Public Space" (Viking, 2025)
How monuments hide history — hear one author's urgent, personal reckoning.
1:08:53
Aug 8, 2025
