On Bullshit in AI
Understanding AI Snake Oil: spot hype, hallucinations, and broken promises
Arvind Narayanan unpacks how much of contemporary AI hype functions like snake oil: persuasive, superficially impressive, but often unreliable. This episode contrasts generative models (chatbots, image synthesis) with predictive algorithms used behind the scenes in hiring, criminal justice, and healthcare — and explains why understanding that distinction matters for users, institutions, and policymakers.
Why generative AI can mislead: hallucinatory text and persuasive fluency
Generative models are trained to mimic patterns in massive text or image corpora. They are optimized to produce plausible, persuasive output rather than verified facts, so they can generate fluent but factually incorrect "hallucinations." User testing and source verification are essential when using chatbots for professional work, and trying the tools yourself matters: domain-specific experimentation reveals limitations faster than punditry.
Predictive AI: marginal gains with major consequences
Predictive algorithms use historical data to forecast human outcomes (job success, recidivism, medical needs), and their accuracy is often only modestly better than chance. When they drive high-stakes decisions such as detention, hiring, or clinical triage, even small shortfalls in accuracy translate into real harm for the people being judged. Narayanan likens some of these systems to elaborate random-number generators posing as objective decision-makers.
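To make "modestly better than chance" concrete, here is a minimal sketch, not Narayanan's methodology: it compares a standard classifier against a majority-class baseline on synthetic, heavily noisy data. The dataset size, feature counts, and noise level are illustrative assumptions, not figures from any real system.

```python
# Minimal sketch: compare a "predictive AI" model against a trivial baseline
# on noisy synthetic data, to see how small the gain over chance can be.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a "predict a life outcome" task: few informative
# features and heavy label noise, because real outcomes are hard to predict.
X, y = make_classification(
    n_samples=5000,
    n_features=20,
    n_informative=3,
    n_redundant=0,
    flip_y=0.35,      # assumption: large fraction of labels are effectively noise
    random_state=0,
)

baseline = DummyClassifier(strategy="most_frequent")  # always predicts the majority class
model = LogisticRegression(max_iter=1000)

baseline_acc = cross_val_score(baseline, X, y, cv=5, scoring="accuracy").mean()
model_acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

print(f"baseline accuracy:  {baseline_acc:.2f}")
print(f"model accuracy:     {model_acc:.2f}")
print(f"gain over baseline: {model_acc - baseline_acc:+.2f}")
```

Running the same comparison with a vendor's scores against your institution's own historical outcomes is one concrete way to pressure-test a "better than chance" claim.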
Two concrete forms of AI snake oil
- Cheating detectors: AI-writing detectors used in academic settings often perform barely above chance and disproportionately flag non-native English writers, producing false accusations in already adversarial environments (see the sketch after this list).
- Automated hiring tools: Video-based screening often relies on spurious correlations; background or appearance features can shift scores even though they reveal nothing about competence.
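The sketch below is a toy simulation of the cheating-detector failure mode, not an audit of any real product. It assumes a detector that thresholds a single "text predictability" score, and assumes, purely for illustration, that non-native writers' prose scores closer to machine output on that measure. Under those assumptions the false positive rate for non-native writers ends up several times higher than for native writers.

```python
# Toy simulation (not a real detector): why an "AI writing" detector that
# keys on text predictability can disproportionately flag non-native writers.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "predictability" score the detector thresholds on
# (higher = more formulaic). All distributions below are assumptions
# chosen for illustration, not measurements of any real tool.
ai_text         = rng.normal(loc=0.70, scale=0.15, size=n)
native_human    = rng.normal(loc=0.45, scale=0.15, size=n)
nonnative_human = rng.normal(loc=0.60, scale=0.15, size=n)  # simpler phrasing scores higher

threshold = 0.65  # detector flags anything above this as "AI-generated"

def flag_rate(scores: np.ndarray) -> float:
    """Fraction of texts the detector would flag as AI-generated."""
    return float((scores > threshold).mean())

print(f"true positive rate (AI text flagged):    {flag_rate(ai_text):.2f}")
print(f"false positive rate, native writers:     {flag_rate(native_human):.2f}")
print(f"false positive rate, non-native writers: {flag_rate(nonnative_human):.2f}")
```

The point of the exercise is that overall accuracy can look acceptable while the errors concentrate on one group, which is exactly the kind of breakdown a per-group false-positive check exposes.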
Practical takeaways: testing, transparency, and prioritizing present harms
Rather than accepting sweeping claims, Narayanan recommends concrete actions: test tools in your context, demand vendor validation and audits, and focus policy on present, measurable harms rather than speculative extinction narratives. Institutional problems are often what make AI snake oil appealing; fixing processes and incentives reduces reliance on dubious automation.
Bottom line: AI has useful capabilities and real limitations. Recognize persuasive design, insist on accountability, and treat grand predictions — from utopia to apocalypse — with measured skepticism.