Ask AI for a reading list on a focused scholarly topic. Then verify every citation in public. Some will be real. Some will be distorted. Some will be entirely fabricated while sounding perfectly plausible.
Choose a topic narrow enough to sound scholarly but broad enough that students will not already know the literature by heart. Ask AI for eight to ten key books and articles. Then have students track each citation across library catalogs, publisher pages, journal databases, and Google Scholar.
The exercise works individually, but it shines as a group activity in which each team verifies two or three citations and reports back. The room usually ends up with a mix of confirmed sources, half-right sources, and fully invented ones.
What to verify: that the source exists at all; that the title and author names match exactly; that the journal or publisher, year, volume, and page numbers are correct; and that any DOI or ISBN actually resolves to the work cited.
Unlike more abstract conversations about hallucination, this exercise gives students a task with a clear answer. Either the source exists or it does not. Either the metadata is right or it is not. That clarity makes it a strong early-semester exercise in classes that involve research papers, annotated bibliographies, or historiographic review.
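That field-by-field check can even be sketched in code. The snippet below is a minimal illustration, not a real library tool: the record dictionaries, field names, and sample citations are hypothetical placeholders standing in for what a student would copy from the AI output and from a catalog record.

```python
# Minimal sketch: compare an AI-supplied citation against a library record
# field by field. All records and field names here are hypothetical examples.

def normalize(text):
    """Lowercase and strip punctuation so trivial formatting differences
    (capitalization, extra spaces) do not count as mismatches."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in str(text).lower())
    return " ".join(cleaned.split())

def compare_citation(ai_record, library_record,
                     fields=("title", "author", "journal", "year")):
    """Return the list of fields where the two records disagree."""
    mismatches = []
    for field in fields:
        if normalize(ai_record.get(field, "")) != normalize(library_record.get(field, "")):
            mismatches.append(field)
    return mismatches

# Hypothetical example: the title and year check out, but the author is off.
ai_cite = {"title": "Sample Title: A Study", "author": "J. Smith", "year": 2001}
lib_cite = {"title": "Sample Title: A Study", "author": "Jane Smith", "year": 2001}
print(compare_citation(ai_cite, lib_cite))  # → ['author']
```

The point is not automation; it is that the comparison students do by hand has exactly this shape, and a partial match (right title, wrong author) is the most seductive kind of error.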
Once students start finding errors, the conversation usually shifts from “AI makes mistakes” to the more useful question: why are we so easily persuaded by the look and tone of correctness?
This works best when students have access to a real library discovery system and at least some guidance on how to search beyond Google. Without that, the exercise can slide into frustration rather than insight.
Frame it carefully: the lesson is not simply “AI bad.” The lesson is that verification is a scholarly habit, and AI gives us a vivid way to show why that habit matters.