Students often treat critique as something that happens after a draft is mostly finished. This exercise moves critique earlier. The AI becomes a machine for producing objections on demand — and the students’ job is to decide which ones are generic noise and which ones expose a real weakness in the argument.
Ask students to bring a working thesis paragraph, interpretive claim, or partial draft. They paste that argument into an AI tool and ask it to produce the three strongest objections it can imagine.
Then the real work begins. Students annotate the objections and sort them into three buckets: objections generic enough to apply to almost any paper, objections that misread the draft or argue against a claim it never makes, and objections that expose a genuine weakness worth answering.
That sorting process is the assignment. It makes students articulate why an objection fails instead of merely feeling that it fails.
To run it in class, have students sort the objections, mark the one that genuinely lands on their claim, and then revise the original paragraph so it answers that objection directly.
That final revision matters. Without it, the exercise stays at the level of commentary. With it, students leave with sharper prose and a more defensible claim.
A counterargument is only strong if it lands on the actual claim being made. This exercise makes that concrete: defending an argument means specifying scope, evidence, and stakes, not just reasserting it with more confidence. Authoritative tone is not the same thing as analytical precision, and students see that distinction most clearly in AI-generated objections that sound reasonable but never quite engage the argument in front of them.
AI tends to generate objections that are thin, repetitive, or detached from the actual text. That limitation is useful here — but only if students already have something specific enough to test. If the draft is too early or too vague, the exercise becomes generic very quickly.
Best used in upper-division courses, graduate seminars, or any class where students are making sustained interpretive claims rather than summarizing material.