The AI Sketchbook is a collection of teaching experiments and research workflows — accounts of what happened when people actually tried using AI in humanities classrooms and projects. Not best practices, not speculation, just a working record of what’s been tested, what’s been learned, and what’s still uncertain.
College instructors are right to be skeptical. AI makes academic dishonesty frictionless and nearly undetectable. It also complicates formative assessment — the low-stakes exercises and reflective drafts that help students locate their own confusion — because an LLM can complete them in seconds without any learning occurring. Writing assignments honed over years of iteration become untenable overnight. Yet the pressure on faculty to teach students “how to use AI effectively” rarely engages with any of this honestly.
The epistemic problems run just as deep. AI fabricates sources with unearned confidence. It reflects the biases of its training data — disproportionately English-language, Western, recent, and already digitized. It produces authoritative, well-structured prose that can be subtly or badly wrong. And it flattens precisely the kind of complexity and ambiguity that makes humanistic inquiry worth doing. These aren’t edge cases. They’re structural features of how these systems work.
So why engage at all?
Because the friction matters — and humanists are unusually equipped to say why. The difficulty of working through an argument, of writing your way toward an idea, of confronting sources that resist easy interpretation — these are the moments where learning actually happens, and students need help recognizing them as worth embracing.
Students who use AI to complete coursework largely understand this. They use it for courses outside their major and do the work themselves when the work actually matters to them. The problem is that they are not always clear about what they might be missing or why the friction matters. And the incentive structures of higher education make skipping the friction feel rational.
The fundamental debates surrounding AI — authorship, bias, authority, expertise, and trust — are longstanding humanistic questions. Humanists have spent decades examining what makes sources reliable, who gets to count as an authority, how power shapes what gets preserved and what gets forgotten.
These are not peripheral concerns. They are central to whether AI is used well or badly. The skills developed through humanistic training — evaluating sources, recognizing bias, attending to what’s missing from an account — are exactly what make AI use meaningful rather than mechanical.
The values and norms around AI are still being formed. Humanists who engage critically can help shape those norms in their classrooms, their institutions, and their fields. Humanists who disengage leave that work to others.
The sketches here are experiments. Some have been run more than others.
Teaching sketches focus on assignments and classroom setups where AI becomes an object of critical inquiry — situations where using AI teaches students something about how knowledge gets made, evaluated, and trusted. The goal is never to outsource thinking. It’s to make the thinking more visible.
Research sketches document workflows: what it actually took to bulk-process a set of archival documents, build a map from a folder of photographs, or generate a 3D model from a line drawing. They include what the tools got wrong and what it cost.
Everything is tagged by status — rough, tested, or refined — so you can tell what’s been tried once versus what’s been iterated and classroom-tested.
Each sketch documents a specific experiment — a single assignment, a research workflow, a tool used for a defined purpose. They share a common structure designed to make them easy to evaluate and adapt before you commit to trying one yourself.
At the top of every sketch, a quick-reference block surfaces four things:
Teaches identifies the conceptual questions or intellectual habits the sketch puts into practice — what students encounter as a problem of interpretation, evidence, or argument.
You gain describes the practical skills acquired through the exercise — the concrete things a student or researcher walks away able to do. These often map onto transferable AI literacies: writing effective prompts, critically evaluating AI-generated output, using tools to process or visualize a body of material.
You’ll need lists the specific AI tools the sketch relies on, so you can verify access before committing to an assignment or workflow.
Format captures how long the sketch takes and what course level it’s suited for. An exercise designed for a graduate seminar may not transfer cleanly to a lower-division survey, and vice versa.
Every sketch also carries a status tag — rough, tested, or refined — indicating how much iteration it has been through; read the status before you adapt. The most useful part of any sketch is usually the caveats section. That is where the experiment got interesting.
If you’ve tried something with AI in a class or research project — and you have something honest to say about how it went — contribute a sketch. Rough accounts and partial experiments are exactly what this site is for. You don’t need prior GitHub experience; the contribute page walks through the process step by step.
Questions? Reach out at amaranth@unm.edu or drop by studio hours at Mesa Vista Hall 2068.
The AI Sketchbook is a project of Amaranth Digital Humanities Studio at the University of New Mexico, built with the Xanthan open-source framework for academic Jekyll sites.