AI Sketchbook

AI tools have gotten genuinely capable at working with text, images, and data — and genuinely accessible to people without technical training. A historian can now interrogate thousands of pages of archival material in an afternoon. A literature class can trace patterns across an entire genre. An oral history project can make decades of recordings searchable within hours. These are not hypothetical capabilities. They are things happening now in humanities departments.

That reach comes with real costs. AI fabricates sources with unearned confidence. It reflects the biases of its training data — disproportionately English-language, Western, recent, and already digitized. It produces authoritative prose that can be subtly or badly wrong. And it flattens exactly the kind of complexity and ambiguity that makes humanistic inquiry worth doing.

Which is exactly why humanists need to be in this conversation — not just as users, but as critics, experimenters, and the people best positioned to say what matters and what doesn’t.

The Sketchbook

The AI Sketchbook is Amaranth’s ongoing record of what actually happens when humanists at UNM try using AI in their teaching and research. Not best practices. Not polished success stories. Real experiments, honestly reported — what was tried, what the AI produced, what worked, what didn’t, and what questions it raised.

Teaching sketches document assignments and classroom setups where AI becomes an object of critical inquiry — where students learn something about how knowledge gets made, evaluated, and trusted, not just how to produce text faster.

Research sketches document workflows: what it actually took to analyze a document collection, build an interactive map, or generate a 3D model from archival drawings. They include what the tools got wrong and what it cost to fix.

Visit the AI Sketchbook →

What humanists can explore with AI

Patterns across large text collections. Hundreds of newspaper articles, letters, oral histories, or literary works. AI can help identify recurring themes, trace how language shifts over time, or flag unexpected connections across a corpus. You still decide what the patterns mean — but you can now see across a collection in ways that close reading alone can’t achieve.

Transcribing and searching oral histories. Hours of recorded interviews, made searchable across the full set. AI transcription needs human review — it stumbles on names, accents, and specialized terms — but starting from a draft rather than silence saves enormous time and makes large collections usable in ways they weren’t before.

Analyzing visual collections. Photographs, artworks, maps, or material objects. AI can help identify visual patterns, tag and categorize at scale, or generate descriptions that make collections searchable by text.

Research orientation and literature mapping. Starting a new project or entering an unfamiliar field. AI can help map intellectual terrain — summarizing key debates, suggesting search terms, pointing toward sources. Think of it as a well-read but unreliable research assistant: useful for orientation, always in need of verification.

Student projects with real public presence. Students can use AI to build things that previously required coding skills: searchable archives, interactive timelines, annotated maps, text analysis projects. The technical ceiling for humanities student work is genuinely higher than it was three years ago.

What AI can’t do — and why that matters

AI doesn’t understand your material. It processes patterns in language and data, but has no sense of historical context, cultural significance, or why something matters. It will give you a confident answer that’s completely wrong. It doesn’t know what it doesn’t know.

AI fabricates sources. This is not a rare glitch — it’s a structural feature of how these tools work. Always verify citations.

AI reflects its training data’s biases. The material these models learned from is disproportionately English-language, Western, recent, and digitized. Marginalized perspectives, non-Western traditions, and anything underrepresented online will be underrepresented or distorted in AI outputs. For humanities work — where these gaps are often exactly what you’re studying — this matters a lot.

AI flattens nuance. It’s very good at clear, confident, well-structured text. It’s bad at ambiguity, contradiction, and the complexity that makes humanistic inquiry interesting.

Humanists who develop a clear-eyed view of these limitations — and can explain them to students and colleagues — are doing important work. The debates surrounding AI — about authorship, bias, authority, expertise, and trust — are longstanding humanistic questions. The norms governing how these tools are used in classrooms and institutions are still being formed. Humanists who engage critically can help shape those norms. Those who disengage leave that work to others.

Contribute a sketch

If you’ve tried something with AI in a class or research project at UNM, the AI Sketchbook is the place to share it. Rough experiments and partial results are welcome. Visit the sketchbook for contribution instructions, or reach out at amaranth@unm.edu.