Artificial intelligence is reshaping how knowledge is made, organized, and interpreted. The debates it generates—about authorship, bias, authority, expertise, and trust—are debates humanists have been having for decades. What makes a source reliable? Who decides what counts as knowledge? How do systems of power shape what gets preserved and what gets discarded? These aren’t peripheral concerns. They’re central to how AI systems are designed, evaluated, and used—and humanists are well-trained to ask them.
It’s easy to feel that AI is something happening to us, driven by product cycles we can’t influence. But the values, workflows, and norms around AI are still being formed—in classrooms, in research practices, in what we teach students to expect from information systems. Humanists have always shaped how knowledge is made and evaluated. That role doesn’t disappear when the tools change. The question isn’t whether to engage, but how to engage in ways that matter. We think Amaranth can help with that.
For years, technical barriers have kept humanists from building the kinds of digital projects their scholarship deserved. Anything beyond a WordPress site cluttered with ads was effectively out of reach, let alone a design-forward digital exhibit or a collaborative oral history project. Most scholars never cleared that threshold.
AI changes what’s possible. When working with clearly structured, well-documented systems—like Amaranth’s Xanthan framework—AI assistants can handle the implementation while you focus on intellectual decisions. You don’t need to master stylesheets to adjust a site’s typography. You don’t need to understand databases to organize your images. The code is still there (and adjustable once you learn more), but it’s no longer a prerequisite for getting started.
This isn’t about replacing or outsourcing expertise. It’s about letting people with humanistic vision shape their own work and expand their technical skills without spending months learning infrastructure first. And in the process of building something real—a website, a digital project, a public-facing piece of scholarship—you start to understand how to push boundaries deliberately.
We’ve found that working with AI on technical tasks—changing a website layout, formatting a document, troubleshooting code—offers a lower-stakes entry point than asking AI to analyze a historical text or draft an argument. Technical work has clearer success criteria. You can see whether the font is bigger, whether the colors go together, whether the site builds correctly. There’s less at stake intellectually, which makes it easier to experiment.
This matters because many faculty and students approach AI with justified caution—concerns about plagiarism, intellectual shortcuts, or undermining the learning process. By starting with low-level technical support for digital humanities work, we sidestep some of those concerns without dismissing them. Once people get comfortable using AI as a technical collaborator, they’re better positioned to think critically about how it might (or might not) play a role in their scholarship and teaching.
This progression matters beyond the individual project. Faculty who work through it—who understand AI as a collaborator rather than an oracle—bring that judgment into their courses and their advising. Students who build something real with AI guidance are better equipped to navigate AI-saturated environments after graduation. Digital humanities projects become a training ground for AI fluency, but the fluency we’re building has implications well beyond any single website.
We don’t pretend we’re AI experts who have it figured out. We’re humanists who happen to work with AI tools constantly—for technical work, design experiments, troubleshooting, documentation—and we’re learning where it helps and where it misleads. We have a focused, honest use for AI in a specific context, and we want to share what that looks like.
When we work with collaborators, we share what we’re learning: a particular prompt approach, when to ask for explanation, where AI saved us hours and where it sent us into the weeds. None of us has mastered this yet, but we all get better by working and sharing together.
This approach to AI is central to Amaranth’s broader digital humanities work. As we discuss on the Digital Humanities page, AI has dramatically lowered the barrier to entry for digital scholarship. You no longer need to learn a programming language to build something compelling or do sophisticated text analysis. You need good questions, editorial judgment, and the willingness to learn through making.
That shift opens up digital humanities to a much wider community of practitioners. It also creates an opportunity: as humanists become more fluent with AI, they become better positioned to shape how it’s used—in their disciplines, in their institutions, and among the students they send into the world. Humanists are not just AI users. They’re the people best positioned to ask whether AI is doing what it claims, whose voices it centers, and what it leaves out.
Our work with AI isn’t settled. We’re learning semester by semester, project by project, constantly adjusting what we teach and how we guide people through these workflows. We document what we’re discovering—what works, what doesn’t, where the boundaries are—and share it as we go.
If you’re working with AI in your teaching, your research, or your digital projects and want to compare notes, reach out. We all need to figure this out together.