History doctoral candidate Edrea Mendoza researches public health sex education initiatives in Mexico in the 1970s. In the course of that research, she encountered a drawing of IUDs manufactured in Mexico over a four-year period during that decade — devices that represented a broader government push for population control. She wanted replicas. It’s one thing to see a line drawing; it’s another experience entirely to hold a replica in your palm and imagine its use.
The web app Meshy.ai uses machine learning to generate 3D-printable files from 2D images. Previous experiments with high-resolution photographs of museum objects had produced distorted results, even when using the multi-image option. But line drawings worked differently: given less visual noise to interpret, Meshy produced accurate representations.
The input is simply an uploaded image. Meshy interprets the drawing, generates a 3D mesh, and exports a print-ready file. A decent 3D printer costs less than $500, and as of this writing Meshy offers all the features used here at no cost.
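For readers who would rather script this step than use the web interface, Meshy also exposes a REST API. What follows is a minimal Python sketch of that round trip (submit an image, poll for completion, download the mesh). The endpoint path, request fields, response keys, and status values are assumptions based on Meshy's public documentation at the time of writing and may have changed, so check the current docs before relying on any of them.

```python
import time
import requests

API_KEY = "msy-..."  # placeholder; use your own Meshy API key
# Assumed image-to-3D endpoint; confirm against the current Meshy API docs.
BASE = "https://api.meshy.ai/openapi/v1/image-to-3d"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Submit the 2D line drawing. The API is assumed to accept a publicly
# reachable image URL; the URL below is a placeholder.
task = requests.post(
    BASE,
    headers=HEADERS,
    json={"image_url": "https://example.org/iud-line-drawing.png"},
).json()
task_id = task["result"]  # assumed response shape: {"result": "<task-id>"}

# Poll until the mesh generation task finishes.
while True:
    status = requests.get(f"{BASE}/{task_id}", headers=HEADERS).json()
    if status["status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

if status["status"] == "SUCCEEDED":
    # model_urls is assumed to list download links by format; OBJ imports
    # cleanly into common slicers such as PrusaSlicer and Cura.
    model_url = status["model_urls"]["obj"]
    with open("iud_replica.obj", "wb") as f:
        f.write(requests.get(model_url).content)
```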
When historians present findings, they typically rely on images — slides, reproductions, scans. Sometimes an image connects clearly to the argument; sometimes it remains implicitly related. Objects like these IUDs offer a different possibility: the tactile occupying the same status as the visual in research presentations. AI-assisted 3D generation dramatically lowers the barrier to that kind of work. This workflow can extend to any material culture object that survives as a 2D record — architectural drawings, artifact illustrations, anatomical diagrams.
The AI sometimes ‘corrects’ what it interprets as imperfections. When we uploaded a 2D drawing of a Middleton Cross, Meshy smoothed and regularized the asymmetries that were part of the original design. For objects where exact historical form matters, review the generated model carefully before printing.
Screenshot of what Meshy.ai produced for a Middleton Cross.
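One way to catch this kind of silent regularization before committing filament is to inspect the exported mesh programmatically. Below is a minimal sketch using the open-source trimesh library: it loads the file (the name is a placeholder), checks that the mesh is watertight and reports its dimensions, then mirrors the model and measures how closely the mirror matches the original. A near-perfect mirror match on an object you know to be asymmetric is a warning sign that the AI has 'corrected' the design; treat this as a rough screen, not a substitute for comparing the model against the source drawing.

```python
import trimesh

# Load the mesh Meshy exported (placeholder file name).
mesh = trimesh.load("middleton_cross.obj", force="mesh")

# Basic sanity checks before slicing and printing.
print("watertight:", mesh.is_watertight)  # non-watertight meshes often fail to slice
print("extents (model units):", mesh.bounding_box.extents)

# Rough asymmetry screen: reflect the mesh across a vertical plane through
# its centroid, then measure how far the mirrored vertices sit from the
# original surface. A mean deviation near zero on a deliberately asymmetric
# object suggests the AI smoothed away the asymmetry.
mirrored = mesh.copy()
mirrored.apply_transform(
    trimesh.transformations.reflection_matrix(point=mesh.centroid, normal=[1, 0, 0])
)
closest, distance, _ = trimesh.proximity.closest_point(mesh, mirrored.vertices)
print("mean mirror deviation:", distance.mean())
```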