Is encoding text an act of literary interpretation, or of pattern recognition? Either way, is it quantifiable? And if so, can a computer do it as readily as a human reader?
Those are just a few of my questions after a week-long course in text encoding at the Digital Humanities Summer Institute 2011, with the wonderful Julia Flanders from the Brown University Women Writers Project, Doug Knox from the Newberry Library, and Melanie Chernyk from the Electronic Textual Cultures Lab at the University of Victoria. We learned how to encode texts in TEI. That means taking texts that look like this —
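For readers who haven't seen TEI before, here is a minimal sketch of what such markup looks like. This is a hypothetical fragment I've drawn up for illustration, not an actual text from the course; the title and verse lines are invented, but the element names (`teiHeader`, `lg`, `l`) come from the TEI Guidelines:

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>A sample poem</title>
      </titleStmt>
      <publicationStmt>
        <p>Encoded for illustration only.</p>
      </publicationStmt>
      <sourceDesc>
        <p>A born-digital example, not transcribed from any source.</p>
      </sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <!-- lg = "line group", here marking a stanza; l = a single verse line -->
      <lg type="stanza">
        <l>Shall I compare thee to a summer's day?</l>
        <l>Thou art more lovely and more temperate:</l>
      </lg>
    </body>
  </text>
</TEI>
```

Every interpretive decision — is this a stanza? a paragraph? a heading? — gets made explicit in the tags, which is exactly where encoding starts to feel like an act of reading.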
What are digital humanists doing now with early modern books and manuscripts? Ann M. Blair recently argued that medieval and early modern systems of “managing textual information in an era of exploding publications” are precedents for modern information management systems. Do early reference books, annotations, and compilations inform, anticipate, or otherwise influence our computer-assisted thinking?