Talk: Computational Media Design + DH

This is a talk I gave on October 27th at the University of Calgary, “The Interface between CMD and Digital Humanities,” about the relationships between the Computational Media Design program and the digital humanities on campus.

Here’s my abstract:

The digital humanities is a range of computational methods for analyzing human-created texts and other artifacts. Digital humanists at the University of Calgary are visualizing ancient settlements, topic-modelling sci-fi archives, and recognizing language patterns in Shakespeare. This talk is about the interface between this field and the CMD program, with an eye to our future collaborations.

Here’s the invitation from CMD Director John Aycock:

As CMD nears a decade from its first inklings to its existence as a fully-fledged program now, it’s a good time for reflection. In particular, where do the boundaries of CMD lie? People in the digital humanities are doing exciting work involving both computation and media of various forms, and Michael Ullyot will be helping start this discussion by telling us about the digital humanities and how they might fit in with CMD.

And finally, here’s what I had to say:


I’ll cover three topics today.

  1. The first is the domain of digital-humanities research and teaching, as I understand and practice it.
  2. The second is a broad description of what’s happening in these realms here at the University of Calgary, both within the Faculty of Arts and in its members’ collaborative work with Libraries and Cultural Resources (LCR) and with Computer Science (CS).
  3. The third is an invitation for digital humanists and the students and faculty of the Computational Media Design (CMD) program to work together in the future. What does each group stand to gain from our partnership?


The Digital Humanities (DH) field encompasses a range of methods and practices, and “is constantly defining and redefining itself.” The broad definition of the Alliance of Digital Humanities Organizations, an umbrella group, is “humanists engaged in digital and computer-assisted research, teaching, creation, dissemination, and beyond.”

That work tends to fall into two categories: spatial and textual. The spatial humanities includes virtual worlds and other visualizations of space, including performances in virtual reality — or real-world applications like geotagged archives or augmented-reality apps to see (for instance) historical structures overlaying a real landscape.

The textual humanities, on the other hand, is concerned with the history and future of text: visualization, interpretation, preservation, and so on. For instance, it includes the marriage of traditional close-reading methods (one book at a time) with “distant reading” or macroanalysis of texts (one library at a time). That enables you to address cultural history on a large scale — using text-analysis and text-visualization tools. Or in more general terms, it enables you to study texts in a more categorical, less exemplary and selective way. Instead of the 50–70 books (say) that a well-read expert can analyze simultaneously, it enables you to run programs across the 70,000 books printed in English before 1700 (say).
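At its simplest, distant reading starts with counting: tallying word frequencies across an entire corpus rather than one book at a time. Here is a minimal sketch using only the Python standard library; the corpus directory and file layout are assumptions for illustration, not any particular project’s setup.

```python
from collections import Counter
from pathlib import Path
import re

def corpus_word_counts(corpus_dir: str) -> Counter:
    """Tally word frequencies across every .txt file in a corpus directory."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        # Treat runs of letters (plus apostrophes) as words.
        counts.update(re.findall(r"[a-z']+", text))
    return counts
```

A real macroanalysis would add tokenization rules for early modern spelling, stopword handling, and normalization, but the principle is the same: the computer reads categorically what no human could read selectively.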

But there’s a third feature of DH that distinguishes it from the analog humanities. Take our well-trained expert, for instance. He might publish a book after five years of thoughtful writing and revision and peer review, and that book will represent the culmination of his research process. But the digital humanist in the next office will publish her dataset online, write 14 blog posts on her workflow, and post 173 tweets (#digitalhumanities) with interim discoveries and queries to like-minded colleagues before her book appears in five years.

This is, in short, a methodologically open field — not because DHers are publicity-seekers but because our work is profoundly, necessarily cross-disciplinary. And we’re less practiced, more consultative, in those other fields. My Ph.D. in 17th-century English Literature in 2005 never prepared me to write a data management plan, or to learn TEI-compliant tagsets for my corpus, or to debug a Python script. For that, I train through the network of THATCamps and read the latest news on DHNow and post questions to DHAnswers.


And I talk to people at my institution, those in the humanities and social sciences who use these methods — and the data and information scientists who help me wrangle and regularize and process my data.

I can’t do justice to the many projects here at the University of Calgary, so I won’t try. Instead of pinpointing specific projects in a show-and-tell exercise, I’ll draw a shape of the DH field as I know it to exist here. I’m trying to focus on common methods rather than list a series of project domains and outcomes. This is the sense of what’s happening that I’ve gleaned through a few years of convening and listening to the DH community on this campus. (If you’re reading this and I’ve missed you, just drop me a line.)

One group that meets regularly is the Text-Analysis Interest Group, convened by me and Stephen Childs. We talk about common methods in text and computation: natural-language processing, corpus linguistics, topic modeling, and how to run a sentiment analysis on social-network or other data. The group includes members from the Faculty of Arts, from LCR (Spatial & Numeric Data Services), and from CS. Dennis Storoshenko, for example, works on the natural-language toolkit for text corpora.
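To make one of those methods concrete: a lexicon-based sentiment analysis scores a text by the words it contains. The sketch below uses a tiny hand-made lexicon purely for illustration; real projects would use a full sentiment lexicon (such as VADER’s, via NLTK) rather than this toy one.

```python
import re

# Toy lexicon: each word contributes +1 or -1 to the text's score.
# (Illustrative only -- a real lexicon holds thousands of weighted entries.)
LEXICON = {"good": 1, "great": 1, "love": 1,
           "bad": -1, "terrible": -1, "hate": -1}

def sentiment(text: str) -> str:
    """Label a text positive, negative, or neutral by summed word scores."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(LEXICON.get(w, 0) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

The same scaffolding, swapped onto a serious lexicon and a social-media corpus, is the shape of the sentiment work the group discusses.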

What kinds of work do researchers in the Faculty of Arts use all of these skills to do, you ask? There’s my own Augmented Criticism Lab, which uses pattern recognition of rhetorical figures to ‘augment’ or enhance literary criticism — namely the stage of evidence-gathering that comes before analysis. For example, we have a computer script that uses regular expressions to search a large text-corpus for figures of repetition and variation.
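One figure of repetition, epizeuxis (the immediate repetition of a word, as in Lear’s “never, never, never”), lends itself to a backreference pattern. This is a minimal sketch of the technique, not the lab’s actual script:

```python
import re

# Epizeuxis: a word immediately repeated, e.g. "never, never".
# \b(\w+)\b captures a word; [\s,;:]+ allows intervening punctuation;
# the backreference \1 requires the same word to recur.
EPIZEUXIS = re.compile(r"\b(\w+)\b[\s,;:]+\1\b", re.IGNORECASE)

def find_epizeuxis(text: str) -> list[str]:
    """Return the words that are immediately repeated in the text."""
    return [m.group(1) for m in EPIZEUXIS.finditer(text)]
```

Each figure gets its own pattern; the payoff is a corpus-wide inventory of candidate passages that a human critic then interprets.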

Then there are my English colleagues. Stefania Forlini’s project, “The Stuff of Science Fiction: An Experiment in Literary History” (in collaboration with Uta Hinrichs at the University of St Andrews), is running visualization experiments on the physical forms and verbal contents of a collection of speculative fiction from popular periodicals. Murray McGillivray has published a digital facsimile and edition of medieval poetry and has begun work on another edition, of a collection of 29 saints’ lives. (He also pioneered the online teaching of Old English for the department.) Karen Bourrier, hired explicitly as a scholar of Digital Humanities, has a peer-reviewed database of Victorian images and texts on mental and physical disability.

We teach graduate courses in these fields, like Karen Bourrier’s “Digitizing Victorian Women Writers” last year, whose students used Omeka to curate and publish data. I’ve supervised two graduate projects in 2016: one to create a digital edition of James Joyce, and another to visualize the metadata on a subgenre of early modern books — and next year I’ll teach a graduate course on The Future of Reading, which will use text-analysis tools to generate critical interpretations. One of its questions is whether we need to squeeze nuanced, qualitative texts through a binary, quantifiable filter — or whether these tools can, rather, find evidence of things that were always true about these texts, just unnoticed by humans’ linear readings.

We also teach undergraduate courses in the DH field. My colleagues Jason Wiens and Karen Bourrier encourage students to look past the expository essay to digitize materials (poems, novels, diaries, letters, maps) from local archives for online exhibits using Omeka. There have been honours theses in English on hypertext fiction and digital editions. And this year we’ll offer a 400-level course in Digital Humanities for the first time.

As for local infrastructure, the University of Calgary Library has broken digital ground on a virtual home for digital projects, with Lab NEXT: where scholars can plan projects, find and analyze data, distribute data, and leverage local resources for computing and visualization.


So what are the next steps? What might 2017 bring for this partnership?

In his opening remarks today, John Aycock asked us to think about where the boundaries of CMD lie, and what you might gain by including the digital humanists’ research methods and teaching goals.

In a word, I think the answer is textual expertise. I understand the CMD’s boundaries to encompass everything at the intersection of computation and media studies — and mostly to focus on spatial, visual, and performative outcomes. Its projects often include text, as in The Bohemian Bookshelf, but text is rarely their explicit focus.

Here are four provisional pathways for our interests to combine:

  1. Common research methods. We are all necessarily cross-disciplinary, bridging the divide between computer science and the fine arts, the humanities, and the social sciences. I only propose to enhance the presence of the latter two in CMD. We both gain with increased conversations about data curation, about project management, and about programming and data science for humanities and social science researchers.
  2. Common research outcomes like text discovery and visualization. The Bohemian Bookshelf, as I mentioned, is a great example of the ways CMD is already working with texts in comparable ways to the macroanalysis or distant reading of text corpora in DH.
  3. Common undergraduate teaching and learning goals. We’re both interested in moving past digital literacy to new methods for students to curate data and open inquiry. For example, D’Arcy Norman is working on a Ph.D. project to measure which learning activities promote different outcomes for students — using hardware and software to make computational, descriptive field notes for qualitative observation research.
  4. Common goals like a rethink of graduate education, particularly in the Arts. We’re both interested in training students to think beyond the academy, beyond the paths that most faculty followed, to careers in media, design, and publication. We both want to foster skills like the rhetorical acumen to interpret and shape language, to interpret the kinds of sentiments that social media can capture. We both want to leverage ancient traditions like representation and textual analysis for understanding present and future media.

So I propose that the CMD program expand its affiliations with the Faculty of Arts to include the humanities and social science disciplines and departments where DH is most active. I propose that if you let the digital humanists in, we both will benefit. CMD will expand its expertise in text technologies (in digitization, curation, and text-analysis) and will shape the physical and intellectual infrastructure that enables this work on campus. And we digital humanists will partner with an established, formalized program of research and graduate teaching whose media domain is intermixed with ours.

If the digital humanities is successful, it will disappear — into the ordinary methods of humanities married to computer science, in a cross-disciplinary interface that CMD does better than anyone. This is a logical and natural next step, for both of our programs.
