Teaching Hamlet in the Humanities Lab
[This is the revised text of a conference paper I gave in a panel on digital humanities teaching at the Renaissance Society of America (RSA) annual meeting in Washington DC on Friday 23 March 2012. Thanks to Diane Jakacki and William R. Bowen for the invitation to attend.]
This is an interim report on English 203, a first-year seminar for 25 students that I am teaching right now. English 203 is a seminar focused on writing skills and research methods for future English majors at the University of Calgary.
In 2012 and again in 2013, I am teaching the course as an introduction to digital methods of reading Shakespeare’s Hamlet – or rather, to digital methods of provoking, testing, and tweaking the students’ hypotheses about the text.
Despite my illustration in this slide, we’ve read the text in print, namely the Arden 3rd edition; we’ve spent six of our 36 meetings discussing the play. Five more were workshop demonstrations of the five text-analysis tools we used in the course –
– namely Wordhoard, TAPoR, WordSeer, Voyeur (Voyant), and MONK. [The list is in the bottom right panel, here.]
Many of our remaining meetings – nearly a quarter of them, in fact – were free-form ‘lab sessions’ in which students did collaborative work with those five tools and blogged about it. [The schedule is in the top right panel, here.]
In this paper, I’ll describe how those sessions and those interim writings changed across three phases.
But first, some rationale. I’m a believer in backward-design principles: you decide what outcomes you want at the end of your course, and work backward to the readings and assignments that meet those outcomes. So I worked out my course-design principles by asking these two questions:
1. What are we teaching when we teach the digital humanities?
2. How can we teach it effectively?
Here’s the what: the lists, visualizations, and other deformations of text that algorithms make possible now. (Each of these images is from my students’ blog entries.)
Texts, and parts of texts, we can classify by genre, period, or any other category; we can posit attributions; we can assign linguistic categories to phrases.
For individual words, we can list them by lemma or part-of-speech; we can map their relations and graph their distributions.
All are algorithmic processes that provoke multiple interpretations, as Stephen Ramsay has argued.
What we are teaching when we teach the digital humanities isn’t the deformation itself; it isn’t the tools that generate them, because those will change with time; it’s the thinking that their deformations enable.
What kind of thinking?
It can confirm hunches.
When we see the growth of first-person-plural pronouns (‘we’) from the beginnings to the ends of comedies, as Anupam Basu shows here, and the decline of ‘I’ in another chart, we recognize that comedies are about collective identities subsuming individual ones.
Algorithms can also assert new readings, or in this case confirm one critic’s counter-intuitive hunch.
When we see a “genre [like comedy] visible on the level of the sentence,” as Jonathan Hope and Mike Witmore have done for Othello (yes, comedy in Othello), we can identify exactly which of its linguistic features (like complaining, or withholding) make it comic. The sentence, not the plot.
And so on. These are the ways of thinking that algorithmic processes enable: count all the pronouns, identify all the features.
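To make the first of those ways of thinking concrete: counting pronouns is something students can do with any of the tools above, but the underlying process is simple enough to sketch directly. The following is a minimal illustration, not any course tool’s actual method; the helper name and the sample lines are my own invention.

```python
import re
from collections import Counter

# First-person pronoun sets (modern spellings only, for illustration).
PLURAL = {"we", "us", "our", "ours"}
SINGULAR = {"i", "me", "my", "mine"}

def pronoun_counts(segment: str) -> dict:
    """Tally first-person singular vs. plural pronouns in a text segment."""
    words = re.findall(r"[a-z']+", segment.lower())
    tally = Counter(words)
    return {
        "plural": sum(tally[w] for w in PLURAL),
        "singular": sum(tally[w] for w in SINGULAR),
    }

# Two invented sample segments standing in for early and late acts.
segments = [
    "I have of late lost all my mirth.",
    "Our revels now are ended; we are such stuff as dreams are made on.",
]
for n, segment in enumerate(segments, 1):
    print(n, pronoun_counts(segment))
```

Run over the acts of a full play in order, counts like these produce exactly the kind of distribution chart Basu’s figures exemplify.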
English 203 has been partly about making sure that “and so on” extends to the next generation of scholars: to enable them to think in the ways that these models exemplify, to grasp the capabilities and argue the implications of text-analysis tools.
Beyond the classroom, ‘algorithmic criticism’ is also the criticism of algorithms. It isn’t just the future of Shakespeare studies, or even just the future of the discipline of literary study; it’s the future, full stop. And much of the present.
Like all good criticism, it digs below the surface – below the blank, serene surface surrounding Google’s search box, for instance. We read and think and shop and live in an algorithmic culture already – as Ted Striphas argues on his blog and in his book on e-publishing.
But back to my initial questions: what and how. As my emphasis on the algorithms suggests, the answers to these questions are intertwined. We are teaching what DH research is grounded on: curiosity, resourcefulness, provisionality. All the qualities that make this research community so welcoming to new immigrants (to speak from experience).
Openness about our own learning through algorithmic processes models this openness for our students. When I blog about my research, and raise it in class, I pose more questions than I answer. I openly tell my students that I rely on their experience with these 5 tools to decide how best to combine them for my own work.
So bringing my research queries into the classroom doesn’t just make my administrators happy; it meets my pedagogical and my research goals.
Nothing makes you more sympathetic to your students than being a student yourself, trying to master new research processes. (Similarly, nothing forces you to understand something like the threat of humiliation tomorrow morning when you teach it.) In the final section of this paper, let me be more concrete and talk about the writing assignments in this course – assignments modelled on the networked knowledge of this research field.
The 25 students in English 203 are each writing a series of blog posts that follow three phases:
- In Phase 1, students are assigned to a team working on one of the five tools. They work together to figure it out, to use it on a common text. And then their posts describe its capabilities and limitations.
- In Phase 2, students are redistributed into five new teams, each team comprising one expert in each of those five tools, and each team working on a different act of Hamlet. When they collaborate now, they combine tools to make arguments about their team’s act of the play.
(So they’ve moved from descriptions, as they learned their tools in Phase 1, to arguments, as they apply and combine them in Phase 2.) In other words, their Phase 1 teams have been disbanded, and their members redistributed to seed each Phase 2 team with tool-specific knowledge.
But students can still draw on the expertise and support of their teammates from Phase 1. And that’s because of the categories they assign to each of their posts. [See the upper right box in this slide.]
So Phase 1 is about tools, Phase 2 about acts and tools.
- The letter-categories (like A: Wordhoard, halfway down) are for posts about each of the five tools, in Phase 1.
- The number-categories (like 1: Act 1, at the top) are for posts about each of the five acts, in Phase 2. Students also add a letter-category for their tool, or more than one if they’re combining tools.
Categories let the students reshuffle the posts to see what their current and former teammates are thinking about, what problems they’ve resolved, what problems remain.
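The reshuffling that categories make possible can be sketched in a few lines. This is only an illustrative model of the scheme described above – the post titles, the letter assignments, and the helper name are all invented; the actual course blog simply used its platform’s built-in category filters.

```python
# Each post carries a number-category for its act (Phase 2) and one or
# more letter-categories for the tools it uses (Phase 1). Sample data
# is invented for illustration.
posts = [
    {"title": "Ghost scenes in Voyant", "cats": {"1", "D"}},
    {"title": "Lemma counts for Act 1", "cats": {"1", "A"}},
    {"title": "Combining tools on Act 3", "cats": {"3", "A", "C"}},
]

def with_category(posts, *cats):
    """Return every post tagged with all of the requested categories."""
    wanted = set(cats)
    return [p for p in posts if wanted <= p["cats"]]

# A former Phase 1 teammate reshuffles by tool (here "A")...
print([p["title"] for p in with_category(posts, "A")])
# ...while a Phase 2 teammate reshuffles by act (here "1").
print([p["title"] for p in with_category(posts, "1")])
```

The same filtering, applied from either direction, is what lets a student keep one foot in each team.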
And they’re required to comment on those posts: to offer support, and answers to questions; and to pose new ones. [See the lower right box in this slide.]
The blog posts are based on the networked knowledge model, which is where the how and the what really intersect. Networked knowledge is the new footnote, but a more immediate one; someday it’ll be semantic. It’s the kind of footnote that favours digital resources, but also sends readers directly to them.
But it’s also more, somehow, than the citation of sources: it’s the meshing of diverse disciplines.
Digital humanists aren’t the first inter-disciplinarians – those were the Renaissance humanists. And there’s nothing intrinsically novel in relying on other disciplines. My future research on encoding Shakespeare has sent me to programmers and computational linguists – but my last project on exemplarity and anecdotes similarly sent me to the cultural historians.
But digital humanists are the first to do work that was, until now, prohibitively difficult, if not quite impossible. Now, our laptops can run complex algorithms on texts that were digitized only, say, last week. Now, we meet and confer in virtual spaces that are like the RSA, but always-on – spaces where we can be provisional and consultative about the processes of our work, its interim stages, our protocols and questions.
Writing in the Humanities Lab aspires to this networked knowledge model.
In Phases 1 and 2 that happens locally, among the 25 students. But in Phase 3, starting in April 2012, it happens in the wider community.
The students’ ‘final papers’ (and both are problematic terms) take the form of extended blog posts. These are self-reflexive, cumulative exercises, on broad questions – like whether the tools are for generating or testing hypotheses.
While reflecting on their learning process, these final posts also make substantive use of one post in the ‘Editors’ Choice’ category of DH Now. Each student will choose a different post to cite, adding it to their network; they claim their post by writing a comment on it.
With time, my students might earn experience and influence in the DH community; but as members already, I hope they will always think differently about the culture of research inquiry and dissemination.
There are some problems with the course as I’ve designed it. For instance, our focus on a single scene in Phase 1 and a single act in Phase 2 hampers the students by limiting their range; but I haven’t figured out how to introduce them properly both to these methods and to a wider range of texts in a 13-week term.
I freely confess to the problems I’ve noticed; I’ve set up an editable Google Doc to gather anonymous feedback from the students about further problems; and I’m optimistic that [in the comments field below] you’ll offer me even more.