Michael Ullyot · Ideas + Materials for Research + Teaching · http://ullyot.ucalgaryblogs.ca

Teaching with YouTube (10 April 2018)

Apparently I’m in the 20% of YouTube’s 1.3 billion users who don’t watch it regularly. But I am among the 50 million who upload content to it. Since 2010 I’ve produced just five videos, whose collective 84 minutes is a drop in the ocean compared to the 300 hours of video uploaded each minute. It would take you 60,000 years to watch YouTube’s entire back catalogue.

You’d better get started! Here’s a humble suggestion: my channel has this narrated slideshow on the Elizabethan stage, which gets about 270 views per year. (Woo!) Back in 2010 I was teaching courses on Shakespeare and other playwrights of his era, so I used this recording rather than repeat myself. None of those courses had an enrolment even close to 270.

My successful experiment was inspired by the innovator and provocateur Cathy Davidson, who later said that “If educators can be replaced by a computer screen, we should be.” She’s right. If all we do in the classroom is repeat what we’ve said before, we’re doing our job wrong; we ought to do what we can only do when we gather people in a room. So my students and I work through problems and questions together, informed by our shared knowledge — thanks to their assigned viewing, I should say — of Elizabethan theatre.

This term, I’ve made three more videos aimed at a wider audience: students trying to figure out how to read and write about literature like a professor.

Why? Whether or not they’re English majors, students struggle to understand just what’s expected of them in English courses. I recently read Literary Learning, and I’m teaching with Digging into Literature: two books that help English professors identify and teach the essential skills of our discipline: reading texts, and quoting them in essays.

Those are the topics of my second and third videos; the first is on avoiding grammatical mistakes, because we could all use a refresher course in writing correctly.

You can find all three videos on my channel — and if you’re keen, linger to learn a little about Shakespeare’s theatre.

A journey of 60,000 years begins with an 84-minute step.

Text Accordians (16 February 2018)

I write, with my keyboard, all day. Every day. E-mails, lecture notes, grant applications, status updates, first drafts, second drafts, slideshow bullets, blog posts. To paraphrase the great Johnny Cash, I type everywhere, man.

And along the way, I find I quite often need to write the same words and numbers. I close every e-mail the same jaunty way (“yours, Michael”); I give students the same directions to my office; I repeat the same writing advice in my grading; my phone number hasn’t changed in a decade.

One day someone in my department mentioned a tool that saves time on these repeated snippets: TextExpander. It runs in the background and watches what I’m typing. Whenever I type an abbreviation, TextExpander replaces it with the text I’ve assigned to it. For example, when I type :mu it replaces that with Michael Ullyot – which, unsurprisingly, I type often. When a student requests a reference letter, I reply with :rf to send them a list of what I need. And so on. My abbreviations all start with a : so I don’t trigger them when I’m typing normally, since in ordinary prose I never follow a colon with anything but a space.
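The core trick is easy to sketch in a few lines of JavaScript. This is my toy illustration, not TextExpander’s actual code, and the :rf expansion text below is invented:

```javascript
// A toy version of snippet expansion -- my illustration, not
// TextExpander's implementation. The :rf text is invented.
const snippets = {
  ":mu": "Michael Ullyot",
  ":rf": "Please send me your CV, transcript, and the job posting.",
};

// Replace every registered abbreviation with its expansion.
function expand(text) {
  return Object.entries(snippets).reduce(
    (out, [abbr, full]) => out.split(abbr).join(full),
    text
  );
}
```

The real thing expands keystroke-by-keystroke as you type, of course; this only shows the substitution.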

I also use some of TextExpander’s fancy features, beyond these copy-paste tricks. If I want today’s date, I type :da and get 16 February 2018; or yesterday’s (:dy > 2018–02–15); or next Friday’s (:nf > 23 February). Those dates are in different formats, because I set them that way.
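The “next Friday” snippet is just calendar arithmetic. Here’s a sketch of the calculation (the pretty formatting is TextExpander’s job, so I return a plain Date):

```javascript
// Sketch of the "next Friday" (:nf) calculation: days until the
// coming Friday, always at least one day in the future.
function nextFriday(from = new Date()) {
  const FRIDAY = 5; // Date.getDay(): 0 = Sunday ... 6 = Saturday
  const daysAhead = ((FRIDAY - from.getDay()) + 7) % 7 || 7;
  const result = new Date(from);
  result.setDate(from.getDate() + daysAhead);
  return result;
}
```

From Friday, 16 February 2018, this returns 23 February, matching the snippet output above.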

In the last 7 years, as of 2018–02–16 (see how I did that?) I’ve expanded 34,009 snippets, which saved me from typing 754,990 characters, which would have taken me 31.46 hours of my life.
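(That hours figure rests on an assumed typing speed. At 400 characters per minute — my assumption, roughly 80 words per minute — the arithmetic checks out:)

```javascript
// Back-of-envelope check of the stats above. The 400 characters/minute
// typing speed is my assumption (roughly 80 words per minute).
const charsSaved = 754990;
const charsPerMinute = 400;
const hoursSaved = charsSaved / (charsPerMinute * 60); // about 31.46
```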

But this post isn’t (just) my enthusiastic product review. There’s something I’ve been meaning to add to my TextExpanding skillz, and today I finally did it: transforming texts on the clipboard, using JavaScript. Wait: before you stop reading, hear me out.

If you’ve ever had to retype something in upper case, or to sort a list alphabetically, or to underline a word or a line for emphasis, you shouldn’t. Computers are really good at simple transformations, and your time is better spent on higher-level work.

Take the opening lines of my favourite Johnny Cash song again, “I’ve Been Everywhere.” I’ll copy these four lines to my clipboard:

I’ve been everywhere, man
I’ve been everywhere, man
Crossed the deserts bare, man
I’ve breathed the mountain air, man

Then I just type :jcu and get this:

I’VE BEEN EVERYWHERE, MAN
I’VE BEEN EVERYWHERE, MAN
CROSSED THE DESERTS BARE, MAN
I’VE BREATHED THE MOUNTAIN AIR, MAN

Or :jcl for lower-case:

i’ve been everywhere, man
i’ve been everywhere, man
crossed the deserts bare, man
i’ve breathed the mountain air, man

Or :jct for title-case:

I’ve Been Everywhere, Man
I’ve Been Everywhere, Man
Crossed The Deserts Bare, Man
I’ve Breathed The Mountain Air, Man
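All three transformations amount to one-liners. This sketch is mine, not Thought Asylum’s actual snippet code:

```javascript
// The three case transformations, sketched; the real snippets run
// inside TextExpander on the clipboard contents.
const upper = (s) => s.toUpperCase();
const lower = (s) => s.toLowerCase();
const title = (s) =>
  s.replace(/\S+/g, (w) => w[0].toUpperCase() + w.slice(1).toLowerCase());
```

(A fussier title-case routine would leave small words like “the” alone; as the lyrics above show, these snippets capitalize every word.)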

Which is great, if I need to do short text-transformations. There’s another to double-space between sentences, and you can do whole paragraphs in an instant. I won’t demonstrate it here, but I’ll show you two more tricks.

Say I need to alphabetize a list, like this one of the first few places Johnny’s been:


Type :jas and hey presto, they’re alphabetized:


Or :jar for the reverse order:


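The sorting snippets are just as small — again, my sketch rather than the real code:

```javascript
// Alphabetize (:jas) and reverse-alphabetize (:jar), sketched:
// split the clipboard text into lines, sort, and rejoin.
// localeCompare keeps the ordering sensible for accented characters.
const alphabetize = (text) =>
  text.split("\n").sort((a, b) => a.localeCompare(b)).join("\n");
const reverseAlphabetize = (text) =>
  alphabetize(text).split("\n").reverse().join("\n");
```

Feed it a few of Johnny’s stops and they come back in order.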
Finally, what if I want to underline a sentence? Take this example:

I’ve been everywhere, man

Copy it to my clipboard, and type :jul~ to get this:

I’ve been everywhere, man
~~~~~~~~~~~~~~~~~~~~~~~~~

Or type :jul+ to get this:

I’ve been everywhere, man
+++++++++++++++++++++++++
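The underline trick just repeats a character to the length of each line — assuming, as I do in this sketch, that :jul~ and :jul+ differ only in the character they repeat:

```javascript
// Underline each line of text with a repeated character, as (I
// assume) the :jul~ and :jul+ snippets do.
function underline(text, char) {
  return text
    .split("\n")
    .map((line) => line + "\n" + char.repeat(line.length))
    .join("\n");
}
```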

You get the picture. TextExpander takes what’s on the clipboard, changes the text, and pastes the output. Or it checks the system calendar and outputs the date of next Friday, or next week, or yesterday. If you’re doing these calculations and transformations yourself, or even if you’re just constantly typing the same texts yourself, stop it.

To close, a credit roll. I didn’t write these JavaScript snippets myself; they’re from Thought Asylum. The suggestion to try TextExpander (for grading) came originally from Harry Vandervlist, my colleague in the Department of English at the University of Calgary. And finally, spend some of your newfound free time watching this Google Map animation of Johnny Cash’s travels chronicled in the song. Happy trails.

Roald Dahl’s Stories for Adults (26 December 2017)

“I’ll bet you think you know this story. You don’t. The real one’s much more gory.” Roald Dahl wrote this about the tale of Cinderella in Revolting Rhymes, but it also applies to the stories he wrote for adults from 1944 to 1988. “Nobody in their right mind would want to be a character in a Roald Dahl short story,” writes Anthony Horowitz (2.x). This author of beloved children’s books was known as ‘the master of the macabre’ for the twisted imagination he reveals in stories abounding with cruelty, lust, madness, and murder.

In English 201 (Winter 2018) we will read twenty of those stories, and analyze them using the methods outlined in a supporting textbook.

By the end of this course students will be able to:

  • Make original and persuasive arguments about literature.
  • Use a variety of interpretative strategies for analyzing literary texts, including close readings.
  • Organize a complex argument about a text with a clear thesis statement, focused topic sentences, and fully interpreted quotations.
  • Document their quotations using Modern Language Association (MLA) citation conventions.
  • Reflect critically on their reading and writing processes.

Required texts:

  • Roald Dahl, The Complete Short Stories, Volume 1 (1944–1953) and Volume 2 (1954–1988)
  • Joanna Wolfe and Laura Wilder, Digging into Literature: Strategies for Reading, Analysis, and Writing

Course Outline

Download PDF

TEI for Close-Readings (10 November 2017)

This is the paper that I delivered on 13 November 2017 at the Text Encoding Initiative (TEI) annual meeting at the University of Victoria (British Columbia). Here’s the PDF of my slideshow, whose images are interspersed with my script below.

This talk is prescriptive and theoretical, rather than descriptive and practical. I’m going to advocate for the benefits of implementing TEI to standardize students’ markup of close-reading terms, but I haven’t actually done this. The closest I’ve come is using Google Docs, and then LitGenius, to compile student annotations, but not to channel them through a standard like TEI.

I know I can do better, both for better learning outcomes and better research outcomes. So I’m here to present a plan, one that needs your advice and guidance to realize. (This past weekend I’ve taken two workshops and had many conversations about implementing different parts of this plan: first with Janelle Jenstad and Joey Takeda, and then with Martin Holmes.)

So here it is: a framework to standardize TEI markup of the terms that scholars use when close-reading texts.

Why would we want to do that? There are two main purposes:

  1. first is the pedagogical purpose of training readers, namely students, to annotate texts with interpretive metadata at the word level (often, multi-word level);
  2. and second is the research purpose of building a training set for supervised machine learning systems (and someday, unsupervised machine learning systems) to recognize those text features that we human readers can find more or less naturally.

Now, I recognize I’ve just made quite a leap: from trusting novice readers (#1), to trusting robots to automate me out of my job as a literary critic (#2).

I’m not advocating either of those things, not without a lot of human experts to examine and verify what the students and the machines annotate:

  • this system will need humans to verify students’ metadata (so they don’t mis-label terms — mistaking synecdoche for metonymy, say); this is crucial because errors in that metadata will propagate in the next stage;
  • and then, in that next stage, this system will need humans to guide the machine learning process, to correct their errors and confirm their results with each iteration of that process.

But I’ve still not specified the ultimate payoff. Why teach a machine to close-read texts? (Or more precisely, to encode texts with close-reading terms that mimic human annotations?)

I hear the humanists protesting: “Is nothing sacred in your techno-deterministic future? Doesn’t close reading make us human?”

It does, and that won’t change. All that will change is what we will read.

Right now, literary critics are prisoners of context. We take a book from the shelf or we load a play from the ISE library, we read it sequentially, and we make an argument from the patterns we recognize in it. We choose those books based on their cultural capital, maybe their authorship or their canonical status or their recommendation by authorities we trust.

And our arguments are brilliant in their particularity: we can understand Shakespeare’s rhetoric because readers from Miriam Joseph to Frank Kermode to Jonathan Hope have described its forms in detail.

But we don’t understand rhetoric, or metaphor or personification or symbol, in trans-contextual ways. We interpret these phenomena as particularities assembled by a given author or text, but not as abstracted, trans-textual phenomena.

Whether or not you agree with that methodological goal, I argue that we should have the option. Or at least, we should have the ability to see how Shakespeare’s rhetoric compares with other writers’ rhetoric. And our choice of those writers shouldn’t be arbitrary; it should be wide-ranging and it should be as objective as possible.

This is what my collaborators and I call ‘augmented criticism’: taking what critics do naturally, noting textual features, and expanding our grasp of comparable features.

Is nothing sacred? Of course: time-honoured critical habits are sacred. We read to gather examples, and to make arguments from them. This extends our reach to more examples.

We uphold those habits through practice, but also by teaching them to the next generation.

No matter how often we illustrate its terms and tropes, there’s no better way to teach close-reading skills than making students try it: reading with a pen in their hand, noting local features and identifying broader patterns.

Their individual results will vary, but in the aggregate they will form a working consensus about the patterns and variations in a text that seem to reveal the writer’s deliberate choices, or have the strongest effect on readers.

Here is a set of text features, grouped into four categories. (This is a subset of a much longer list, available here.)

For TEI encoders, the categories aren’t too important; but for readers, the categories help distinguish between different modes of address, different mental habits or (you might say) filters that they bring to a text each time they read it:

  • structural terms for the relationships between words, mostly in poetry;
  • linguistic terms for a text’s surface-level features;
  • semantic terms for more connotative features, below the surface; and finally
  • cultural terms for broader features, some of them pointing outside the text.

By the way, I’m using the word ‘text’ as if it applies equally to all forms and genres. But a glance through this list will uncover my biases and teaching habits: I developed it as a teaching tool for students reading Shakespeare’s sonnets (and it’s actually longer than this, but I’m already cramming too much text on a slide).

My list of terms and categories is provisional and incomplete; my aim is only to start a taxonomy of interpretive tags.

Now, some of you will protest: how can we encode something so interpretive as ‘tone’ or ‘irony’ or ‘paradox’? And even if we could agree that it obtains somewhere in a text, exactly which words or characters would be contained in these tags?

I don’t have an answer, but if a text has a given tone it can only exist at the level of words (what other level is there?); so agreeing on which words is an interesting, but secondary, problem.

Okay. Forgive me for this rough division: but if we imagine that every term is somewhere between objective and subjective, then let’s start (at least) by encoding more objective features, a few of which are here on the left.

  • A simile is a metaphor using the word “like” or “as”, so “my love is like a rose” is undeniably a simile.
  • Whereas a metaphor (“my love is a rose”) is much subtler: it only requires a writer to yoke together two unconventional images.
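That asymmetry is easy to see in code. Here’s a deliberately naive sketch, from the definition above; note that no comparably crude surface test exists for metaphor:

```javascript
// A deliberately naive simile test, from the definition above: flag
// comparisons using "like" or "as". Crude -- it will also flag
// phrases like "as soon as" -- but metaphor has no such surface cue.
const looksLikeSimile = (clause) => /\b(like|as)\b\s+\w+/i.test(clause);
```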

Incidentally, it’s really tempting to think that if we fed enough metaphors into a machine we could ‘teach’ it through juxtapositions of synonyms (or something) to detect metaphors automatically. Maybe we could, but the problem with unconventional devices is that they’re, well, unconventional: there’s no universal formula for something that’s by nature anti-formulaic.

And if you think that challenges my second purpose, machine learning, you’re right. It’s the core problem with algorithmic criticism: texts are slippery.

Okay, then: so start with the low-hanging fruit. Set aside your irony and symbol tags and start with something like enjambment: that is, a line of poetry that flows over the line-end barrier into the next line. Or start with repetition: words repeated, sometimes with variation, for effect. Surely these are features we can agree on, right?

Let’s look briefly at two categories that seem objective, and usually behave conventionally: rhetorical figures, and rhyme.

My work for the past few years has been automating the detection of rhetorical figures: those repetitions and variations of diction and syntax that lodge themselves in your memory, that sound deliberate and purposeful, that are beautiful and compelling.

Consider chiasmus, the inverted repetition of two words or ideas, AB|BA:

  • “Fair is foul, and foul is fair.”
  • “Ask not what your country can do for you, but what you can do for your country.”

Or gradatio, the sequential chain of words at the beginnings and ends of clauses, AB|BC|CD:

  • “Pleasure might cause her read, reading might make her know, knowledge might pity win, and pity grace obtain.”
  • Or, more simply: “She swallowed the bird to catch the spider, She swallowed the spider to catch the fly.”

We can find these readily enough; you can read my other posts on how we did that.

What about rhyme? Is it objective or subjective? Does it behave conventionally?

Rhyme isn’t a simple binary; there are eye-rhymes and sound-rhymes, and pronunciations change over time. I think of the final couplet from Shakespeare’s Sonnet 116, rhyming “proved” and “loved”: in the Elizabethan Original Pronunciation those words audibly rhymed. So there are degrees of rhyme certainty.

Here’s a more recent example, from Module 4 of “TEI by Example.” I’ve added its rhyme scheme, ABAB CDCD EFG EFG.

And here is how TEI by Example advises we encode that scheme: as the value of the rhyme attribute in the line-group element; and for good measure, as the value of the label attribute in the rhyme element, around each line.
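A sketch of what that looks like, using the opening quatrain of Shakespeare’s Sonnet 18 as illustrative text. (I’ve wrapped just the rhyming words in the rhyme element, which is my reading of TEI by Example’s advice; the exact placement is my assumption.)

```xml
<!-- Sketch after TEI by Example, Module 4: the scheme as the value
     of @rhyme on the line group, and @label on each rhyme element. -->
<lg type="quatrain" rhyme="abab">
  <l>Shall I compare thee to a summer's <rhyme label="a">day</rhyme>?</l>
  <l>Thou art more lovely and more <rhyme label="b">temperate</rhyme>:</l>
  <l>Rough winds do shake the darling buds of <rhyme label="a">May</rhyme>,</l>
  <l>And summer's lease hath all too short a <rhyme label="b">date</rhyme>:</l>
</lg>
```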

Let’s return to basics: reading and writing. I can’t interpret anything I read without annotating it: marginalia is the original markup.

This is my copy of Shakespeare’s Henry V that I annotated for my students, to demonstrate my habits of close-reading.

I used this to write a short model essay on the same passage (which was harder to write than I remembered!), and then assigned my students other passages in the play.

This is just one way to demystify close-reading. Others from disciplinary guides (like Wolfe and Wilder’s Digging into Literature) include the think-aloud, whereby you record yourself doing the thinking behind these annotations.

Another is to aggregate students’ annotations. I’ve tried the Lit Genius model, a web interface designed for music lyrics (as you can see: I promise this is the first time I’ve used Taylor Swift in a conference paper!); but it can easily be repurposed with an educator account.

The advantage is that it’s a well-designed web interface; but the catch is that you lock the annotations into their hosted system, and they’re not exportable; nor is their encoding transparent; and the functionality is limited. You can’t standardize the annotations, or toggle them by category, for instance.

And then there’s XML. Far more customizable, as we well know; but far less smooth and serene than the Taylor Swift interface.

With a custom TEI schema, and an expanded and repurposed tagset, I can compile my students’ annotations of a common text’s features in order to compare and verify them. TEI standards will ensure they’re interoperable with those beyond my classes, eventually.

That helps me address a question that’s come up a few times at this conference: when do you use TEI rather than a home-cooked markup language?

My answer: Whenever your learning or research outcomes require the rigour of standardization, either across the classroom or across the larger field.

So what combination of TEI elements, attributes, and values will get us there? This is where I need your advice most.

I learned this weekend that stand-off markup will be necessary because of overlapping hierarchies: because you’ll have (say) repetitions that spill over the edges of rhetorical figures; and enjambment elements that overlap with rhymes.

This is an instance of chiasmus (the rhetorical figure AB|BA) in Henry V: first in the text, and second in the TEI.

I’ve used an element from the TEI’s core module, with the type attribute’s value naming the figure and its number in the text; I’ve wrapped that around the self-closing span element from the analysis module. The values of its two attributes, from and to, point to the beginning and end of that red line in the text; you may have noticed in the last image that each word had a unique xml:id.
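Since the slide itself isn’t reproduced here, a hypothetical reconstruction of that markup, substituting the Macbeth line quoted earlier; the choice of seg as the wrapping element is my assumption:

```xml
<!-- A hypothetical reconstruction of the markup described above;
     seg is my guess at the wrapping element from the core module. -->
<l>
  <w xml:id="w1">Fair</w> <w xml:id="w2">is</w> <w xml:id="w3">foul</w>,
  and <w xml:id="w4">foul</w> <w xml:id="w5">is</w> <w xml:id="w6">fair</w>.
</l>
<seg type="chiasmus-1">
  <span from="#w1" to="#w6"/>
</seg>
```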

I can’t say this is the best system. There’s also the interp element in the analysis module, and in the tei module there’s something called an att.interpLike class (which I don’t understand) that offers an attribute ‘@type’ whose values can include image, character, theme, or allusion.

So, clearly there’s the potential for me to adapt systems that already exist.

Finally, what interface would best capture this markup from students? I don’t know yet. Students have a productive anxiety about raw XML, because they (and we) live in an edited world, with coded reality behind the serene surfaces.

So do we make students swallow the red pill, and lift the veil? Or the blue pill, and follow the LitGenius path?

I’ve said much more today about student learning than about machine learning; this is a pedagogy panel, after all. And honestly, it’s premature to make more ambitious plans before I get this aggregation system right.

But indulge me for a moment, and consider why we would want to make those plans.

A well-annotated library of close readings could serve as a training set for machine learning, to enable machines to detect these features automatically. It’s easy to imagine starting with low-hanging figures of speech, as I have with rhetorical figures, before progressing to higher-level figures of thought: the metaphors and allusions that require human readers, at least for now.

Set aside the pragmatics for now. (“Damn it, Jim! I’m a critic, not a computer scientist.”)

Think instead about how automated detection of text features might train human readers. Not overtly doing the interpretive work for them; this isn’t an answer-key or shortcut for lazy readers. No: think about how you might mark up a passage in a digital edition, or annotate a sonnet, and trigger a recommendation algorithm like Lucene’s MoreLikeThis, along the Netflix/Spotify model: if you like this passage, here are some others with similar features.

You know the Netflix-style textual subgenres (like “gritty coming-of-age heist comedies” or “post-apocalyptic musicals with a strong female lead”). So for poetry, imagine:

  • ironic-tone ABCBA-stanzas with chiasmus
  • personification in prose allusions to King David

Why would you want such recommendations? To escape the narrow, arbitrary particularities of human readings. To break the canonical grip of Shakespeare or Herman Melville on our arguments. And to make more persuasive, definitive arguments with wider-ranging evidence, not just with the books we happen to have read.

Varieties of Chiasmus in 68 Plays (21 October 2017)

This is an expanded version of the paper that I delivered at the Pacific Northwest Renaissance Society meeting in Portland, Oregon on 21 October 2017. You can download the slideshow in PDF. Two earlier posts in this series address the problem, and the programming methods I used to address it.

This paper had the more ambitious title, originally, of “Motives for Rhetorical Figuration in the EEBO-TCP Corpus.” Now it’s focused on just one figure, and in fewer texts.

I’ll use 68 plays as a proving ground for my methods (described in two earlier posts: here’s the first, and here’s the second) before I apply them to the larger EEBO-TCP corpus. These plays are a good place to test and refine my methods because they contain language in a range of registers on a variety of subjects.

And I’m focusing on chiasmus because it repeats words in close proximity — but truthfully, it’s also because there’s already an excellent tool to find it. (None of this research would have been possible without Marie Dubremetz’s chiasmus detector program.)

This paper has four parts: some reflections on why it’s worth searching for rhetorical figures; some examples to define antimetabole and chiasmus; some thoughts on the benefits of using a machine to find them; and finally some results from my methods.

Antimetabole can be defined as the literal form of chiasmus, the X-shaped figure of speech that repeats ideas or synonyms in inverse order. The difference between the two is that antimetabole repeats words, while chiasmus repeats ideas. So all antimetaboles are chiasmic, but not all chiasmi are antimetabolic. (See that? I explained the difference using the figure itself.)
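To make the word-level test concrete, here’s a crude sketch of my own — far simpler than Dubremetz’s actual detector, which I understand ranks candidates much more carefully:

```javascript
// A crude word-level antimetabole check: look for words A ... B ...
// B ... A in order. Far simpler than Dubremetz's detector, and prone
// to false positives on repeated function words.
function hasAntimetabole(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  for (let i = 0; i < words.length; i++)
    for (let j = i + 1; j < words.length; j++) {
      if (words[i] === words[j]) continue;
      for (let k = j + 1; k < words.length; k++)
        if (words[k] === words[j])
          for (let m = k + 1; m < words.length; m++)
            if (words[m] === words[i]) return true; // found A B ... B A
    }
  return false;
}
```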

Here is a Shakespearean example of each: the first, from Macbeth, is an antimetabole; the second, from Othello, is a chiasmus. (If you want more examples, take a look at my first post in this series.)

Okay, so why look for figures? There are two answers: because they are beautiful, and because they are cognitively significant.


Figures are beautiful, or at least compelling, because they lodge themselves in the memory. They are displays of the speaker’s virtuosity, or skill with words — and hence with ideas. Figures are the patterns of repetition and variation that make language memorable, compelling, and beautiful.

They are, for instance, the part of a political speech most likely to be excerpted as a sound bite. Consider an example from this very week: on 24 October 2017, Arizona senator Jeff Flake’s impassioned speech denouncing the tone and character of political discourse used anaphora, a figure in which the speaker repeats a word at the beginning of successive clauses: “I rise today with no small measure of regret. Regret because … Regret because … Regret because … Regret because … Regret for ….” It’s a compelling structure, underscoring how many things there are to regret.
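Clause-initial repetition like Flake’s is a pattern simple enough to spot with a few lines of code — a sketch, not a serious detector:

```javascript
// Sketch: detect anaphora as the same opening word in successive
// clauses. A real detector would handle multi-word openings.
function anaphoraWord(passage) {
  const openings = passage
    .split(/[.;!?]+\s*/)
    .filter((c) => c.trim().length > 0)
    .map((c) => c.trim().split(/\s+/)[0].toLowerCase());
  for (let i = 1; i < openings.length; i++)
    if (openings[i] === openings[i - 1]) return openings[i];
  return null;
}
```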


Figures are also cognitively significant, which means they reflect habits of thought in both the speaker and the audience. Senator Flake thinks in lists; you can imagine the bullet points in his drafts of this speech. And when listening to a speech, audiences like lists, too: they give us purchase on ideas; they reinforce what we heard a moment ago but might have forgotten. (“I rise today …” is a conventional opening, but Flake’s sentence ends with the word at the core of his message.)

Raphael Lyne, in his 2011 book Shakespeare, Rhetoric and Cognition, discusses anaphora in Shakespeare. (For example, “Some glory in their birth, some in their skill, | Some in their wealth, some in their body’s force,” from Sonnet 91.) Lyne writes:

“Even something as mechanical as anaphora – [a figure] where the same words are repeated to begin successive clauses – might testify to some sort of exploratory or categorical structure, wherein thoughts are managed and perhaps also instigated.”

You might say that my goal in this project is to turn Lyne’s “might” into something more decisive. Even a “mechanical” figure, complicated by repetition and variation, can testify to a speaker’s cognitive processes: listing, elaborating, enumerating.

Or in the case of antimetabole, the speaker is reversing and inverting received ways of thinking.

How so? Consider the famous line from President Kennedy’s 1961 inaugural address: “Ask not what your country can do for you…” Everyone can recite the end of that sentence, as I wrote in my first post in this series. It’s not just a stylish, compelling, elegant inversion; its substantial purpose is to make you think differently. Simply by inverting his words, Kennedy tells his fellow Americans to ask a public-spirited question, rather than a private-interest question. And it worked: the 1960s witnessed the rise of the Peace Corps, the Apollo program, and the Great Society.

What about the characters of early modern drama? They have all kinds of different motives, but most of them are also trying to make you see things differently. (Okay, one last plug: I discussed motives in my first post.)

Beyond Context

It’s also worth moving past these contexts, past the early 1960s or late 2010s, past the circumstances of characters addressing other characters and audiences. I want to understand, more broadly, how Shakespeare and his contemporaries use antimetaboles. What forms do antimetaboles take, both normally and exceptionally? For instance, do they more commonly repeat nouns, verbs, or other parts of speech? Do they ever invert more than three words? How dissimilar can those words be? And finally, how does antimetabole interact with other figures?

Answering these questions means I’ll have to find as many instances of antimetabole as possible, and that means I’ll have to use machines to do it. I could, of course, read through texts looking for them, or look through compilations of examples. And there’s no shortage of resources on Shakespeare, including Miriam Joseph’s classic Shakespeare’s Use of the Arts of Language (1947; repr. 2008).

In other words, I could begin by making lists, the way scholars begin every literary-critical project. What difference does it make whether I read plays to compile my lists, trust sources like Joseph, or use a machine? There’s little reason to choose; I could use all three methods. But if I can set reliable criteria, there’s no reason not to trust machines to do the list-making for me. Or to put it another way, what do I have to lose?

What I lose

Context, for one. I won’t know (necessarily) what characters are talking about, or what motives inform their words. But there’s no reason I couldn’t simply read the plays, if I need to know this. So context is lost in the first instance, but not precluded in the second.

Authority, for another. I won’t be able to point to Miriam Joseph or Frank Kermode, or Brian Vickers or Raphael Lyne, as authoritative critical sources of the antimetaboles I discuss. Or to manuals by Henry Peacham or George Puttenham or Thomas Wilson, as authoritative early modern sources; the word ‘primary’ doesn’t really suit these compilations.

But does the value of precedents matter more than the authority of a list gathered with a machine? It comes down to your preconceptions: are you more or less willing to trust a machine, or a human, to find linguistic patterns in texts? If it’s a solitary, canonical text (say, Macbeth), then I’m for Kermode. If it’s 68 texts, then I’m ready to distribute the burden equally between Kermode and the machine. If it’s 70,000 texts, then there’s no contest.

The good news is, robots are not targeting our endowed chairs in literature. Not yet, anyway. Human experts will have to assess the outputs of any machine process, particularly when it scans large text corpora for more complex figures. Humans will have to verify the results and pronounce them sound, before we compare them and synthesize them into plausible claims and convincing arguments.

But like word processing or database searching or even just Googling, machines’ capabilities can augment our thinking and extend our abilities.

What I gain

My premise is that rhetorical figures are limited variations on a simple theme. They are linguistic structures of repetition and variation.

At least, the simple figures are: the ones we hear from the witches in Macbeth, or the senator from Arizona; these are the ones Raphael Lyne calls “mechanical.” I can start there, because simple figures will test my premise. If I can reliably set, and adjust, the criteria for antimetabole, then I can trust the process.

(It’s worth mentioning that an even deeper premise is that ‘setting the criteria’ is the right approach. This is rule-based, not evidence-based, processing — a distinction with ramifications for any computational problem, including literary criticism of this kind.)

So: set the criteria, run the process, make arguments from the outcomes. Simple enough, right? The hard problem here is that I’m triangulating criteria (e.g. the formula for a figure) with questions (e.g. how do figures operate?) with arguments (e.g. figures are interesting and significant). If I can reach toward all three corners of the triangle, at least partway, I’ll have a successful argument.
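To make ‘set the criteria’ concrete, here’s a toy Python sketch (mine, not the project’s actual program) of a two-part criterion: a pair of distinct words that recurs later in inverted order. The function name and the rule itself are illustrative only.

```python
import re

def find_antimetabole_pairs(line):
    """Return (A, B) word pairs that later recur in inverted order (B ... A)."""
    words = re.findall(r"[a-z']+", line.lower())
    pairs = set()
    for i, a in enumerate(words):
        for j in range(i + 1, len(words)):
            b = words[j]
            if a == b:
                continue
            rest = words[j + 1:]
            # criterion: after A ... B, the sequence B ... A appears again
            if b in rest and a in rest[rest.index(b) + 1:]:
                pairs.add((a, b))
    return pairs

print(find_antimetabole_pairs("Fair is foul, and foul is fair."))
```

Run on the witches’ line, it returns three pairs, including one anchored on the function word ‘is’: a small preview of the kind of noise that function words introduce.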

It begins with plain texts of all 68 plays. Here’s what one looks like, stripped of its apparatus like speech prefixes or stage directions or act-scene divisions. These are just the words from Act 1, scene 3 of Richard III, when the king implausibly convinces his victim’s widow to view him favourably.

Sharp-eyed readers, or those who simply click on the image, will see that there’s actually an antimetabole in this passage; I’ve highlighted the repeated words for you. It’s possibly problematic because it’s exchanged between two characters, but set this problem aside for now.

I used two main text corpora: the Folger Digital Texts for Shakespeare’s plays; and the Folger’s Digital Anthology of Early Modern English Drama’s subset of 30 featured plays. These were more than ample for my purposes; in fact, I’m hardly able to do their figures justice in this paper.

The direction I want to take this research is outward to the EEBO-TCP corpus, containing every text printed in English before 1700. That’s about a billion words altogether. Somewhere between that billion and the 2.7 million words (or 2,714,018 to be precise) in these 68 files, there might be an in-between stage.

The Folger’s EMED anthology features 30 plays (of 403 in total), listed here, that have been edited with regularized spellings.

That’s one reason I chose them; the other is that Mike Poston at the Folger generously prepared files of just the speeches in these 30 plays for me, which is something I can’t do myself yet. My next step is to download more files from their Corpus Download page and see what their contents afford. Then, for instance, I could compare plays from the 1580s with those in other decades; or plays by John Webster with those by Thomas Middleton; or tragedies with comedies.


Now for part 4 of this paper. Enough with the preliminaries; what were my results?

Not surprisingly, Shakespeare’s antimetaboles are just the head of the proverbial Leviathan. When I ran this program, I got the whole Leviathan — and like the figures who make up that large, ungainly body on Thomas Hobbes’s frontispiece, the results made me see antimetabole in all of its messy, manifold, overlapping variety. They made me see how antimetabole is (to quote the sententious Oscar Wilde) “rarely pure and never simple”; it often overlaps with other figures of speech.

Let’s look at ten slides of results. In these images, curly brackets denote the words that the program identified as units making up antimetabole: that is, the words being repeated. I’ve added some coloured text to highlight those units, and sometimes to identify units in other, overlapping figures.


This instance from Shakespeare’s Richard III overlaps with a figure called gradatio or climax (AB>BC>CD), which effectively ignores the repetitions of ‘several’ before ‘tongues’ and ‘tale’ that the machine is identifying. It’s not wrong, but here the gradatio is more dominant than the antimetabole, which feels incidental.

Here’s another, from John Lyly’s Galatea. The gradatio is dominant; the antimetabole incidental.

How do you tell which figure is more or less dominant? I’m not sure, but in each of these cases at least one of the units (‘several’ and ‘have’, respectively) is a word that modifies another, or takes another as its object. So the hierarchy is perhaps grammatical.

Also, note how neither antimetabole has units with the same part of speech (noun, verb, or what you will), whereas the gradatios do: tongue, tale, and villain are all nouns; as are sea, fish, and wine. That seems significant, but how?

Textbook Cases

Let’s see if some of the more famous or textbook examples of antimetabole have consistent parts of speech.

This one doesn’t; it’s a memorable line from Shakespeare’s Richard II, and its pronoun-verb-noun/noun-verb-pronoun construction makes a satisfying epigrammatical sentence.

It’s also a three-part antimetabole: ‘I’ and ‘me’ are also units of repetition. (Because the program’s parameters are set to find only two-part antimetaboles, it identified this line three times, as shown.)
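A quick way to see why: with the parameters set to two-part figures, a three-part antimetabole gets reported once per pair of units. A sketch of the arithmetic (the unit list is my own lemmatized approximation of the line’s repeated units):

```python
from itertools import combinations

# Lemmatized repeated units of "I wasted time, and now doth time waste me"
units = ["I", "waste", "time"]

# A two-part detector reports one hit per pair of inverted units
two_part_reports = list(combinations(units, 2))
print(two_part_reports)
print(len(two_part_reports))  # C(3, 2) pairs for one line
```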

How about these two textbook cases? The first is from Thomas Kyd’s The Spanish Tragedy. The antimetabole’s inversion of ‘dangers’ and ‘pleasures’ is subsumed into a parallel figure called symploce, which repeats words at the beginnings (‘on’) and ends (‘ensue’) of successive clauses. I say it’s a parallel figure because unlike in the preceding two examples, I don’t feel like one overpowers the other. They are so intertwined that not a single word in the second line is original.

The second example is the most straightforward, uncomplicated antimetabole thus far. From Shakespeare’s Timon of Athens, it nicely rebounds the malice of others back onto themselves. It has a pronoun-verb/verb-pronoun structure.

Here are two more from Shakespeare. The first, from The Two Gentlemen of Verona, is another three-part antimetabole; for simplicity I’ve selected just one. The second, from The Taming of the Shrew, is only a two-part antimetabole because of the placement of ‘my.’ Its immediate and direct inversion of the two descriptors suggests that they are interchangeable, upholding the play’s misogyny.

Between Speakers

“But shall I live in hope? | All men I hope live so.” This example from earlier, in the conversation between Richard III and Lady Anne, provoked a question. Is there any difference between an antimetabole spoken by one, and spoken by two (or more)? Anne’s response to Richard makes these two lines an antimetabole only in conjunction. That’s normal between these speakers, particularly in this scene; and it’s normal for many of Shakespeare’s other speakers, who echo each other’s words to interrogate them.

Despite my hesitation I have to conclude that the only difference between antimetaboles spoken by one or two people is a difference of delivery or execution. If Shakespeare writes them as deliberate inversions, then it hardly matters whether he assigns them entirely or partially to individual characters.

Take these two examples, both from the same play (Love’s Labour’s Lost). The first is a character inverting the words of another; the second is a character echoing and then inverting the words of another. Is there a qualitative difference? In the second, Speaker B repeats Speaker A’s two units before inverting them (and adding a third in between), making the antimetabole; in the first, Speaker B’s inversion combines with Speaker A’s words to make the antimetabole. The delivery makes for a qualitative difference, but not a very significant one.

Finally, here’s a more mundane example, from Twelfth Night: a servant goes to the door to call in his lady’s gentlewoman. It’s not a clever inversion, but a dutifully direct repetition of his lady’s command, inverted only because of grammatical rules. Even if it’s less self-conscious or less deliberate than Richard and Anne’s, it’s an antimetabole along exactly the same lines.


That raises one last problem that will inform the next stage of my research. How do you treat those antimetaboles that just feel more accidental, less deliberate, than others? This one from Titus Andronicus has to fall into that category. Technically it meets the requirement of an AB|BA structure. While it’s subsumed within the epanalepsis (words repeated at the beginning and end of the sentence) of “Die, … die”, we’ve seen this before with symploce in The Spanish Tragedy; we know it doesn’t preclude antimetabole.

But “shame with … with thy shame”? Or “thy shame … shame thy”? Really? We’re not talking about “On dangers past, and pleasures to ensue” here. It’s more like “On pleasures past, and dangers to ensue”: the pleasure is gone, and the danger is that we’re getting distracted by conjunctions (‘with’) and pronouns (‘thy’).

If that sounds like a value judgement, it is. Thomas Kyd’s antimetabole is better than Shakespeare’s, here, because I prefer nouns to pronouns or conjunctions. That doesn’t make Shakespeare’s antimetabole less of an antimetabole; it’s just a bad antimetabole. If I wasn’t trained to distrust arguments about authorial intent, I would even claim that it was an inadvertent one.

One last unresolved question: is antimetabole limited to certain parts of speech? I can certainly imagine deliberate, even beautiful, antimetaboles that use pronouns or even conjunctions. But what about articles, like ‘a’ or ‘the’?

Gathering a long list of figures has helped me do three things.

  1. First, the machine works like an enormous butterfly net, collecting multiple rare and unknown specimens of beautiful language. Lots of them overlap with other figures: sometimes subserving those figures, sometimes merely coinciding with them.
  2. Second, every specimen helps me expand and refine the definition of antimetabole: two or more units? excluding certain parts of speech? and so on.
  3. Third, and finally: the reason to use machines for evidence-gathering is because there’s a value to criticism that’s based on evidence beyond the limits of human readers. I don’t just mean the limits of time, or attention, or inclination; I mean the limits of arbitrariness, the limits of a critic’s limited experience and memory.

Were I to gather examples of antimetabole from Shakespeare’s plays myself, it would be from at most 36 plays, not all 38; I’ve never been able to finish Coriolanus, and I just can’t do The Merry Wives of Windsor. Even the title. Just, no. And the 30 featured EMED plays? I’ve read 13 of them, and I don’t plan on reading Love’s Cure, or The Martial Maid anytime soon.

So I’ll let the machine’s parameters, problematic as they may be, gather all of the examples that fit the standard definition. Even those with conjunctions, prepositions, or commingled articles and nouns. Even those that seem accidental, or overshadowed by other figures. They let me make arguments that are more varied — even more persuasive. Though you, gentle reader, will be the judge of that.

Get with the Programming http://ullyot.ucalgaryblogs.ca/2017/10/16/programming/ Mon, 16 Oct 2017 19:32:03 +0000
(This continues my previous post on this research project, about my questions and initial steps.)

This week I’m away to the Pacific Northwest Renaissance Conference to deliver a paper on rhetorical figures in early modern drama. (Wait! Don’t stop reading, it gets better.) I feel like a legit digital humanist for the first time in my life, because I’ve written my own computer program to analyze texts – a bash script in Unix that you can try for yourself on Github.

Okay, so my program just prepares my text files for a far more complex program by Marie Dubremetz at Uppsala University (chiasmusDetector), but getting that program to run on my files took some work.

Marie’s program is written for Python 2.7, so I installed pyenv to switch between versions of Python. I also had to download Stanford’s coreNLP, which processes plain-text files to prep them for chiasmusDetector. Among other things, it encodes a lemma for every word: so chiasmusDetector can see that ‘drew’ is the past tense of “draw,” or “days” is the plural of “day,” and so on: and thus it will find repetitions like “{He} grieves {much} — | And me as {much} to see {his} misery,” from Shakespeare’s Two Noble Kinsmen.
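To illustrate why lemmas matter, here’s a toy Python sketch with a hand-rolled lemma table standing in for coreNLP’s real lemmatizer (the table, function, and sample line are mine, not part of either program):

```python
# A tiny stand-in lemma table; coreNLP computes these properly
LEMMAS = {"days": "day", "drew": "draw", "his": "he", "me": "I", "grieves": "grieve"}

def lemma(word):
    """Reduce an inflected form to its dictionary headword (lemma)."""
    return LEMMAS.get(word.lower(), word.lower())

line = "He grieves much And me as much to see his misery"
lemmatized = [lemma(w) for w in line.split()]
print(lemmatized)
# 'He' and 'his' now share the lemma 'he', so the repetition becomes visible
print(lemmatized.count("he"))
```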

After those installations, it was a matter of getting the files ready. First, I focused on dramatic texts – mostly because I needed a proving ground, and because I had well-edited texts of 69 plays from two projects: 38 Shakespeare plays from Folger Digital Texts; and 31 by his contemporaries from Early Modern English Drama. I needed only the words of those plays, not extraneous bits like character lists or speech prefixes (i.e. “To be or not to be,” not “HAMLET: To be or not to be”). For those files, I’m indebted to Mike Poston and Meg Brown at the Folger Shakespeare Library.

Finally I was ready to run chiasmusDetector. I wrote a bash script with help (to put it mildly) from Kourosh Banaeianzadeh, who works for University of Calgary’s digital-humanities lab, LabNext, in the Taylor Family Digital Library. A bash script is just a series of Terminal commands (on my macOS High Sierra) that run in sequence.

If you read my script, you’ll see comments on every line; remember they were written by both a newbie programmer and a verbose English professor, who documents everything down to file movements and directory changes. Not exactly thrill-a-minute reading, but the results are worth it.

In my next post, I’ll describe some of those results, and my conclusions.

Find all the Figures http://ullyot.ucalgaryblogs.ca/2017/09/29/figures/ Fri, 29 Sep 2017 16:13:45 +0000


“Ask not what your country can do for you.” Instead, ask what the next line is from President Kennedy’s 1961 inaugural address. Most will remember the second part of that familiar sentence: “but what you can do for your country.” It’s memorable because it repeats three words and phrases from the first half, just in inverse order: “you,” “can do,” and “your country.”

The term for this kind of linguistic structure is a rhetorical figure, and the term for this kind of rhetorical figure is antimetabole: a symmetrical (ABC|CBA) arrangement of words and phrases.

Recognize one antimetabole and soon you’ll recognize them all. In advertising jingles: “I’m stuck on Band-Aid, and Band-Aid’s stuck on me.” In Newtonian science: “If you press a stone with your finger, the finger is also pressed by the stone.” And in Shakespeare: “Fair is foul, and foul is fair.” “Suit the action to the word, the word to the action.” There’s a pleasing formal symmetry to these phrases.

And their function echoes their form; by inverting words, they make you see things differently. Kennedy tells you to ask a public-spirited question, rather than a private-interest question. The ad company tells you to think about brand loyalty like you think about a bandage’s adhesive. Newton tells you that for every action there is an equal and opposite reaction.

And Shakespeare? His characters have all kinds of different motives, but most of them are also trying to make you see things differently. The witches in Macbeth invert descriptors to disorient Macbeth’s senses (I think). Hamlet advises actors to perform their words and actions within the bounds of decorum (maybe). It’s hard to interpret these lines out of their contexts.

But I want to take antimetaboles out of their contexts, in the first instance at least. I want to identify the figure’s formal features, not its contexts of use or function. My interest isn’t Newtonian science or 1960s public service or brand loyalty. My examples are arbitrary; the three non-Shakespearean ones are from a colleague’s unpublished paper (Randy Harris, “The Antimetabole Construction,” 2015) — because he’s so good at collecting and cataloguing them.

Arguments need examples, which is why I started with some. But a worthy argument reaches past its immediate contexts. It’s not parochial, inwardly focused, but has a sort of intellectual foreign policy. To put it more simply, you can apply its ideas to other contexts and problems where they’ll support new inquiries.

Contexts matter when we look at individual figures. But I want to understand antimetaboles in a more cross-contextual, longitudinal, universal way. How do writers like Shakespeare (and his contemporaries) use them? Do they repeat nouns, or other parts of speech? Do they ever repeat and invert more than three words (ABCD|DCBA, ABCDE|EDCBA, …)? How dissimilar can the words be? Must the words differ from each other (ABC|CBA)? Or can they repeat before they invert (ABA|ABA or ABB|BBA or AAB|BAA)? When does antimetabole ‘break’ into other figures?

To address these questions, I’ll have to gather a lot of examples from a lot of texts. I could do it by reading all of Shakespeare’s plays, but it would be faster and more reliable to use a computer to find them. So that’s what I’ll do.
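Here is one way the generalized criterion might be sketched in Python: a sequence of units appearing in order, then again in reverse order. This is an illustration under my own assumptions, not the detector I’ll actually use.

```python
def has_inverted_echo(words, units):
    """True if `units` occur in order, then again in reverse order (An..A1)."""
    def in_order(seq, targets):
        pos = 0
        for t in targets:
            try:
                pos = seq.index(t, pos) + 1
            except ValueError:
                return False
        return pos  # position just past the last match (truthy)

    end = in_order(words, units)
    return bool(end) and bool(in_order(words[end:], list(reversed(units))))

kennedy = ("ask not what your country can do for you "
           "ask what you can do for your country").split()
print(has_inverted_echo(kennedy, ["country", "can", "you"]))   # a three-part inversion
print(has_inverted_echo("a b c a b c".split(), ["a", "b"]))    # repetition without inversion
```

The same function handles two, three, or four units, so questions like “do they ever invert more than three words?” become a matter of which unit lists you feed it.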


UPDATE (2017-10-16): I’ve written a followup post on the outcomes of this work.

Okay, enough with the broad principles. I want to find every antimetabole in Shakespeare’s plays, whether the figure has two words (AB|BA) or three (ABC|CBA) or more. Here are the three main steps:

  1. get Shakespeare’s texts (from Folger Digital Texts);
  2. parse them (using Stanford CoreNLP); and
  3. point Marie Dubremetz’s Chiasmus Detector program at them.

Step 1

The Folger Digital Texts API’s “Just the Text” function renders just the words spoken in Shakespeare’s plays, which is what I want: no speech prefixes or stage directions, for example. Here’s “Just the Text” of Love’s Labour’s Lost, for instance. And here it is for Troilus and Cressida. Those are two of Shakespeare’s more rhetorically dense plays, full of inversions and wordplay.

Step 2

After I get these .txt files, I need to parse them. I use the Stanford parser to convert them from plain-text files to annotated .xml files with these features:

  • their words are tokenized, or split into separate words (e.g. isn’t = is + n[o]t), which matters for later stages of this process
  • those tokens are split into sentences
  • their parts-of-speech are labelled (e.g. is = a verb)
  • their lemmas are labelled (e.g. is = the third-person singular of “to be”)

I’m running this from the command line (in Terminal on my macOS Sierra 10.12). First I run the Stanford CoreNLP, telling it which annotators I want (“tokenize,ssplit,pos,lemma”):

java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma

To understand which annotators to use, I read the documentation.

Then I load the code, libraries, and model jars. Don’t ask me what exactly that means; I’m just following directions:

java -cp "/Users/ullyot/Sandbox/coreNLP/*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -file inputFile

Then, still following directions, I create a configuration (or Java Properties) file:

java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props ullyotProps.properties

Incidentally, that ullyotProps.properties file has three lines:

annotators = tokenize, ssplit, pos, lemma
outputExtension = .output
file = input.txt

In other words, unless I override these three settings with command-line arguments (like -file), they are what the parser will use.

Step 3

The third step is to run them through Marie Dubremetz’s chiasmus detector. Back in my main directory, I type this command:

python chiasmusDetector/MainS.py coreNLP/tro.txt.output coreNLP/output.xml

That’s where I’ve hit a wall, after I got some error messages because of the version of Python I’m running (3.5 instead of 2.7). I could use her web interface, which gives me some very good results by e-mail, but it’s less configurable.

But with a bit more tinkering and some advice from local and far-flung advisers and experts, I’m on my way to gathering all the evidence I could ever want. That’s when the real work of human interpretation will begin.

What (Might Have) Happened http://ullyot.ucalgaryblogs.ca/2017/09/13/clinton/ Thu, 14 Sep 2017 02:19:43 +0000

“It’s important that we understand what really happened. Because that’s the only way we can stop it from happening again.”

Last year’s election of Donald Trump prompted me to write an open breakup letter to American political coverage. My resolve has eroded steadily over the past 10 months, but it shattered today. This morning I started reading What Happened, Hillary Clinton’s 464-page memoir of the 2016 election, and I finished it in the afternoon. I couldn’t put it down – not because it’s another insider account of public events, but because it’s so much more than a private memoir. (Also, it was my birthday. And I’m on sabbatical. So my inbox could wait.)

What’s it really like to be Hillary Clinton? You’ll find some hints here, but her confession still feels guarded. Despite rumours to the contrary, Clinton isn’t running for future political office. So she (mostly) writes like someone who has nothing to lose from an honest, embittered, unflinching account of how she lost the Presidency to a manifestly unqualified man.

Trump and his campaign earn their share of blame, as do James Comey, Bernie Sanders, Vladimir Putin, the electoral college, sexism, misogyny, fake news, voter suppression, and the media’s obsession with the non-story of her private email server. (The New York Times is a particular target, as they mentioned in their review today.)

But Clinton also focuses unrelentingly on her own mistakes and flaws. She should have commiserated better with suffering people, instead of offering policy prescriptions. She should have simplified her message into bumper-sticker slogans (“Build the Wall!”). Sometimes it feels like false modesty, like in her recurring frustration that national campaigns aren’t the place for earnest, detailed, intelligent, nuanced, substantive policy discussions. Or in her regret that she couldn’t do anything to shake people’s entrenched ideas of her identity. Or that she couldn’t harness the cultural anxiety and economic malaise and be the ‘change’ rather than the ‘establishment’ candidate.

At times her laments feel like they’re about a broken system, not a flawed candidate. And they’re probably right. One of her most compelling chapters, “On Being a Woman in Politics,” is about double standards and ingrained suspicions of ambitious women in public life.

There are also moments here that feel out of place, like a lengthy disquisition on gun rights and public safety. Her chapter on the Russian hacking is so technically detailed (on algorithms, bots, and trolls) that you can almost see her researchers’ notes; at least, it feels out of character for one who repeatedly claims to be more digital naïf than digital native.

Still, Clinton closes the memoir on a hopeful tone. Her prescription for future change involves (not surprisingly) more people committed to public service, more empathy for people unlike yourself, more love and kindness. She’s optimistic about seeing a woman President elected in her lifetime.

For those who anticipated Clinton’s acceptance speech on election night, it can be hard to read excerpts from that speech, and to learn how carefully her team planned every element of her Presidency’s first hundred days. It’s difficult to hear that from the most qualified candidate for that office in a generation, one who just happened to be a woman. The worst part of 2016 isn’t what happened, but what might have happened.

The Locavore’s Dilemma http://ullyot.ucalgaryblogs.ca/2017/06/15/locavore/ Thu, 15 Jun 2017 18:35:29 +0000

Typically on this blog I write about research and teaching subjects. But it’s time now to rotate the proverbial crops and see what else will take root. What better way than to be un-metaphorical about it, and write about growing my own food?

To quote the immortal ABBA, “Mother says I was a dancer before I could walk.” Not me — but like most, I was an eater before I could talk. And I never thought much about where my food came from, until I read books like Michael Pollan’s The Omnivore’s Dilemma (2006; reviewed here). Now that I’m a vegetarian, partly for environmental reasons, I’m trying my hand at backyard gardening to see if I can grow my own kale and tomatoes. Everybody’s doing it, from my favourite food blogger to PBS hosts to urban gardeners from Edmonton to Melbourne.

This spring, I moved to an old house with a sizeable backyard. A previous owner cultivated an inner-city orchard there, with seven apple trees. But now they’re gone, replaced by a grass lawn. The lawn: a standard-issue, all-American, high-maintenance, monocultural failure of the imagination.

Something had to change. I started (where else?) with a book: Tara Nolan’s Raised Bed Revolution, which covers the practical topics for an amateur carpenter and backyard farmer: how to build them, arrange them, and fill them. There’s also a section on hügelkultur. It sounds like an IKEA product line, but means the “centuries-old, sustainable” method of stacking mounds of decomposing wood and other compost beneath layers of soil. This decomposing biomass mimics conditions on a forest floor. The wood retains water while aerating and enriching the soil, slowly releasing its nutrients and microbes.

Hügelkultur mounds are typically long pyramid-shaped structures with plants growing up each side, though some gardeners use the technique in raised beds, what some call half-ass hügelkultur. That’s what I did, but my reasons were more pragmatic than a desire to reproduce ancient agricultural methods.

Here’s what I mean:

The project started with these stacking corners from Lee Valley, which let me build beds of any dimension and height. First, I cut cedar 2x4s to the length and width of the available plot, joined them with the hinged corners to make a basic rectangular box, and squared it up. Then you can stack the boxes as high as you like; I opted for three levels, each about 8 inches deep, to make a raised bed about two feet deep. I lined the bottom with weed-preventing cloth so nothing would grow through from below.

Then I had a problem: I didn’t have nearly enough topsoil and compost and peat to fill a box 10 feet long and ~3 feet wide, to a depth of ~2 feet. But I’d recently felled a dead tree and needed to dispose of the debris. So I improvised a raised-bed version of the hügelkultur technique.

Here’s the empty cedar box, with the first layer of debris covering the weed-proof cloth.

I added more layers of pine, lilac, and anything else too big for my compost bin.

After adding the big logs, I tossed in some more cuttings from an overgrown lilac bush.

That nearly filled the whole box, but there was a lot of air amid all those logs and branches, which would eventually compress.

The next step was to cover the wood with a layer of dense material to support the topmost layer of soil and compost. So I dug up a corner of the lawn and layered the turf (grass side down) atop all the wood.

Then I added alternating layers of loam soil and compost, using a combination of sheep manure and my own bin’s decomposed food scraps and other material.

It took about 30 loads from the wheelbarrow to fill the whole box.

I gave it a few days to settle and absorb some rainwater before staking out the rows for seedlings and installing this metal pyramid for runner beans.

The back row I planted with alternating tomato plants and bee-friendly flowers like sweet William and purple salvia.

Then I planted a middle row of herbs like coriander and basil, and a front row with seeds: parsnips, carrots, kale, chard, spinach, and zucchini.

Now I’m trying to resist the urge to check on the seedlings every day. What to do next, but wait like an anxious parent? A few things, actually: feeding the soil with new compost as it settles around the edges; keeping the soil moist but not waterlogged; and building a second box for the front yard. Because there’s more grass to conquer.

What can Machine Learning do for Literary Critics? http://ullyot.ucalgaryblogs.ca/2017/04/10/what-can-machine-learning-do-for-literary-critics/ Mon, 10 Apr 2017 15:26:27 +0000
First in a series of posts about artificial intelligence sparked by “The Great AI Awakening,” an article from December 2016 by Gideon Lewis-Kraus in the New York Times Magazine. Cross-posted to The Augmented Criticism Lab’s blog.

Can you trust machines to make decisions on your behalf? You’re doing it already, when you trust the results of a search engine or follow directions on your phone or read news on social media that confirms your worldview. It’s so natural that you forget it’s artificial; someone programmed a machine to make it happen. If Arthur C. Clarke is right (“any sufficiently advanced technology is indistinguishable from magic”), we’re living in the age of magical thinking.

We’re delegating ever more decisions to algorithms, from a matchmaker identifying your soulmate to a car killing either you or those pedestrians. Our thinking will only get more magical when our machines learn to make better decisions, in more parts of our lives.

Gideon Lewis-Kraus makes these invisible processes more visible. He tells the story of major improvements to the Google Translate algorithm, which can easily make distracting errors when it’s overly literal. His example is “minister of agriculture” rendered as “priest of farming” — a phrase that native speakers find strange, but that a highly literal translating machine might not. The Google Brain team built what Lewis-Kraus calls a “deep learning software system” to imitate the way that neural networks make reliable decisions. (The third section of his article, “A Deep Explanation of Deep Learning,” is an accessible introduction built around recognizing images of cats.)

So what’s all that got to do with being a literary critic? First, my method is to use machines in just the first stage of criticism: the gathering of examples to make into arguments. If I want to find all the lines in which Shakespeare discusses the weather, I start by defining ‘weather’ in terms a machine can understand, and then point that machine at the Complete Works of Shakespeare and say ‘go find me lines about this.’ Once it’s finished I take over, reading each line to see if it’s (1) actually about the weather — and if not, tweaking my definition and repeating the process — and (2) interesting enough to work into an argument about Shakespeare and — I don’t know, melancholy clouds or something.
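That gathering stage is simple enough to sketch. Here is a minimal, hypothetical Python version: the term list and sample lines are my own illustrations, not the Augmented Criticism Lab’s actual code.

```python
import re

# A hand-built, machine-readable definition of 'weather' (illustrative only).
WEATHER_TERMS = {"rain", "storm", "tempest", "thunder", "cloud", "wind", "snow"}

def find_candidate_lines(text, terms):
    """Return (line_number, line) pairs for lines containing any term.

    The trailing wildcard lets 'wind' also match 'winds' and 'windy'.
    """
    pattern = re.compile(r"\b(" + "|".join(terms) + r")\w*\b", re.IGNORECASE)
    return [(i, line.strip())
            for i, line in enumerate(text.splitlines(), start=1)
            if pattern.search(line)]

sample = ("Blow, winds, and crack your cheeks! rage! blow!\n"
          "You cataracts and hurricanoes, spout")
hits = find_candidate_lines(sample, WEATHER_TERMS)
```

The critic then reads each hit, prunes the false positives, adjusts the term list, and re-runs: the tweak-and-repeat loop described above, with the machine doing only the gathering.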

What I do is called augmented criticism, and you can read all about it here, and also here, if you like. My collaborator Adam J. Bradley has designed algorithms to identify vast numbers of rhetorical figures, which are sometimes called figures of speech. (This Wikipedia article has an impressive list, but the definitive resource is Gideon O. Burton’s Silva Rhetoricæ.) Bradley and I have been able to make comprehensive arguments about figures like gradatio in early modern drama — way more comprehensive, that is, than we could make without the machine gathering examples of this figure for us. (And you can read all about those arguments in our most recent paper, cited at the end of this article.)

Herding Tigers

Lewis-Kraus’s description of Google’s neural network got me thinking about expanding our approach. It began with his distinction between rule-based and data-based machine learning. Evidently there’s been a historical debate between creationism and evolutionary theory, and it’s not the one you’re thinking of.

When you give a machine a problem, whether it’s finding cats in YouTube videos or finding rhetorical figures in early modern drama, you have two options. The first is the creationist, rule-based approach: you give your machine all the rules of cat-finding (pointy ears, four legs, whiskers, … ) and set it loose on the data. The advantage is that you’ll find every cat that conforms to your definition of a cat, and you’ll find them quickly and reliably; few koalas or kangaroos will meet that definition.

The second option is the evolutionary, data-based approach: you start with the data, and see what the machine makes of it. It’ll find plenty of koalas and kangaroos, but here’s the important part: each time it does, you manually correct it and run the process again. Next time, it remembers that some of its components are more reliable at identifying cats, so the ‘votes’ they cast for cat over kangaroo are weighted more heavily. “[I]t functions, in a way, like a sort of giant machine democracy” in which “each individual unit can contribute differently to different desired outcomes.” (Read the article for more details, like how this mimics the brain’s neural network.)
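That reweighting loop can be caricatured in a few lines. This is a toy, perceptron-style sketch of weighted voting, my own illustration and nothing like Google’s actual architecture: each feature ‘unit’ casts a weighted vote, and each wrong answer shifts weight away from the units that misled it.

```python
# Toy weighted-vote learner. Each 'unit' is a feature detector; a mistake
# (the manual correction step) re-weights the units that voted wrongly.

def predict(weights, features):
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0   # 1 = cat, 0 = not-cat

def train(examples, n_features, epochs=10, lr=0.5):
    weights = [0.0] * n_features
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, features)  # 0 when correct
            for i, f in enumerate(features):
                weights[i] += lr * error * f            # reweight the votes
    return weights

# features: [pointy_ears, whiskers, hops_on_two_legs]
examples = [
    ([1, 1, 0], 1),  # cat
    ([1, 0, 1], 0),  # kangaroo: pointy ears, but it hops
    ([0, 1, 0], 1),  # cat with its ears hidden
    ([0, 0, 1], 0),  # kangaroo
]
weights = train(examples, n_features=3)
```

After training, the ‘whiskers’ unit carries positive weight and the ‘hops’ unit negative weight: the democracy has learned which voters to trust for this question.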

There are advantages to the second approach, the first being that it just makes intuitive sense. “Humans don’t learn to understand language by memorizing dictionaries and grammar books,” Lewis-Kraus asks, “so why should we possibly expect our computers to do so?” The disadvantage is that it’s really slow, especially at first when you constantly have to supervise its learning (or correct its errors). But it’s far more flexible, less brittle, than rule-based systems.

The Zoo and the Wild

How so? Consider the problem of identifying bigger cats, out in the wild. You could start by looking at every tiger in the zoo and then going into the wild to look for every four-legged whiskered carnivore who looks and acts like the ones we know, whom we can label as ‘tiger.’ That works well enough — but it assumes that every tiger in the wild will resemble the ones that are back in the zoo. Usually that’s a safe assumption, but how do we know we’re not missing new tiger subspecies?

Rhetoric is a problem of identifying all the cats in the wild, not just admiring the ones in captivity. It’s about natural habitats, not dioramas — because rhetoric is persuasive and dynamic and fluid. Sure, it has a few rules and conventions (a lot, actually) — but its real purpose is to impress people with its beauty and overpower them with its arguments. To understand the breadth of those arguments, and the variety of that beauty, we ought to look at all the four-legged whiskered carnivores, whether or not they resemble the ones in the zoo.

The Augmented Criticism Lab has designed our algorithms on the rule-based model rather than the data-based model. We developed a formula (or rule) for figures like gradatio (… A, A … B, B …), based on every example we could find, and then we asked the computer to identify patterns that exactly fit that formula. And we get great results when the rhetorical figures in the wild look just like the ones that we’ve seen before, in captivity.
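The Lab’s real formulation is in the paper under Further Reading; what follows is only my simplified, hypothetical sketch of the rule-based idea: split the text into clauses, strip function words, and count chained cases where a word near the end of one clause reappears near the start of the next.

```python
import re

# Small stopword list so function words like 'the' don't create false links.
STOPWORDS = {"the", "a", "an", "and", "but", "no", "not", "they", "so", "to", "of", "in"}

def clause_words(text):
    """Split on clause punctuation, lowercase, and drop stopwords."""
    clauses = [c for c in re.split(r"[.;:,]", text) if c.strip()]
    return [[w for w in re.findall(r"[a-z']+", c.lower()) if w not in STOPWORDS]
            for c in clauses]

def has_gradatio(text, window=3, min_links=2):
    """Detect the ... A, A ... B, B ... shape: a word near the end of one
    clause recurs near the start of the next, chained min_links times."""
    cs = clause_words(text)
    chain = best = 0
    for prev, curr in zip(cs, cs[1:]):
        if set(prev[-window:]) & set(curr[:window]):
            chain += 1
            best = max(best, chain)
        else:
            chain = 0
    return best >= min_links

# A famous chain from As You Like It:
climax = ("no sooner met but they looked; no sooner looked but they loved; "
          "no sooner loved but they sighed")
```

Here `has_gradatio(climax)` is true, while a non-chaining sentence like ‘The cat sat; the dog barked; the bird flew.’ is not — and, exactly as the paragraph above says, any gradatio that doesn’t fit this formula will slip past the rule.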

But our next step is to add a more data-based, evolutionary approach to our toolbox. It’ll start slowly, with lots of labelled examples of data, and training sets for our neural network to examine — like showing a child picture-books of tigers, before she graduates to more grown-up zoological textbooks of tiger subspecies.

And where will it go from there? We’ll gather evidence that more rhetorical figures exist than we knew, and that they interact with each other in ways, and across genres and texts, that we never expected. Not only will we find all the lions and lynxes and panthers and all the other catlike creatures; we’ll also track down every last tiger and every subspecies, the Sumatran and the Malayan and the Amur. The outcome will be a better understanding of where figures live and how they interact in the textual wilderness.

Further Reading

Bradley, Adam J. and Michael Ullyot, “Past Texts, Present Tools, and Future Critics: Toward Rhetorical Schematics.” In Shakespeare’s Language in Digital Media: Old Words, New Tools. Ed. Jennifer Roberts-Smith, Janelle Jenstad, and Mark Kaethler. London: Routledge, 2017.

