Find all the Figures
“Ask not what your country can do for you.” Instead, ask what the next line is from President Kennedy’s 1961 inaugural address. Most will remember the second part of that familiar sentence: “but what you can do for your country.” It’s memorable because it repeats three words and phrases from the first half, just in inverse order: “you,” “can do,” and “your country.”
The term for this kind of linguistic structure is a rhetorical figure, and the term for this kind of rhetorical figure is antimetabole: a symmetrical (ABC|CBA) arrangement of words and phrases.
Recognize one antimetabole and soon you’ll recognize them all. In advertising jingles: “I’m stuck on Band-Aid, and Band-Aid’s stuck on me.” In Newtonian science: “If you press a stone with your finger, the finger is also pressed by the stone.” And in Shakespeare: “Fair is foul, and foul is fair.” “Suit the action to the word, the word to the action.” There’s a pleasing formal symmetry to these phrases.
And their function echoes their form: by inverting words, they make you see things differently. Kennedy tells you to ask a public-spirited question rather than a private-interest question. The ad company tells you to think about brand loyalty the way you think about a bandage’s adhesive. Newton tells you that for every action there is an equal and opposite reaction.
And Shakespeare? His characters have all kinds of different motives, but most of them are also trying to make you see things differently. The witches in Macbeth invert descriptors to disorient Macbeth’s senses (I think). Hamlet advises actors to perform their words and actions within the bounds of decorum (maybe). It’s hard to interpret these lines out of their contexts.
But I want to take antimetaboles out of their contexts, in the first instance at least. I want to identify the figure’s formal features, not its contexts of use or function. My interest isn’t Newtonian science or 1960s public service or brand loyalty. My examples are arbitrary; the three non-Shakespearean ones are from a colleague’s unpublished paper (Randy Harris, “The Antimetabole Construction,” 2015) — because he’s so good at collecting and cataloguing them.
Arguments need examples, which is why I started with some. But a worthy argument reaches past its immediate contexts. It’s not parochial, inwardly focused, but has a sort of intellectual foreign policy. To put it more simply, you can apply its ideas to other contexts and problems where they’ll support new inquiries.
Contexts matter when we look at individual figures. But I want to understand antimetaboles in a more cross-contextual, longitudinal, universal way. How do writers like Shakespeare (and his contemporaries) use them? Do they repeat nouns, or other parts of speech? Do they ever repeat and invert more than three words (ABCD|DCBA, ABCDE|EDCBA, …)? How dissimilar can the words be? Must the words differ from each other (ABC|CBA)? Or can they repeat before they invert (ABA|ABA or ABB|BBA or AAB|BAA)? When does antimetabole ‘break’ into other figures?
To address these questions, I’ll have to gather a lot of examples from a lot of texts. I could do it by reading all of Shakespeare’s plays, but it would be faster and more reliable to use a computer to find them. So that’s what I’ll do.
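What does “finding them” mean, computationally? At minimum, spotting places where a pair of words later recurs in reverse order. Here’s a toy sketch in Python — nothing like Dubremetz’s actual detector, which works on lemmas and filters out trivial matches — but enough to make the pattern concrete:

```python
def find_inversions(words):
    """Return (i, j) index pairs where the two-word sequence starting
    at i (A B) later reappears at j in reverse order (B A)."""
    words = [w.lower() for w in words]
    hits = []
    for i in range(len(words) - 1):
        a, b = words[i], words[i + 1]
        if a == b:
            continue  # skip immediate repetitions like "the the"
        for j in range(i + 2, len(words) - 1):
            if words[j] == b and words[j + 1] == a:
                hits.append((i, j))
    return hits

print(find_inversions("fair is foul and foul is fair".split()))
# → [(0, 5), (1, 4)]
```

On the witches’ line it finds both the inner inversion (“is foul” / “foul is”) and the outer one (“fair is” / “is fair”). A real detector has to do much more: compare lemmas rather than surface forms, span sentence boundaries, and rank candidates so that accidental inversions of function words don’t swamp the results.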
Okay, enough with the broad principles. I want to find every antimetabole in Shakespeare’s plays, whether the figure has two words (AB|BA) or three (ABC|CBA) or more. Here are the three main steps:
- get Shakespeare’s texts (from Folger Digital Texts);
- parse them (using Stanford CoreNLP); and
- point Marie Dubremetz’s Chiasmus Detector program at them.
The Folger Digital Texts API’s “Just the Text” function renders only the words spoken in Shakespeare’s plays, stripping out what I don’t want: speech prefixes, stage directions, and the like. Here’s “Just the Text” of Love’s Labour’s Lost, for instance. And here it is for Troilus and Cressida. Those are two of Shakespeare’s more rhetorically dense plays, full of inversions and wordplay.
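For the record, here’s roughly what that cleanup would involve if you had to do it yourself on a raw transcript. This is a crude sketch with invented conventions (bracketed stage directions, all-caps speech prefixes), not the Folger’s actual markup:

```python
import re

def strip_apparatus(text):
    """Crudely remove [bracketed stage directions] and drop lines
    that look like all-caps speech prefixes (e.g. 'MACBETH.')."""
    text = re.sub(r"\[[^\]]*\]", "", text)  # bracketed stage directions
    kept = []
    for line in text.splitlines():
        if re.fullmatch(r"[A-Z .,']+", line.strip()):
            continue  # looks like a speech prefix or heading
        kept.append(line)
    return "\n".join(kept)

raw = "MACBETH.\nFair is foul, and foul is fair. [Exeunt]"
print(strip_apparatus(raw).strip())
# → Fair is foul, and foul is fair.
```

Real dramatic texts are messier than this (think of mid-line stage directions, or characters named in lowercase), which is exactly why a curated source like “Just the Text” is worth using.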
After I get these .txt files, I need to parse them. I use the Stanford parser to convert them from plain-text files to annotated .xml files with these features:
- their words are tokenized, or split into separate words (e.g. isn’t = is + n[o]t), which matters for later stages of this process
- those tokens are split into sentences
- their parts-of-speech are labelled (e.g. is = a verb)
- their lemmas are labelled (e.g. is = the third-person singular of “to be”)
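The parser writes all of this into nested XML, with each token’s word, lemma, and part-of-speech tag inside a `<token>` element. (The element names in this miniature example are from memory; check your own .output files for the exact structure.) Pulling the annotations back out in Python is straightforward:

```python
import xml.etree.ElementTree as ET

# A miniature stand-in for CoreNLP's XML output:
xml = """<root><document><sentences>
  <sentence id="1"><tokens>
    <token id="1"><word>Fair</word><lemma>fair</lemma><POS>JJ</POS></token>
    <token id="2"><word>is</word><lemma>be</lemma><POS>VBZ</POS></token>
    <token id="3"><word>foul</word><lemma>foul</lemma><POS>JJ</POS></token>
  </tokens></sentence>
</sentences></document></root>"""

root = ET.fromstring(xml)
tokens = [(t.findtext("word"), t.findtext("lemma"), t.findtext("POS"))
          for t in root.iter("token")]
print(tokens)
# → [('Fair', 'fair', 'JJ'), ('is', 'be', 'VBZ'), ('foul', 'foul', 'JJ')]
```

Note the lemma column: “is” becomes “be,” which is what lets a detector match inversions across inflected forms.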
I’m running this from the command line (in Terminal, on macOS Sierra 10.12). First I run Stanford CoreNLP, telling it which annotators I want (“tokenize,ssplit,pos,lemma”):
java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma
To understand which annotators to use, I read the documentation.
Then I load the code, libraries, and model jars. Don’t ask me what exactly that means; I’m just following directions:
java -cp "/Users/ullyot/Sandbox/coreNLP/*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -file inputFile
Then, still following directions, I create a configuration (or Java Properties) file:
java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props ullyotProps.properties
Incidentally, that ullyotProps.properties file has three lines:
annotators = tokenize, ssplit, pos, lemma
outputExtension = .output
file = input.txt
In other words, unless I override these three arguments on the command line (with flags like -file), these are the values the parser will use.
The third step is to run them through Marie Dubremetz’s chiasmus detector. Back in my main directory, I type this command:
python chiasmusDetector/MainS.py coreNLP/tro.txt.output coreNLP/output.xml
That’s where I’ve hit a wall: I got error messages because of the version of Python I’m running (3.5, where the detector expects 2.7). I could use her web interface, which e-mails me some very good results, but it’s less configurable.
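For the curious, the incompatibility isn’t subtle: Python 2’s print statement, for one, became a function in Python 3, so unconverted 2.7 source can fail to compile at all. A quick demonstration:

```python
# Valid Python 2, but a SyntaxError under Python 3, where
# print is a function and needs parentheses:
legacy = 'print "chiasmus found"'
try:
    compile(legacy, "<detector>", "exec")
    compiles = True
except SyntaxError:
    compiles = False
print(compiles)  # → False under Python 3
```

Tools like 2to3 automate most of these conversions, which is the sort of tinkering I have in mind.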
But with a bit more tinkering and some advice from local and far-flung advisers and experts, I’m on my way to gathering all the evidence I could ever want. That’s when the real work of human interpretation will begin.