There’s a new piece by Jeffrey Williams in the Chronicle on surface reading and “the new modesty” in literary studies. It came to my attention via Ted Underwood, who had a somewhat ambivalent response to it on Twitter.
I was going to reply there, but 140 characters weren’t quite enough, and I’m asked about this pretty often, so I thought I’d set down my short thoughts in a more permanent way.
I like and respect Marcus and Best’s work, which I find subtle and illuminating, though most of it falls somewhat outside my own field. And I guess I understand why some people are fed up with ideologically committed, theoretically oriented, hermeneutically inflected literary scholarship. When that stuff is bad, it’s pretty bad. Then again, just about anything can be (and often is) bad. I don’t see any special monopoly on badness there.
I also understand how it’s possible to look at (some) digital humanities research and think that it shares in some imagined turn away from depth and detail in favor of “direct” observation of “obvious” features. People who have no experience with the sciences tend to imagine that such things exist and that they’re different from what literary people work with. Neither is the case, though that’s an argument for another time. (I have a little on it in passing in my forthcoming Comparative Literature review, FWIW.) In any case, it’s true that you sometimes hear people talking about a desire for “empirical” or “descriptive” research in DH, though they’re in the minority and I’m not one of them.
It’s hopeless, of course, to try to tell other people how to frame their work, or ultimately to control how people receive your own. But I’ll say that my own reasons for pursuing computational literary research have nothing to do with (naïve, illusory) empiricism, or a desire for critical modesty, or a disenchantment with symptomatic, culturally committed criticism. Quite the opposite. Computers help me marshal evidence for large-scale cultural claims. That’s why I’m interested in them: they help me do better the kind of big, not especially modest, fundamentally symptomatic and suspicious critical work that brought me to the field in the first place.
But then, I would say that. I was Fred Jameson’s student and I was his student for a reason.
I think it’s going to take a while for things to shake out. I’m all in favor of empiricism and description (a term I prefer to ‘surface reading’). But it’s a complicated world, and you can do interesting things with ‘empirical’ findings. I had a lot of fun last summer using the online charts from Matt Jockers’ Macroanalysis as evidence bearing on a large-scale argument Leslie Fiedler made years ago in Love and Death in the American Novel. Fiedler’s argument, of course, was based on his reading of a handful of canonical texts. Jockers gave me the means to run Fiedler’s argument up against 3,300 texts. In effect, I undertook a ‘close’ reading of Jockers’ ‘distant’ reading. What I was doing lands pretty much where you end up here: investigating how the topic model can be used to argue about “large-scale cultural claims.”
Very well said. I am drawn in my own work, as a folklorist, to computational/algorithmic approaches because they help me pursue certain facets of formalism/structuralism/post-structuralism that have always fascinated me but that I didn’t know how to get at. Having recently completed a book that is historical in nature, I found myself returning to my apparently abiding interest in human cognition. I’m not convinced that computational models parallel cognitive models, but I do find that the results of such efforts, some of which are realized in computation and some of which are realized at the boundaries or failings of computation, are really rewarding.
At the very least, what I enjoy about my work in computation is failing.