Density of Locations in U.S. Fiction around the Civil War

I’ve been working recently on different visualizations of the geolocation information I’ve discussed on a couple of previous occasions. (See posts on the corpus, on method and accuracy, and on an earlier style of mapping.)

Here’s the latest: Below are Google Fusion Tables intensity maps of the distribution of named places in my corpus (1098 volumes of U.S. fiction dating from 1851-75; good but not final data, so don’t get carried away just yet), aggregated by nation and by U.S. state.

Countries Linear

Named locations aggregated by nation, linear density scale.
(WordPress.com doesn’t allow embedded iframes; click on this (or any) map to see the live version, which includes raw counts per territory on mouseover.)

This first figure mostly shows that the large majority of named places in books written around the Civil War are located in the United States. But (a.) there’s a fair amount of international distribution and (b.) there’s more variation in that international distribution than the shading here reveals. (FWIW, the distribution looks power-law-like, but I haven’t checked yet.)

For better comparative resolution, we can use log-scaled density shading. Note that this of course flattens the difference between high and low densities, which is why I’ve included both figures.

Countries Log

Named locations aggregated by nation, log density scale.
Click for live version.

The log scale brings out a bit better the comparatively high concentrations of named places in western Europe, the Middle East, Russia (who knew?), China, India, Canada, Mexico, Brazil, and Australia. (If I’m remembering right, Greenland is all Melville. But don’t quote me on that.)

What about the distribution within the United States? Ask and ye shall receive:

States Linear

Named locations aggregated by state, linear density scale.
Click for live version.

New York, Virginia, and Massachusetts stand out; PA, CA, TX, and LA also have pretty decent numbers. A lot of flattening in this visualization, though, so …

The log version:

States Log

Named locations aggregated by state, log density scale.
Click for live version.

Interesting how much more clearly this brings out the density of named places in the south and midwest.

More to come, especially time-resolved series (which should be really useful) and city/POI-level maps.

Two notes in passing:

1. Fusion Tables (the tool) and fusion tables (the output) are really cool. They’re dead simple; the charts here took about 15 minutes to create once I’d dumped the relevant data from MySQL. Great for testing and prototyping. But there are limits on what they can do, and they’re not terribly flexible outside the things they’re built to do. I had to generate the log counts in Excel, for instance, because you can’t perform computations on aggregated data (there’s a quick sketch of that step after these notes). (The aggregation itself was totally painless, though, as was the export-import.)

2. I’ll probably need a different package for the city-level mapping, because fusion tables intensity maps will only show 250 data points at a time, and even in my reduced and cleaned data, I have about 1700 unique locations. I’m also thinking about exactly how to represent both number of instances (marker size, I think) and time-evolution (maybe something like the Outbreak-style Walmart map from FlowingData, though I’d like, for my sanity, to avoid Flash).
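
For what it’s worth, the Excel step in note 1 is trivial to script. Here’s a minimal sketch in Python under assumed file and column names (“territory_counts.csv” with “territory” and “count” columns); the actual dump from MySQL may be shaped differently.

```python
# Minimal sketch of the log-count step from note 1, done in Python rather
# than Excel. Assumes a CSV dumped from MySQL with (hypothetical) columns
# "territory" and "count"; writes a "log_count" column for upload to
# Fusion Tables. Zero counts are left blank rather than log-transformed.
import csv
import math

with open('territory_counts.csv', newline='') as infile, \
     open('territory_counts_log.csv', 'w', newline='') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames + ['log_count'])
    writer.writeheader()
    for row in reader:
        n = int(row['count'])
        row['log_count'] = round(math.log10(n), 3) if n > 0 else ''
        writer.writerow(row)
```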

[Update: It would obviously also be interesting to compare these densities–and their evolution over time–to census data from the period. This is In The Works.]

Toponym Resolution Accuracy

I just finished a study on the accuracy of automated location identification in nineteenth-century literary texts using the Stanford NLP package (for named entity extraction) and Google’s geocoding API (for associating location names with lat/lon and other GIS data). The full results will go in the article I’m currently writing, but here’s a quick preview of this piece.
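
For the curious, the pipeline is simple in outline. The sketch below is an illustration rather than my actual code: it assumes the NLTK wrapper for Stanford NER (with the jar and classifier paths filled in locally) and Google’s geocoding JSON endpoint, and it skips the batching, caching, and rate limiting a real run needs.

```python
# Rough sketch of the extraction + geocoding pipeline: Stanford NER (via the
# NLTK wrapper) pulls candidate place names; Google's geocoding API resolves
# them to lat/lon. Paths, the API key, the naive whitespace tokenization, and
# the lack of caching are all simplifications; treat this as an outline.
import requests
from nltk.tag.stanford import StanfordNERTagger

tagger = StanfordNERTagger(
    'classifiers/english.all.3class.distsim.crf.ser.gz',  # adjust paths locally
    'stanford-ner.jar')

def extract_locations(text):
    """Return the set of contiguous token runs tagged LOCATION."""
    tagged = tagger.tag(text.split())   # a real run should use a proper tokenizer
    places, current = set(), []
    for token, label in tagged:
        if label == 'LOCATION':
            current.append(token)
        elif current:
            places.add(' '.join(current))
            current = []
    if current:
        places.add(' '.join(current))
    return places

def geocode(place, api_key):
    """Resolve a place name to (lat, lon) via the Google geocoding API."""
    resp = requests.get('https://maps.googleapis.com/maps/api/geocode/json',
                        params={'address': place, 'key': api_key})
    results = resp.json().get('results', [])
    if not results:
        return None
    loc = results[0]['geometry']['location']
    return loc['lat'], loc['lng']

sample = "He left Boston for the gold fields of California"
for place in extract_locations(sample):
    print(place, geocode(place, 'YOUR_API_KEY'))
```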

Out of the box, the combination of Stanford NER + Google has precision of about 0.40 and recall of 0.73 on my data (U.S. novels published between 1851 and 1875). Precision is the fraction of identified places that are correct; recall is the fraction of actual places in the source text that are identified correctly. You could get great recall—and terrible precision—by identifying everything in the source text as a location; likewise you’d have terrific precision—but awful recall—by limiting the locations you identify to those that are easy and unambiguous, e.g., “Boston.” You can combine (well, take the harmonic mean of) precision and recall to get an overall sense of accuracy via an F measure; in this case F1 (which weights P and R equally) is 0.52.
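
For concreteness, here’s the harmonic-mean calculation behind the F1 figures (the second pair of numbers is from the hand-cleaned run discussed below):

```python
# F1 is the harmonic mean of precision (P) and recall (R).
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(0.40, 0.73), 2))  # out of the box: 0.52
print(round(f1(0.59, 0.84), 2))  # after hand cleanup (see below): 0.69
```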

What those numbers mean is that the method succeeds in finding most of the named places, but it also finds a lot of extraneous stuff that it thinks are places and really aren’t. Fortunately, many of its errors aren’t of the kind you might expect. The expected kind is ambiguity: which “Springfield” a text means, for instance, is hard to resolve without more information. There are some of these ambiguity problems, of course, but many more errors come from text strings that ought not to have been identified as locations at all. Some of these are at least plausibly ambiguous (“Charlotte” or “Providence,” for instance, both of which show up pretty often in nineteenth-century texts, though almost always as a personal name and as divine care, respectively). But many of the false locations are (even more) straightforward: “New Jerusalem,” “Conrad,” “Caroline,” etc. (I saw something similar in my previous work with GeoDict.)

Because these sorts of errors are pretty easily identified out of context, it’s not terribly hard to clean up (quickly!) the results by hand, striking recognized locations that likely aren’t used as real places. At the same time, there are a few commonly-used pseudo-places that the NER package finds but Google doesn’t identify (“the South,” “Far East,” and so on). These are trivial to correct.
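
In practice the cleanup pass amounts to a stoplist plus a short table of hand-assigned coordinates. The sketch below is illustrative only: the stoplist entries and the rough centroids are examples rather than the actual lists behind the numbers reported here, and geocode() is the hypothetical helper from the earlier sketch.

```python
# Illustrative sketch of the hand-cleanup pass: strike candidate places that,
# in this corpus, are almost always personal names or other non-places, and
# assign rough coordinates by hand to pseudo-places that the NER tagger finds
# but Google won't resolve. Both tables are examples, not the real lists.
STOPLIST = {'Charlotte', 'Providence', 'Conrad', 'Caroline', 'New Jerusalem'}

MANUAL_COORDS = {               # rough, hand-assigned centroids
    'the South': (33.0, -84.0),
    'Far East':  (35.0, 105.0),
}

def resolve(place, api_key):
    """Return (lat, lon) for a candidate place, or None if it's struck."""
    if place in STOPLIST:
        return None
    if place in MANUAL_COORDS:
        return MANUAL_COORDS[place]
    return geocode(place, api_key)   # helper from the sketch above
```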

Applying such hand cleanup raises precision to 0.59 and recall to 0.84 (the latter mostly due to “South,” “North,” etc.—we’re talking about the lit of the Civil War, after all). The revised F1 score is 0.69. That’s not bad, really (though one would always like these numbers to be higher). Compare, for instance, Jochen Leidner’s evaluation of toponym resolution methods, which found lower numbers using more sophisticated techniques on locations mentioned in newspaper articles. Note in particular that even humans often don’t agree on what constitutes a named location (“Boston lawyer”: adjective or place?) nor on the identity of the referent (Leidner cites inter-annotator agreement of roughly 80-90% depending on the corpus).

So long story short: the combination of Stanford NER and Google geolocation performs (surprisingly?) well by contemporary standards. But keep in mind that even in the best case, around 40% of the identified results will be spurious.

Bowker Publishing Stats for 2010

I overlooked last month’s announcement from Bowker concerning the number of books published in 2009 and 2010. Condensed version: fiction is flat at a little under 50,000 new titles; literature dropped off a lot (~30%, from 11k to 8k), though if memory serves, “literature” is Bowker’s catch-all for anthologies and books about literature (all novels fall under fiction, even when they’re categorized as “literary fiction”). Poetry and drama were off, too.

But—and this may explain much of the drop/flatness—“non-traditional” publication was way, way up. Like, into the millions up. Bowker reports about 316k new traditional titles across all categories for 2010, against almost 2.8 million non-traditional (mostly POD reprints of public domain works). Until c. 2006, the ratios were reversed at about 10:1 traditional:non-traditional. My guess would be that there’s also, buried in that landslide of reprints, a small but very non-trivial number of books that might in the past have been published traditionally, but now are sold direct via Amazon and author sites without the intervention of a regular publisher (note the presence of significant numbers from Lulu, AuthorHouse, XLibris, etc.).

Take-away point: There’s a lot of new fiction out there. I’ll assume most of it is awful, but then most of it has always been awful. It’s only that the sea of words is a lot bigger now.

Literary Production around the Civil War

One more histogram, possibly of general interest. Below is a plot showing the number of literary titles by American authors published in the U.S. each year between 1850 and 1875 (per Lyle Wright’s 1957 bibliography as represented in Indiana’s holdings; black bars), along with the number of those titles held in fully edited form in Indiana’s Wright American Fiction archive and in the MONK project.

Note that this isn’t a stacked bar plot; you’re seeing three distinct histograms superimposed on one another. So if you’re looking just at the black bars, you’re seeing a comprehensive survey of American literary production around the Civil War.

Wright Dates Combo

Publication of literary texts drops off in the run-up to the Civil War and in its early years, then bounces back pretty quickly, even before the war is over. There are about 100 new books each year on average through the period.

Two notes for my own purposes. (1.) IU’s coverage of fully edited texts is around 40% of the total period output. That’s pretty good. Just as importantly, it hits that level roughly evenly for each year. No need to worry about serious variations from year to year or about individual years with very low representation (though be careful with, e.g., 1860–61). (2.) I like what MONK did with its 300-text subset, clustering texts as far on either side of the war as possible. Even if you were only working with MONK, you’d still have a decent chance of picking out ante-/post-bellum features.

More Wright American Fiction

With the kind assistance of several folks at Indiana, I’ve now gotten my hands on IU’s full holdings of the digitized Wright American Fiction collection. This is the literary corpus spanning 1850-1875 from which the MONK texts that I used for my initial mapping project were drawn. But MONK chose to limit the size of their Wright-based corpus to around 300 volumes for reasons of balance across their several datasets.

IU has an additional c. 900 Wright texts that have been fully edited and XML encoded (plus 1300 more that have been OCR’ed and XML encoded but not hand edited). This means my depth and temporal coverage in the period around the Civil War just got way better.

More info and results to come as I work my way through this stuff. In the meantime, here’s a plot of the temporal distribution by original publication date of the texts in the two corpora:

Distribution of 954 Wright titles with known publication dates in Indiana's holdings

Distribution of 297 Wright titles with known publication dates in MONK

Post45

I have a new piece, “Contemporary Fiction by the Numbers,” in the inaugural batch of essays at Post45 Contemporaries. My article is a primer on quantitative methods for literary studies, along with a brief for their significance. Not much that I haven’t said before, but it pulls together a few DH-and-lit ideas and a set of examples in one place.

More important, though, is the existence of Post45. Post45 is a bunch of things: an Americanist working group, a book series (with Stanford UP), a conference, an online journal (which will soon begin publishing regular peer-reviewed articles), and—through its Contemporaries section, edited by Andy Hoberek—a cross between the Partisan Review, NYRB, and an especially smart blog devoted to “actively intervening in current tastes.”

I’m really happy to have an essay in the launch edition of the site, but I’m even happier that the whole project exists.

Maps of American Fiction

A quick post to show some recent research on named places in nineteenth-century American fiction. I’m interested in the range and distribution of places mentioned in these books as potential indicators of cultural investments in, for example, internationalism and regionalism. I’m also curious about the extent to which large-scale changes (both cultural and formal) are observable in the overall literary production of this (or any) period. The mapping work I’ve done so far doesn’t come close to answering those questions, but it’s part of the larger inquiry.

The Maps

The maps below were generated using a modest corpus of American novels (about 300 in total) drawn from the Wright American Fiction Project at Indiana by way of the MONK project. They show the named locations used in those books; points correspond to everything from small towns through regions, nations and continents. Methodological details and (significant) caveats follow.

1851
1851. 37 volumes (~2.5M words), with data cleanup.

1852
1852. 44 volumes (~3.0M words), minimal cleanup.

1874
1874. 38 volumes (~3.1M words), minimal cleanup.

The Method

Texts were taken from MONK in XML (TEI-A) format with hand-curated metadata. Location names were identified and extracted using Pete Warden’s simple gazetteering script GeoDict, backed by MaxMind’s free world cities database. [Note that there’s currently a bug in the database population script for GeoDict. Pete tells me it’ll be fixed in the next release of his general-purpose Data Science Toolkit, into which GeoDict has now been folded. But for now, you probably don’t want to use GeoDict as-is for your own work.] I tweaked GeoDict to identify places more liberally than usual, which results (predictably) in fewer missed places but more false positives. The locations for 1851 were reviewed pretty carefully by hand; I haven’t done the same yet for the other years. Maps were generated in Flash using Modest Maps, with code cribbed shamelessly from the awesome FlowingData Walmart project. This means that it should be relatively easy to turn the static maps above into a time-animated series, but I haven’t done that yet.
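
To give a sense of what a gazetteer pass of this kind is doing (including the “in”/“at” restriction described under Caveats below), here’s a stripped-down illustration. It is emphatically not GeoDict itself: the tiny in-memory gazetteer stands in for the MaxMind world cities database, and the real tool handles multi-word names, “city, region” pairing, and much else.

```python
# Stripped-down sketch of a GeoDict-style gazetteer pass. A tiny in-memory
# gazetteer stands in for the MaxMind world cities database; countries (and
# continents) match anywhere, while cities and regions need a leading "in"
# or "at" to count as places. An illustration of the logic, not GeoDict.
import re

GAZETTEER = {
    # name: (kind, lat, lon) -- a handful of illustrative entries only
    'England':   ('country', 52.5,  -1.5),
    'Brazil':    ('country', -10.0, -55.0),
    'Boston':    ('city',    42.36, -71.06),
    'Cambridge': ('city',    42.37, -71.11),
    'Virginia':  ('region',  37.5,  -78.5),
}

def find_places(text):
    """Yield (name, lat, lon) for gazetteer matches in a passage."""
    for match in re.finditer(r'\b[A-Z][a-z]+\b', text):
        name = match.group(0)
        if name not in GAZETTEER:
            continue
        kind, lat, lon = GAZETTEER[name]
        words_before = text[:match.start()].split()
        prev = words_before[-1].lower() if words_before else ''
        # cities and regions require a leading "in" or "at"
        if kind != 'country' and prev not in ('in', 'at'):
            continue
        yield name, lat, lon

passage = "She was born in Boston, but Cambridge claimed her; England never did."
print(list(find_places(passage)))
# -> [('Boston', 42.36, -71.06), ('England', 52.5, -1.5)]
```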

Discussion

As I pointed out in my talk on canons, the international scope and regional clustering of places in 1851 strike me as interesting. See the talk for (slightly) more discussion. Moving forward to 1874—and bearing in mind that we’re looking at dirty data best compared with the similarly dirty 1852—the density of named places in the American west increases after the Civil War, and it looks as though a distinct cluster of places in the south central U.S. is beginning to emerge.

The changes from 1852 to 1874 are (1) intriguing, (2) mostly in line with expectations, and (3) more limited in scope than one might have imagined, given that the two years sit a decade on either side of the periodizing event of American history. I think an important question raised by a lot of work in corpus analysis (the present research included) concerns exactly what constitutes a “major” shift in form or content.

I’m going to avoid saying anything more here because I don’t want to build too much argument on top of a dataset that I know is still full of errors, but I wanted to put the maps up for anyone to puzzle through. If you have thoughts about what’s going on here, I’d love to hear them.

Caveats

A couple of notes and caveats on errors:

  • Errors in the data are of several kinds. There are missed locations, i.e., named places that occur in the underlying text but are not flagged as such. Some places that existed in the nineteenth century don’t exist now. Some colloquial names aren’t in the database. And of course a book can be set in, say, New York City and yet fail to use the city’s name often or at all, possibly preferring street addresses or localisms like “the Village.” Also, GeoDict as configured identifies all country and continent names with no restrictions, but requires cities and regions (e.g., U.S. states) either to be paired with a larger geographic region (“Brooklyn, New York,” not “Brooklyn”) or to be preceded by “in” or “at” as indicators of place. You pretty much have to do this to keep the false positive rate manageable.
  • But there are still false positives. There’s a city somewhere in the world named for just about any common English name, adjective, military rank, etc. “George,” for instance, is a city in South Africa. “George, South Africa,” if it ever occurred in a text, would be identified correctly. But “In George she had found a true friend” produces a false positive. When I clean the data, I eliminate almost all proper names of this kind and investigate anything else that looks suspicious. Note that the cluster of places in southern Africa visible in the (uncleaned) 1852 and 1874 maps is almost certainly attributable to this kind of error. Travis Brown tells me he’s seen the same thing in his own geocoding experiments.
  • Then there are ambiguous locations, usually clear in context but not obvious to GeoDict. “Cambridge” is the most frequent example. Some study suggests that most American novels in the corpus mean the city in Massachusetts, but that’s surely not true of every instance. Most other ambiguities are much more easily resolved, but they still require human attention.

Job News II

I’m very happy to say that I’ll join the English faculty at Notre Dame in the fall. The position (in American fiction after 1900) is great, the people are terrific, the university is lovely. I couldn’t be happier and I’m tremendously excited to get started in my new home.

In the meantime, I’m particularly grateful for my colleagues in American Culture Studies at Wash U, whom I will be leaving sooner than planned. My time in St. Louis has been wonderful: stimulating, friendly, generous with attention and resources — everything a scholar and a person could want. I’m sorry to leave, but happy I’ll only be moving a few hours up the road. (OK, six and a half, but who’s counting?)

As I said the last time around, nothing much should change here on the blog. I’ll post new contact info once I have it, but that won’t happen until August. In the meantime, I’ll be in St. Louis through the end of the semester and into the summer.

Oh, and those maps of named places in American fiction are coming shortly …

Some Thoughts on DH and Canons

Below is a draft of the talk I’m giving next week at Austin for the first of three DH symposia this semester sponsored by the Texas Institute for Literary and Textual Studies. The theme of this first meeting is “Access, Authority, and Identity”; my paper is an attempt to think through some of the implications of working beyond the canon (however construed) for straight literary and cultural scholarship and for DH alike. It’s also a nice excuse to show a little preview of the geolocation work I’ve been doing recently.

A prettier PDF version is also available.

Undermining Canons

I have a point from which to start: Canons exist, and we should do something about them.

I wouldn’t have thought this was a dicey claim until I was scolded recently by a senior colleague who told me that I was thirty years out of date for making it. The idea being that we’d had this fight a generation ago, and the canon had lost. But I was right and he, I’m sorry to say, was wrong. Ask any grad student reading for her comps or English professor who might confess to having skipped Hamlet. As I say, canons exist. Not, perhaps, in the Arnoldian–Bloomian sense of the canon, a single list of great books, and in any case certainly not the same list of dead white male authors that once defined the field. But in the more pluralist sense? Of books one really needs to have read to take part in the discipline? And of books many of us teach in common to our own students? Certainly. These are canons. They exist.

So why, a few decades after the question of canonicity as such was in any way current, do we still have these things? If we all agree that canons are bad, why haven’t we done away with them? Why do we merely tinker around the edges, adding a Morrison here and subtracting a Dryden there? Is this a problem? If so, what are we going to do about it? And more to the immediate point, what does any of this have to do with digital humanities?

The answer to the first question—“Why do we still have canons?”—is as simple to articulate as it is apparently difficult to solve. We don’t read any faster than we ever did, even as the quantity of text produced grows larger by the year. If we need to read books in order to extract information from them and if we need to have read things in common in order to talk about them, we’re going to spend most of our time dealing with a relatively small set of texts. The composition of that set will change over time, but it will never get any bigger. This is a canon. [Footnote: How many canons are there? The answer depends on how many people need to have read a given set of materials in order to constitute a field of study. This was once more or less everyone, but then the field was also very small when that was true. My best guess is that the number is at least a hundred or more at the very lowest end—and an order of magnitude or two more than that at the high end—which would give us a few dozen subfields in English, give or take. That strikes me as roughly accurate.]

Another way of putting this would be to say that we need to decide what to ignore. And the answer with which we’ve contented ourselves for generations is: “Pretty much everything ever written.” We don’t read much. What little we do read is deeply nonrepresentative of the full field of literary and cultural production. Our canons are assembled haphazardly, with a deep set of ingrained cultural biases that are largely invisible to us, and in ignorance of their alternatives. We’re doing little better, frankly, than we were with the dead-white-male bunch fifty or a hundred years ago, and we’re just as smug in our false sense of intellectual scope.

So canons, even in their current, mildly multiculturalist form, are an enormous problem, one that follows from our single working method, that is, from the need to perform always and only close reading as a means of cultural analysis. It’s probably clear where I’m going with this, at least to a group of DH folks. We need to do less close reading and more of anything and everything else that might help us extract information from and about texts as indicators of larger cultural issues. That includes bibliometrics and book historical work, data-mining and quantitative text analysis, economic study of the book trade and of other cultural industries, geospatial analysis, and so on. Moretti is an obvious model here, as is the work of people like Michael Witmore on early modern drama and Nicholas Dames on social structures in nineteenth-century fiction.

To show you one quick example of what I have in mind, here’s a map of the locations mentioned in thirty-seven American literary texts published in 1851:

1851

Figure 1: Places named in 37 U.S. novels published in 1851

There are some squarely canonical works included in this collection, including Moby-Dick and House of the Seven Gables, but the large majority are obscure novels by the likes of T. S. Arthur and Sylvanus Cobb. I certainly haven’t read many of them, nor am I likely to spend months doing so. The corpus is drawn from the Wright American Fiction collection and represents about a third of the total American literary works published that year. [Footnote: Why only a third? Those are all the texts available in machine-readable format at the moment.] Place names were extracted using a tool called GeoDict, which looks for strings of text that match a large database of named locations. I had to do a bit of cleanup on the extracted places, mostly because many personal names and common adjectives are also the names of cities somewhere in the world. I erred on the conservative side, excluding any of those I found and requiring a leading preposition for cities and regions, so if anything, I’ve likely missed some valid places. But the results are fascinating. Two points of interest, just quickly:

  1. For one, there are a lot more international locations than one might have expected. True, many of them are in Britain and western Europe, but these are American novels, not British reprints, so even that fact might surprise us. And there are also multiple mentions of locations in South America, Africa, India, China, Russia, Australia, the Middle East, and so on. The imaginative landscape of American fiction in the mid-nineteenth century appears to be pretty diverse and outward-looking, in a way that hasn’t received much attention.
  2. And then—point two—there’s the distinct cluster of named places in the American south. At some level this probably shouldn’t be surprising; we’re talking about books that appeared just a decade before the Civil War, and the South was certainly on people’s minds. But it doesn’t fit very well with the stories we currently tell about Romanticism and the American Renaissance, which are centered firmly in New England during the early 1850s and dominate our understanding of the period. Perhaps we need to at least consider the possibility that American regionalism took hold significantly earlier than we usually claim.

So as I say, I think this is a pretty interesting result, one that demonstrates a first step in the kind of analyses that remain literary and cultural but that don’t depend on close reading alone nor suffer the material limits such reading imposes. I think we should do more of this—not necessarily more geolocation extraction in mid-nineteenth-century American fiction (though what I just showed obviously doesn’t exhaust that little project), but certainly more algorithmic and quantitative analysis of piles of text much too large to tackle “directly.” (“Directly” gets scare quotes because it’s a deeply misleading synonym for close reading in this context.)

If we do that—shift more of our critical capacity to such projects—there will be a couple of important consequences. For one thing, we’ll almost certainly become worse readers. Our time is finite; the less of it we devote to an activity, the less we’ll develop our skill in that area. Exactly how much our reading suffers—and how much we should care—are matters of reasonable debate; they depend on both the extent of the shift and the shape of the skill–experience curve for close reading. My sense is that we’ll come out alright and that it’s a trade well worth making. We gain a lot by having available to us the kinds of evidence text mining (for example) provides, enough that the outcome will almost certainly be a net positive for the field. But I’m willing to admit that the proof will be in the practice and that the practice is, while promising, as yet pretty limited. The important point, though, is that the decay of close reading as such is a negative in itself only if we mistakenly equate literary and cultural analysis with their current working method.

Second—and maybe more important for those of us already engaged in digital projects of one sort or another—we’ll need to see a related reallocation of resources within DH itself. Over the last couple of decades, many of our most visible projects have been organized around canonical texts, authors, and cultural artifacts. They have been motivated by a desire to understand those (quite limited) objects more robustly and completely, on a model plainly derived from conventional humanities scholarship. That wasn’t a mistake, nor are those projects without significant value. They’ve contributed to our understanding of, for example, Rossetti and Whitman, Stowe and Dickinson, Shakespeare and Spenser. And they’ve helped legitimate digital work in the eyes of suspicious colleagues by showing how far we can extend our traditional scholarship with new technologies. They’ve provided scholars around the world—including those outside the centers of university power—with better access to rare materials and improved pedagogy by the same means. But we shouldn’t ignore the fact that they’ve also often been large, expensive undertakings built on the assumption that we already know which authors and texts are the proper ones to which to devote our scarce resources. And to the extent that they’ve succeeded, they’ve also reinforced the canonicity of their subjects by increasing the amount of critical attention paid to them.

What’s required for computational and quantitative work—the kind of work that undermines rather than reinforces canons—is more material, less elaborately developed. The Wright collection, on which the 1851 map that I showed a few minutes ago was based (Figure 1), is a partial example of the kind of resource that’s best suited to this next development in digital humanities research. It covers every known American literary text published in the U.S. between 1851 and 1875 and makes them available in machine-readable form with basic metadata. Google Books and the Hathi Trust aim for the same thing on a much larger scale. None of these projects is cheap. But on a per-volume basis, they’re not bad. And of course we got Google and Hathi for very little of our own money, considering the magnitude of the projects.

It will still cost a good deal to make use of what we might call these “bare” repositories. The time, money, and attention they demand will have to come from somewhere. My point, though, is that if (as seems likely) we can’t pull those resources from entirely new pools outside the discipline—that is to say, if we can’t just expand the discipline so as to do everything we already do, plus a great many new things—then we should be willing to make sacrifices not only in traditional or analog humanities, but also in the types of first-wave digital projects that made the name and reputation of DH. This will hurt, but it will also result in categorically better, more broadly based, more inclusive, and finally more useful humanities scholarship. It will do so by giving us our first real chance to break the grip of small, arbitrarily assembled canons on our thinking about large-scale cultural production. It’s an opportunity not to be missed and a chance to put our money—real and figurative—where our mouths have been for two generations. We’ve complained about canons for a long time. Now that we might do without them, are we willing to try? And to accept the trade-offs involved? I think we should be.