Those who follow my work will know that I’ve been focused for a while on issues around the preservation and scholarly investigation of a variety of different kinds of born-digital materials, from electronic manuscripts to video games and virtual worlds. I’ve tried to be a bridge between the professional practitioners in the archives community and conversations ranging from digital forensics to media archaeology and textual scholarship. Recently, I’ve found myself engaged specifically with the problem of software as a class of born-digital materials: executable content. There’s been a rise in the level of community engagement around this topic, and some even more exciting developments are in the works. In the meantime, I’ve gathered my contributions to this emerging discussion here.
Last May, the Library of Congress sponsored a two-day meeting called Preserving.exe, dedicated to jumpstarting a national strategy around executable content. I reported on the meeting for Slate magazine in a piece called “History.exe”; my editor then asked me if I would be willing to put myself out there with my own “Top 10” pieces of software I thought deserved long-term preservation, which I did: “The 10 Most Influential Software Programs of All Time.” The comments are the best part (really: my list was merely a pretense and catalyst for discussion, and, as expected, there are some awfully good suggestions in there). The official report from the Preserving.exe meeting is now out (PDF), and contains essays by Henry Lowood, Alice Allen, Peter Teuben, and myself. My piece, entitled “An Executable Past: The Case for a National Software Registry” (pages 12-22), offers a rationale for such an effort modeled (albeit with some crucial distinctions) on the well-publicized activities of the National Film Registry.
Back in September, Rhizome hosted a panel discussion at the New Museum on “Born-Digital Conservation in the Computer Age” featuring Lori Emerson, pioneering video and computer graphics artist Lillian Schwartz, and me. We had a wide-ranging discussion, ably steered by Ben Fino-Radin (video of the event was recorded but is not currently available). Meanwhile, the text of my remarks at the Library of Congress’s Electronic Literature Showcase last April is just downstream here: “Confessions of an Incunk.” In them I focus on electronic literature as software and executable content.
Finally, a long essay entitled “The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary” is now out in Digital Humanities Quarterly. Among other things, the piece offers an intervention in the OAIS reference model by way of Wolfgang Ernst and media archaeology.
This is the text of a talk I gave on April 5, 2013 on the plenary panel at the Electronic Literature Showcase at the Library of Congress, curated by Kathi Inman Berens and Dene Grigar.
In a 2006 novel entitled Lisey’s Story, Stephen King recounted the thoughts of a best-selling writer’s widow as she fends off letters and phone calls from fans, collectors, and academics desperate to get their hands on her husband’s literary remains, his unfinished manuscripts, notes, drafts, correspondence—and the hard drives in his computers. “There were lots of words for the stuff Scott had left behind. The only one she completely understood was memorabilia, but there was another one, a funny one, that sounded like incuncabilla. That was what the impatient people wanted, the wheedlers, and the angry ones—Scott’s incuncabilla. Lisey began to think of them as Incunks.”
I come before you today, unapologetically, as an Incunk, that is, one who has assumed archival and curatorial stewardship over the two electronic literature collections at my university (both, happily, from writers who are still among us, one of whom is even in this room today). In my remarks I want to candidly consider some of what is at stake in these transitions and transactions, as electronic literature passes from outsider practice to cultural heritage as sanctioned by its passage from private hands to an increasing number of major collecting institutions.
All of this is a relatively recent phenomenon. The Harry Ransom Center at the University of Texas at Austin, known for its collections in innovative 20th century writing, led the way by acquiring Michael Joyce’s papers in 2005. Joyce, of course, is often credited with writing the first extended work of hypertext fiction, 1987’s Afternoon: A Story, published by Eastgate Systems. The accession included some 60 manuscript boxes, as well as 371 floppy disks and three hard drives. Like all of the collections I will discuss today, the Michael Joyce papers are thus a hybrid archive, neither exclusively digital nor analog. As the first researcher to work with those materials, I found myself continually moving back and forth between page and screen, between manuscript materials of the sort cherished by any visitor to rare books reading rooms—including a precious sheet of handwritten notes and code fragments for Afternoon, complete with coffee cup stain—and the Ransom Center’s computer workstation, which I used to access the digital files that had been painstakingly migrated from those original floppies and hard drives. In 2007, the Maryland Institute for Technology in the Humanities (MITH) at the University of Maryland where I work was gifted with a substantial collection of computer hardware, storage media, hard-copy manuscripts, and other collectible material from the author, editor, and educator Deena Larsen. MITH is not an archives in any formal institutional sense: rather, it is a working digital humanities center, with a focus on research, technical innovation, and supporting new modes of teaching, scholarship and public engagement. While this posed obvious challenges for our stewardship of the Larsen Collection, we also saw some unique opportunities; in late 2011, Bill Bly, also an Eastgate author, agreed to gift his own considerable collection of electronic literature and author’s papers to MITH, where it now joins Deena’s materials. 
Duke University’s Rubenstein Library, meanwhile, has also begun collecting the Eastgate writers. A 2009 accession from e-poet Stephanie Strickland garnered some 5,500 items occupying over a dozen linear feet, including “journals and anthologies featuring Strickland’s poetry; TechnoPoetry Festival materials; schoolwork, college, and graduate papers; posters and programs from events; proofs and drafts of her writings; and audio recordings.” The materials also include a CD-ROM containing data.[i] A year later Duke acquired an even larger collection from Judy Malloy, yet another key figure in the electronic writing community, including drafts and documentation for works such as Uncle Roger and its name was Penelope, as well as journals, exhibition files, correspondence, and research notes for her non-fiction. This collection also contains a number of diskettes.[ii] Stanford University Libraries acquired the papers of Infocom co-founder and interactive fiction pioneer Steve Meretzky in 2009. At 42 linear feet, it includes “design and game development documents, correspondence, paper files, electronic games, magazines, data disks, original game package artwork, PR and marketing materials, and miscellaneous electronic game industry memorabilia.”[iii] And the Bodleian Library at Oxford recently received materials related to the production and publication of William Gibson’s famous disappearing poem Agrippa from publisher Kevin Begos.
Yet what does it mean for an entity such as the Bodleian or the Harry Ransom Center to collect electronic literature? Should electronic literature be archived or should it be showcased? Should it be exhibited or should it be preserved? Should the data be migrated from one system to another, a step ahead of bit rot and the inevitable platform obsolescence, or should the original systems and machines be lovingly maintained, a curator pausing to squeeze a drop of oil into the old disk drive mechanisms from time to time while stockpiling spare boards, parts, and circuits from eBay? It’s tempting to take these rhetorical questions as invitations to indulge our instinct for indeterminacy, and celebrate the ways in which electronic literature continues to pose challenges, push boundaries, and confound the establishment. The reality, however, is that more and more libraries and archives are developing procedures and workflows for accommodating digital content regardless of whether or not they are actively engaged in the specific project of collecting electronic literature. It is difficult to imagine any public figure—author, artist, scientist, politician, or celebrity—with an active career from the 1980s forward whose personal “papers” do not include some form of computer storage media, or perhaps even entire computers. The most famous example to date is likely Salman Rushdie, who, despite having better motivations than many of us for safeguarding his privacy, has nonetheless transferred four of his old computers to Emory University’s Manuscripts, Archives, and Rare Books Library, where the hard drives have been forensically indexed, processed, and redacted, with selected content made available to researchers on a dedicated workstation in the reading room that supports a complete virtual emulation of the original operating environment, Macintosh System 7. 
This extreme amount of curatorial attention is clearly warranted both by Rushdie’s preeminence and the potential sensitivity of the contents of those hard drives, several of which date from the fatwa period; nonetheless, archivists are developing efficient workflows for processing digital materials at much larger scales, and such procedures and techniques are now a routine part of professional training in the archives profession. Many of the digital materials in collections I have been describing—notably word processing files and email correspondence—thus fit comfortably with the parameters of what is increasingly normalized professional practice.
Of course much of the most significant born-digital material related to electronic literature moves beyond data files and email; it is, in fact, properly speaking, software, that is to say executable programs dependent on an operating system and complete computing environment. HyperCard, Z-Machine, Storyspace, TADS, Macromedia Director, Java, Flash, Twine, and Processing are all obvious examples. Here best practices are less certain, but institutions are actively aware of the challenge, as the Library of Congress’s upcoming Preserving.exe meeting, to be held next month, amply demonstrates. Moreover, there is significant grassroots activity in the area of software preservation, most notably in the retro gaming community, where fans, hobbyists, and enthusiasts routinely build and maintain emulators and interpreters to keep obsolescent platforms alive. Emulation alone is not a panacea and introduces its own risks and challenges, not least of them being the ongoing upkeep and migration of the emulators themselves. Emulation also always entails mediation—more so than most people realize, with everything from sound effects to processor speeds potentially a variable. Notably, the Smithsonian’s recent Art of Video Games exhibition chose to forgo emulation and run the various games it was featuring from as much of the original hardware as possible, with concessions only to large-format display screens. Yet more than a decade ago, in a prescient essay entitled “The Hard Work of Software History,” Stanford University Libraries curator Henry Lowood made the point that the technical challenges involved in software preservation would likely ultimately be subordinate to the tribal differences manifest in competing practices and priorities among libraries, archives, and museums, all of which have claim to the work of preserving software.
He writes: “[W]hile the relationship of software to hardware, its storage on physical media, or its association with artifacts such as disks, computers, and boxes, might lead one to think of software as fit for the museum, requirements of scholarly access such as identifying and locating sources, standards of indexing and meta-data creation, and maintenance of collections for retrieval and interpretation seem more in line with the capabilities and programs of libraries and archival repositories” (160). The material or artifactual dimension of electronic literature— and we experience this so vividly with Deena Larsen’s myriad craft-based writing practices—adds yet another layer of curatorial considerations. Electronic literature thus sits comfortably neither in the rare books reading room nor in the computer history museum. But while electronic literature thus poses significant curatorial challenges when it takes the form of executable software, it is less clear to me that those challenges are in any way unique vis-à-vis how other communities of interest, such as gamers, are also contending with the problem.
We can take satisfaction in the knowledge that the papers and records of authors such as Joyce, Larsen, Bly, Malloy, and Strickland, as well as Gibson and Meretzky, now reside safe in the alabaster chambers of some of the world’s preeminent cultural heritage institutions. Yet this development also entails trade-offs and compromises. All of these institutions would have obtained a signed document known as a deed of gift serving to legally transfer ownership of the collection materials from the donor to the archive. While the particulars of deeds of gift of course vary, it is commonplace to include a clause dedicated to the de-accession of collection materials. This empowers the institution to discard that which it deems to lie outside the scope of its collecting strategy. In such cases control over collective memory has been legally ceded. Moreover, while the Ransom Center, Duke, and the University of Maryland all afford varying degrees of access to the electronic literature materials in question, the level of access varies and in all cases is more restrictive than would be the case were the material still to be in the hands of its original creators. First, of course, one must physically travel to the institutions in question. While some born-digital materials can be made available through the Web, the norm is for them to be provided on dedicated computers in the reading room, often with network and peripheral ports disabled—to prevent a patron from copying the contents of a collection to a USB stick, for example. Thus to examine one of Michael Joyce’s digital drafts I still must make the pilgrimage to Austin, just as if I wanted to examine the hard copy manuscripts of that “other” Joyce, who is also collected by the Ransom Center. Policies will also vary as to the amount of assistance available in accessing legacy electronic formats. Sometimes a digital file might be presented as what’s known colloquially as a BLOB, a Binary Large Object. 
No effort will have been made to migrate the content to a contemporary format. While such objects can be inspected using utilities like hex viewers, they will not perform as executable programs. Likewise, some institutions may provide a complete bitstream image of an original diskette, which allows for mounting and emulating the original media as well as various forms of forensic analysis, while others may offer access only to individual files. At present MITH at the University of Maryland is the only one of these institutions that would allow a researcher to actually insert an original diskette into a vintage computer and access content by flipping the power switch. Finally, a collection will typically not be available until after it is processed, meaning finding aids have been prepared and all of the material cataloged and described according to the institution’s internal standards. Collections processing requires staffing, and in this age of increasingly scarce resources more and more collections are lying in unprocessed limbo, accessioned by an institution that lacks the resources to process them for public access. I am happy to report, however, that with the exception of the Meretzky papers at Stanford all of the electronic literature collections I have been discussing have been processed and are open to patron use. Even so, there is no guarantee that a writer’s fonds will all be collected at the same institution; both Larsen and Malloy, for example, have also gifted materials to the University of Colorado’s Media Archaeology Lab, meaning future scholars must needs visit more than one locale.
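For readers who have never confronted a BLOB in the reading room: what a hex viewer shows is simply the file’s raw bytes, and even without migration those bytes often announce their format through a “magic number” at the start of the file. Here is a minimal sketch in Python of both ideas; the format signatures are real, but the code is illustrative only, not any institution’s actual workflow:

```python
# A minimal sketch of what a hex viewer reveals about an opaque binary
# object (a "BLOB"): raw bytes in the classic offset / hex / ASCII
# layout, plus a format guess from the file's leading "magic number."

def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex values, and printable-ASCII columns."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        textpart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hexpart:<{width * 3}} {textpart}")
    return "\n".join(lines)

# Leading-byte signatures for a few formats common in legacy collections.
MAGIC = {
    b"\xd0\xcf\x11\xe0": "OLE2 container (legacy Microsoft Word, among others)",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP container (.docx, .epub, and kin)",
}

def sniff(data: bytes) -> str:
    """Guess a format from the first bytes; 'unknown' if nothing matches."""
    for magic, label in MAGIC.items():
        if data.startswith(magic):
            return label
    return "unknown"

print(hexdump(b"%PDF-1.3 ..."))
print(sniff(b"%PDF-1.3 ..."))
```

Real tools such as the Unix `file` utility and hex viewers like `xxd` do exactly this, at much greater depth; the point is that a BLOB is opaque only in the sense that nothing has interpreted it for you yet.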
I began with Stephen King. But what’s really at stake in all this is what another novelist, A. S. Byatt, once summarized in the title of her brilliant novel of literary remembrance and collecting as possession. The term cultural heritage is a polite one that masks all manner of impolite and impolitic but inescapably political questions, not least of which are: Whose culture? And whose heritage? And above all, who gets to keep it, to have it, literally to hold it, to possess it in their hands?
These are impassioned issues, and tensions around them, even among friends, will sometimes run high. Byatt’s possession, after all, is also finally about obsession, the word we give to the haunting, sometimes seemingly otherworldly negotiation of memory, desire, and experience. This Incunk believes we can negotiate the balance between preservation and access, between exhibition and safekeeping, between the showcase and the vaults. This negotiation is without a doubt “emergent,” to use Kathi Berens’s word from her curator’s statement for this week’s Showcase; but it is also a negotiation that is emerging within the context of a much wider conversation in the archives and cultural heritage community, for instance under the auspices of the Library of Congress’s National Digital Stewardship Alliance, where practitioners from a wide array of different domains and subject areas collectively look toward the future of our oh-so-recent digital past. Electronic literature, outsider practice though it still may be, has more in common with these other spaces and domains than it has apart from them.
I wrote this short piece for Slate about my pick for the first novel written with a word processor—a teaser from Track Changes.
This is the text of a brief (5-minute) talk I delivered today at the Personal Digital Archiving 2013 conference as part of a panel entitled “All Your Bits Aren’t Belong To Us: Opportunities and Challenges of Personally Revealing Information in Digital Collections.” Fellow panelists were Cal Lee, Naomi Nelson, and Kam Woods. The ideas could bear some further development—five minutes isn’t a lot of time—but the basic objective was to disrupt the linear donor-archivist axis that predominates in these discussions and instead consider the issue in the context of a networked landscape populated by human and non-human actors alike. I get there via Jerome McGann, Manuel DeLanda, big data, and a touch of OOO.
Three years ago, in this very room, Clifford Lynch, closing out a Mellon-funded conference on Digital Forensics and Cultural Heritage, the first ever of its kind, stood where I am standing and asked “How many of you would like to be the target of a forensic investigation?” The implication was clear: by putting ourselves in the shoes of our donors, we’d immediately come back down to earth and gain some perspective on just what digital forensics means for how we process and steward the collections entrusted to us.
Most discussions to date of the place of ethics in cultural heritage applications of digital forensics posit just such a negotiation between donor or originator, and archivist. This is certainly the case in the CLIR report we published in the aftermath of that meeting. In the ideal circumstance, the donor communicates his or her wishes and intentions, and the archivist does his or her professional utmost to see them through. In the few minutes I have today I want to speak on behalf of the scholar, or patron of those collections. “Scholarship,” notes Jerome McGann, “is a service vocation. Not only are Sappho and Shakespeare primary, irreducible concerns for the scholar, so is any least part of our cultural inheritance that might call for attention. And to the scholarly mind, every smallest datum of that inheritance has a right to make its call. When the call is heard, the scholar is obliged to answer it accurately, meticulously, candidly, thoroughly.” What is notable about this statement from McGann, apart from its characteristic passion and urgency, is the notion, subtly rendered, that data themselves—some bit, some shard, some small thing forgotten—has the potential, and indeed the right, to “make its call,” that the very stuff and matter of the cultural record is itself vested with agency in this negotiation. Scholarship is thus a vocation in the service of the inanimate. Not just the memory and shades of Sappho and Shakespeare, but their irreducible physical remainder.
To elevate the inanimate to a position of privilege in a conversation about ethics may seem perverse. Yet we live in a moment when machines are also actors, not figuratively but operationally, as constituents of the new everyday. Our philosophies tell us so, whether the networks of Bruno Latour or the “machines speaking to machines” once posited by Felix Guattari and realized now in our most quotidian interactions with the World Wide Web. Or our key chain’s car fob. Two decades ago, in the preface to a book entitled War in the Age of Intelligent Machines, Manuel DeLanda posited a robot historian who would excavate the so-called machinic phylum for evidence of the clockwork past amid the sentient data grids of our near future. Today, automated agents “crawl” the Web, mining, scraping, and harvesting; and sometimes we deploy text files called “robots” to ward against them. A new movement in philosophy, speculative realism or object-oriented ontology, enjoins us to abandon “correlationist” worldviews—the notion that experience is only meaningful when aligned with the human consciousness and sensorium—and ask instead “what it’s like to be a thing?” A shoelace? A shovel? A pressing of Jack White’s “Sixteen Saltines” single?
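Those warding text files are literal: under the Robots Exclusion Protocol, a site publishes a plain-text robots.txt file at its root, and well-behaved crawlers consult it before harvesting. A minimal example (the path shown is hypothetical):

```
User-agent: *
Disallow: /collections/restricted/
```

Note that compliance is entirely voluntary; the file is a request addressed to machines, not an enforcement mechanism, which makes it an apt emblem of this negotiation among non-human actors.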
Objects and our relations to them change over time. All objects, over time, eclipse human agents in their primacy. Digital objects merely accelerate the process. Our hard drives leverage engineering of unthinkable complexity to inscribe signals irrespective of whether they are the product of human users or (as is nearly always the case, statistically) the churn of the operating system. They do this many thousands of times a second, at something near the nano-scale. The durations and density of a bitstream thus bear little relevance to the human sensorium. In a book I published a little while back I termed this the forensic imagination. I invoked the brilliant maverick artist and cognitive scientist Michael Leyton for his contention that we are prisoners not of the past, but of the present—meaning that, in his words, “all cognitive activity proceeds via the recovery of the past through objects in the present” (2). That is the scholar’s vocation. As text mining and similar techniques are brought to bear on collections of cultural heritage data, we will “read” not just at the level of individual datum, but will seek to infer, to triangulate, to extrapolate, and individuate. Data will inevitably be loosed from its moorings to particular historical personages. This is the true realization of DeLanda’s robot historian—not some anthropomorphic iron giant, but the sense-making algorithms of social and semantic computing at scale. The bitstream that is consigned to a dark archive out of respect for an individual donor’s wishes may hold the key for unlocking patterns of knowledge that are hopelessly opaque or oblique when considered on the scale of mere human agency. So let us bear that in mind too when we talk of ethics and donors. The robot historian of my title is the brute-force implementation of that which McGann once termed “the scholar’s art.”
On February 6 I’m running an introduction to MLA Commons for the UMD English department. We’re doing a BitCurator panel on ethics and digital forensics at the Personal Digital Archiving conference February 21-22. I’ll be giving a talk with Wendy Chun at NYU on March 1, both of us speaking under the general rubric of Media Archaeology. I’ll also be doing a workshop the afternoon before the event called “8-Bit DH: Locating the Literary History of Word Processing.” On March 21, I’ll be at McGill University to share current work from Track Changes for their annual Digital Humanities Lecture; something is also shaping up at Concordia the day beforehand, will add details when I have them. On April 5 I’ll be at the Library of Congress for their Electronic Literature Showcase. On April 6 I’m one of the plenary speakers for the UMD Graduate English Organization’s conference on “(Dis)Realities and the Literary and Cultural Imagination” (my talk: “What Was Digital Humanities?”). On April 25, I’ll be at Yale for the History of the Book Program, and will stay on to speak at the Beinecke’s conference on Beyond the Text: Literary Archives in the 21st Century that weekend.
Text of my talk at the 2013 MLA Presidential Forum Avenues of Access session on “Digital Humanities and the Future of Scholarly Communication.” Slides (PDF) are available here. [Also on MLA Commons.]
I attended my first MLA convention in 1996. I was a PhD student at the University of Virginia in Charlottesville at the time, and the MLA was just up the road in Washington DC. A bunch of us carpooled. Like many first-time attendees, I had earnestly mapped out my convention schedule in the space provided at the front of the program. One session in particular stood out to me, premediated (as we now say) with arena rock-like production values: [SLIDE]
“The Canon and the Web: Reconfiguring Romanticism in the Information Age,” organized by Alan Liu and Laura Mandell, two names very familiar to contemporary observers of the digital humanities. [SLIDE] The Web presence you see here—one of those distant mirrors of my title—had been placed online months prior to the convention, on May 26th to be exact. As Liu commented in a recent email to me, “I am struck (as you are) by how fleshed out that panel site was.” There are animated GIFs and gratuitous tables, yes, but there is also an evident will to situate the session amid a thick contextual network (the links to associated projects, related readings, and relevant sites); there is a clear desire for interactivity, as expressed through the live email links and the injunction to initiate correspondence; and there is also a curatorial sensibility which seems very contemporary to me, [SLIDE] most notably through the “Canon Dreaming” links which take users to collections of materials assembled by Liu and Mandell’s students—certainly prototypes of what we would today realize through, say, an Omeka installation. [SLIDE] Finally, there were the participants, complete with links to email and home pages alongside the thumbnail bios, an obvious early instantiation of the ubiquitous user profiles we routinely create for social media services. The names here reflect many who have continued to be thought leaders and key practitioners in the space we now call DH, and they were a key element of the panel’s appeal.
The real backbone of scholarly communication at the time remained listserv email. There was a rawness to it. You subscribed and you were on, sometimes pending moderator approval, usually more for spam control than anything else. Once you were on, you posted. Or lurked. Or flamed. Or accidentally hit reply-all when you meant to backchannel. But you didn’t have to worry about how many followers you had or if you were popular or pithy enough to be retweeted. You didn’t have to ask someone else if you could be their friend in order to converse with them. Strange, down-the-rabbit-hole geographies of influence formed, where the mainstay of a list would turn out to be a graduate student, or an emeritus at an obscure institution in New Zealand.
[SLIDE] The starting point for any discussion of email in the digital humanities must be the venerable HUMANIST listserv, whose first substantive message [SLIDE] is time-stamped 14 May 1987, 20:17:18 EDT, from one MCCARTY@UTOREPAS. This is, of course, Willard McCarty, who still edits the list to this day. HUMANIST: the name itself reminds us of a time when it was hard to conceive of a need for more than one listserv serving the academic humanities, that most general of titles serving to distinguish it from, say the LINGUIST list, which came along in 1990.
Humanist remains active today, its digests delivered regularly to, I suspect, a number of inboxes in this room. [SLIDE] Some of you may also follow an account on Twitter dubbed @hum_comp. The owner of this account, who wishes to remain anonymous, [SLIDE] is chronologically culling the Humanist archives, starting with the earliest entries, for tweet-length tidbits that either seem quaint or remain relevant. “I was immediately struck,” said the account owner to me via email, “by how similar they sounded to the conversations I [have] been listening to, and tentatively engaging in in 2009/2010. . . [T]he early HUMANIST listers saw much clearer than I ever did in the 1980s and 1990s what the challenges and promises of humanities computing and communication really were, right from the beginning of an expansive networked communication.” Thus: [SLIDE] [SLIDE]
The longevity of Humanist notwithstanding, by 1998 it was clear that many of the scholarly listservs that had sprung up in the first half of the decade were already living out their use horizon. One of the lists that was important to me at the time was entitled H-CLC, which was part of the H-Net consortium. [SLIDE] The CLC stood for Comparative Literature and Computers. In early November 1997 Nelson Hilton wrote to it: “A growing number of inactive lists, it seems, have folded their tents and disappeared into the electronic night after a ritual call into the void met with resounding stillness. Perhaps it is a sign of advanced maturity in the medium — novelty has long worn off and we return to work at hand. If a list speaks to that work, we pause and read, perhaps respond — otherwise, quite rightly, why bother?” (Mon, 3 Nov 1997 10:33:21) “A sign of advanced maturity in the medium,” Hilton mused. I remind you, this was 1997.
The next Great Migration was to the blogosphere. Even as the lists were folding, the blogs began to spin up, built on nascent social networking scaffolding in the form of blogrolls, comments, trackbacks, and RSS feeds. Blogs are still very much with us today of course, but these were the salad days. Not WordPress or Blogger but Movable Type. Remember MT? [SLIDE] Or maybe you hacked your own. Some bloggers seemed to exist only in the interstices, “comment blogging” as some of us called it, living out their online identities in the long trellis of text that dangled from the bottom of prominent postings. Group blogs were a particularly notable feature of the scholarly landscape, with venues like The Valve, Crooked Timber, Wordherders, and Cathy’s HASTAC becoming daily reads for many wired academics. [SLIDE] For me, the most important such experiment was GrandTextAuto, which for five or six years was host to numerous important and intense conversations in digital studies, electronic literature, computer games, procedural literacy, and what we nowadays call critical code studies and software studies. [SLIDE] Its “drivers,” who included Scott Rettberg, Nick Montfort, Noah Wardrip-Fruin, Andrew Stern, Michael Mateas, and Mary Flanagan, created an active and energetic user community, one with sufficient gravitas and presence to spur the MIT Press to use it as the platform for the open peer review of Wardrip-Fruin’s first monograph, Expressive Processing. Today when I hit the site I get this. [SLIDE] When I emailed Nick to ask for his thoughts he said this to me, among other things: “Those of us who started out as graduate students took on professorial and administrative responsibilities, and there was less time available to play gadfly, perpetrate April Fool’s jokes, and explore aspects of projects through online discussion — instead, we had to write grants, teach classes, supervise graduate students, and so on. 
So, the heavy burden of responsibility, growing up, and so on.” For my part, I think the rise of Twitter also had a strong impact on the vitality of the blogosphere, even though, as I’ve written elsewhere, blogs and Twitter co-exist with one another in powerful mutually enabling ways.
[SLIDE] Which brings me to the new MLA Commons, launching this weekend here at the convention. [SLIDE] The MLA announcement states that it is intended “to facilitate active member-to-member communication” and “offer a platform for the publication of scholarship in new formats”—language that echoes, among other early electronic exemplars, the Welcome message from the Humanist listserv so long ago. The MLA Commons is built out from the CUNY Commons in a Box package, which is built on top of the BuddyPress extensions to WordPress, which in turn depends on the Linux-Apache-MySQL-PHP (LAMP) stack of my title: four bedrock open source technologies that all existed, but were in their infancy, at the time of that 1996 Canon and Web panel. Working from the distant and very partially mirrored history of online scholarly communication I’ve been sketching, there are a few propositions I want to leave you with.
First, access always engenders power. Power dynamics are built into our social networking services at the most basic level—indeed, the ability to define and operationalize various strata of relationship functions—trust, visibility, reciprocity—is arguably at the heart of the read/write Web. [SLIDE] To wit: for the past few days my email inbox has been regularly populated by messages from “MLA Commons” with the subject heading “New contact request.” [SLIDE] I’ve accepted them all, including the ones from people I don’t really know, because the platform is new and my instinct is to err on the side of openness. But I don’t accept all Friend requests on Facebook (nor are all of my earnest solicitations accepted) and I don’t follow back everyone who follows me on Twitter even as I do follow people I’d give my eye teeth to have follow me—but alas, they don’t. As MLA Commons gains steam, at what point do the filters go up, to an extent replicating existing power dynamics in the profession? Do I accept contacts from everyone in my home department? Does someone with a convention interview send contact requests to members of a search committee? If they do, is it presumptuous? If they don’t, are they anti-social in the most literal sense? Do I accept all contact requests from those of a higher professional rank than me? Accept no contact requests from those of lower rank? Anyone who manages their relationships so coarsely is missing the point and very likely has life problems far greater than their facility with online social networking, but given that reputation metrics (whether in the form of Web 2.0 services like Klout or scholarly rankings such as the social sciences’ H-Index) are operationalized in the very marrow of our online media, including digital publications and citations, we’d be mistaken to believe that these concerns are extraneous or can be entirely sidestepped by any mature scholarly communications network.
It’s not a reason not to have a Commons, but it is a reason to be mindful of the relational and procedural models built into its fabric, especially given the varied technologies the end-user experience rests upon.
Access also always entails risk. [SLIDE] I have had a home page on the Web since the summer of 1995. [SLIDE] In 1997 I began writing my dissertation online, not quite “live,” more of a time-shift as I edited and managed different document technologies, but always posting the prose in full, not just excerpts. [SLIDE] My inspiration here was Harlan Ellison, who regularly wrote short stories seated in the window of community bookstores. In 1977 he did it for a full week, a story a day, in a Los Angeles bookshop. I recognized early on that the Web had the potential to be an even more perfect panopticon; no great stretch, but I was also in touch with enough of my baser instincts to understand that the visibility and feedback the project seemed likely to attract would entice me and keep me going. Inevitably I was asked whether I was worried about people plagiarizing my work. Again, this was in the era before blogs, and indeed before most middle-state writing online; the equivalent of the blog essay, or “blessay” as Dan Cohen has called it, was mostly confined to listservs and text files distributed via FTP drops—very different from the agora of an open Web indexed by even pre-Google search engine technology. My response to the question about plagiary was to invoke another literary authority, specifically Poe’s purloined letter. [SLIDE] Hiding my ideas in plain sight, I argued, was the single best way to get them into circulation and ensure the necessity of referencing and citing them (as opposed to merely swiping them). And for the most part, I was right. The experiment paid real professional dividends. It got my work an audience, airtime in front of the eyeballs I most wanted for it, and the work was judged, usually though by no means universally, favorably. More I could not ask for, and I carried the same ethos over into my blogging, which began in early 2003.
[SLIDE] From the outset I blogged under my own name, and I wrote a response—a blessay, I suppose—explaining why, when Ivan Tribble (remember all that trouble with Mr. Tribble?) penned a 2005 piece for the Chronicle of Higher Ed called “Bloggers Need Not Apply.” In the comments I asked others to weigh in with examples of the professional dividends their online identities and reputations had reaped, and I collected several dozen instances further testifying to the power of, as the title of Phil Agre’s classic text has it, “networking on the network.” [SLIDE] I got onto Twitter in 2006, again perhaps just ahead of the adopters’ curve. Nonetheless, I come before you today to say this: I have not blogged every good idea I have ever had. I have not tweeted every insight or reference or revelation. There’s stuff I keep to myself, or better, stuff I release strategically rather than spontaneously, and it will fall to all of us, on MLA Commons and elsewhere, to find our own personal and professional comfort zones regarding what we give out to our contacts and groups, the membership at large, the public at large. Access always entails risk, and while we know scholarship is not a zero-sum game, more tangible and no less sustaining forms of reputation and reward sometimes, even often, are.
Finally, access always requires time. One scenario, I suppose, is that our online interactions will eventually progress to the point that genre and platform dissolve. Our work becomes our discourse stream, and a comment, a tweet, a blessay, or a monograph will all become part of it, winding more or less tightly within and amid the equally interwoven discourse strands of others. Fuzzy mathematics measuring out influence, recognition, and trust will iterate and calculate in real-time as our conversations thrive, hum, surge, and falter in the collective hive. Rivals will wage guerrilla flame campaigns across the endlessly reticulating strata of the network, their memes warring with one another, spawning algorithmic eddies and tides that will buffet entire fields and disciplines. Tenure, if it still exists, may be based on cycles of attention rather than the accumulation of publications—propagate or perish. But I don’t think so. For one thing, platforms are perhaps further from interoperability than ever; Bruce Sterling, for example, predicts that 2013 will be the year of silos and tactically burned and broken bridges. Genres may indeed fray around the edges, but in the end every comment, every sentence blogged and logged, every essay chapter finished and book published is a withdrawal from the finite bank of productivity that is shaped by our individual intellects, our irreducible bodies with all their vulnerabilities, our jobs, our responsibilities, our loved ones, our hobbies and distractions and vacations and time in the woods.
[SLIDE] Which is to say that however much attention spans may be redefined and reconfigured—psycho-cognitively, neuro-chemically, and otherwise—most scholarly careers, for now and the foreseeable future, are still going to be measured out in fairly pedestrian ways, in the summer months and three-day weekends and that rare “empty” day during the semester, perhaps on fellowship or sabbatical if we’re lucky and privileged, often at the expense of an hour’s sleep or a movie with the rest of the family when we’re not. Our scholarly communications networks will either cohabitate with the myriad obligations of that ticking temporal complex, or they will lie fallow—most of us are too far out as it is, and not Google Waving but drowning.
So I would leave you with one final urging—that it’s not too soon for MLA Commons to be planning for its own planned obsolescence, or what Bethany Nowviskie and Dot Porter have termed “graceful degradation.” Our social (and our scholarly) networks are ever more porous, but they have yet to become reliably portable. This is a problem. Relationship economies require enormous amounts of attention and investment, and for those of us who live active online professional lives, a non-trivial amount of time is devoted to cultivating and aggregating those networks in diverse, subtle, and not-so-subtle ways. [SLIDE] Many of you have seen your own online networks shape-shift across listservs, blogs, and now the newest social media platforms. Universal avatars, imported contact lists, these are stop-gap measures; more promising for the next phase of scholarly communication may be mature personal unique identifiers, [SLIDE] as promised by ORCID, a service that “distinguishes you from every other researcher and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between you and your professional activities.” I’m currently on Twitter, Slideshare, Zotero, Google+, Facebook, and DH Answers, to name just a few. I want to migrate and port not just my content but also my reputation and relations. Regardless of whether or when the MLA Commons folds its tent and decamps into that vast electronic night—hopefully not for a great many attention cycles—what should endure are the relationships it fosters and the work thus performed, the “work at hand” as Hilton put it on H-CLC. [SLIDE] Thank you and yes, do join, and do send me your contacts.
Nobody teaches you how to write a book. Yes, in graduate school, you may get “feedback” on your dissertation to a greater or lesser extent from mentors and peers. But that typically has very little to do with the process of executing on a marketable book project (and even scholarly monographs have to be marketable, all the more so in the current publishing climate). So writing—making—a book is something most of us figure out on our own, as assistant professors, on the tenure clock. In my case, my first book, Mechanisms, had relatively little in common with my dissertation—really only a single chapter (some of the work on Afternoon, if you’re wondering). But even though I had an advance contract for it, I didn’t really know what I was doing. It was very much a process of feeling my way, stepping along from one passage, paragraph, section, and chapter to the next. I often refer to Mechanisms as my “kitchen sink” book, meaning I thought I had to get every interesting thing I ever knew about computers and textuality into it or it would somehow be less than it should have been.
The backend of the project, meanwhile, was a mess. I wasn’t using citation management software. I kept drafts in bloated Word files with titles like “chapt 3 notes and fragments” that I would occasionally pick through to make sure I wasn’t overlooking anything worthwhile. I didn’t use an outliner. I didn’t use Zotero or Evernote. Online sources? PDFs sat in folders alongside HTML scraped from the Web using my browser’s Save As function. Somehow, though, I found my footing along the way and worked through to completing the text as a manuscript (which subsequently underwent review, revision, and copy-editing). At the last minute I sent frantic emails soliciting permissions for images. Luckily my publisher took care of the index.
Track Changes, the book I started a year ago, is different. Like all of my projects I began with questions, questions that I honestly couldn’t answer for myself. I genuinely wanted to know who was the first author to write a novel with a word processor. (And now I believe that I do.) But I also began with a fairly complete sense of the topic space I wanted to cover, namely the literary history of word processing. I knew there would be a beginning, or a series of beginnings, as I covered the early adopters. I knew the arc of the project would conclude in the present day, with the emergence of tools like Scrivener and the dissolution of desktop software into the so-called cloud. I knew there would be interesting stories and personalities along the way, though I didn’t yet know what most of those were or who they would turn out to be. But still: I could see the shape of the whole as a “book,” and I mean that quite literally: I could (and can) visualize it as an object in the world, something that I was only able to do with Mechanisms much later in the process.
Unlike many newly promoted associate professors, I didn’t immediately begin work on “the next project.” Or to put it more accurately, I didn’t immediately begin work on the next book project. I did do lots of writing though, including this and this and this. And I also helped plan, launch, and run this for two years. By the time I was getting ready to start work on Track Changes in earnest, with the support of a year’s research leave on a fellowship, I felt I had gotten some healthy distance from the experience of writing Mechanisms. I was also, of course, working without the immediate pressure of a tenure committee, and believe me, that makes a huge difference in one’s attitude towards a major writing project. One choice I immediately made was to reach out for a more general audience, and I was happy to sign a contract that would bring the book out on the well-regarded trade list of an excellent university press.
I also knew that I would have to do a different kind of work and a different kind of writing. Whereas Mechanisms was primarily concerned with producing interventions in certain ongoing scholarly conversations, this is a book where I wanted to answer questions and tell some important stories. I knew I would have to talk to people, namely the writers and technologists who were my subjects. I also knew that I would have to do some archival work, and so I began making plans to go places, including the Microsoft campus. I knew I would need to spend time going through trade literature (publications like Writer’s Digest) as well as popular computer journalism (Byte, PC World) and venues like The Paris Review. And finally I knew I wanted there to be a hands-on experiential component to the research, actually acquiring and working on some of the antiquated systems I was going to be writing about. So I planned to expand my collection of vintage computers.
Much of the fall semester was spent laying this groundwork. Then, in December, at the invitation of Doug Reside and Ben Vershbow, I gave a talk at the New York Public Library, based on my first completed chapter on Stephen King and his well-publicized use of a Wang word processor (and yes, he beat all the rest of us to the jokes). There was a New York Times reporter in the audience who published an excellent story based on the talk, which came out on December 25th, Christmas Day (a slow news day, I was fond of reminding people). This immediately and irrevocably changed the character of the project, which was suddenly massively public—not only for the exposure in the Times, but for all of the syndicated papers and the aggregators that picked it up.
The next morning my inbox had over one hundred messages and in the days that followed they kept coming in, especially as other outlets picked up the story, requested interviews, and wrote their own stories. I will say unequivocally that Track Changes is a better, more complete, more accurate, and frankly more fun and interesting book than it ever would have been without this media exposure. People I never would have had access to otherwise were eager to talk to me because they wanted their story included (and rightly so, in most cases). There was incredible generosity. References, source materials, and introductions all landed in my inbox. I began doing lots of oral history interviews. I also made the decision to commit to Evernote in a serious way, and began the process, after several false starts, of organizing my research materials, using a system of notebooks and tags that made sense to me.
Everything went into Evernote: PDFs, images, Web clippings, stray URLs, notes to myself, audio files, interview transcripts, email (which Evernote can ingest automatically upon forwarding to a dedicated address)—everything. The great power of the software is, of course, that it is searchable, and so a simple string search on, say, “Asimov” (he has a particularly interesting word processing story) instantly put dozens of entries at my fingertips. I also made sure to keep up with another resource I had begun early on, a spreadsheet documenting information about individual writers and their first computer or word processing systems. For well over one hundred authors I now have data as to when they got their first computer, what it was, and what was the first thing they wrote and published on it (this will go into the book as an appendix). In LibraryThing I created a dedicated tag for trackchanges, allowing me to browse a virtual bookshelf of titles in my collection related to the project, as well as export a catalog (which subsequently went into Evernote). I started to regularly use the #trackchanges tag on Twitter, and also began a Tumblr blog (which I’ve kept up with in fits and starts—something I wish I could be more diligent about).
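For readers curious what such a spreadsheet amounts to in practice, here is a purely illustrative sketch in Python: the field names and placeholder entries are my own assumptions, not the actual research data, and the keyword search stands in for the kind of simple string matching described above.

```python
# Illustrative sketch of an authors-and-first-computers spreadsheet.
# All field names and entries below are hypothetical placeholders.
import csv
import io

rows = [
    {"author": "Author A", "year": "1981",
     "system": "Wang word processor", "first_work": "a novel"},
    {"author": "Author B", "year": "1983",
     "system": "IBM PC with WordStar", "first_work": "an essay"},
]

# Export the records as CSV, a lowest-common-denominator exchange format.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["author", "year", "system", "first_work"])
writer.writeheader()
writer.writerows(rows)

# A simple string search across one field, akin to searching notes by keyword.
hits = [r for r in rows if "Wang" in r["system"]]
print(len(hits))  # prints 1
```

Plain CSV plus a keyword filter is deliberately minimal: the point is that even a flat table of a hundred-odd records becomes instantly queryable once it is machine-readable.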
All of this, I should emphasize, takes time, and there’s really no one who can show you how to do it—you just sort of have to hack out your own systems and workflows. The result of that investment, though, is a cloud-based cross-platform research infrastructure that keeps me organized and sane. For actual writing I spent some time with Scrivener, badly wanting to adopt it, but eventually reverted to Word, owing (mostly) to Scrivener’s limitations with citation management. By later in the spring, the publicity from the press exposure had slowed down. I had over two dozen oral histories in the bag; I had been to Microsoft; I purchased a 200-lb. specimen of the first machine IBM ever marketed as a word processor on eBay and moved it into my campus office. My Evernote installation had swelled to nearly a thousand entries.
Track Changes is on track for delivery to my editor in early 2013. I will be giving some talks based on the work in progress at the University of Maryland this fall, as well as the University of Colorado at Boulder and the University of Ghent (last spring I gave talks at the University of Toronto and Western Ontario, which provided valuable occasions for early feedback, along with the NYPL). My point in writing this up, though, is twofold. First, a year ago I asked rather publicly to be left alone for a while—and everyone, colleagues, co-workers, friends near and far, has been most accommodating—so I wanted to say something about what I’ve been up to. And second, to encourage others to talk more about their own workflows and systems and tools, the kind of thing we don’t do such a good job of teaching our graduate students or mentoring our younger colleagues about, as though the process of writing were devoid of material tradecraft.
The book I’m writing now is, of course, all about that tradecraft, and its impact on a generation of authors who were there at the transition between typewriters and notebooks and legal pads and the writing machines most of us use today. It’s been a great ride for the last year. I have some great stories to tell, and I can’t wait to share them with the rest of you.