Category: Scholarly Communication

Treading Water on Open Access

A statement from the governing council of the American Historical Association, September 2012:

The American Historical Association voices concerns about recent developments in the debates over “open access” to research published in scholarly journals. The conversation has been framed by the particular characteristics and economics of science publishing, a landscape considerably different from the terrain of scholarship in the humanities. The governing Council of the AHA has unanimously approved the following statement. We welcome further discussion…

In today’s digital world, many people inside and outside of academia maintain that information, including scholarly research, wants to be, and should be, free. Where people subsidized by taxpayers have created that information, the logic of free information is difficult to resist…

The concerns motivating these recommendations are valid, but the proposed solution raises serious questions for scholarly publishing, especially in the humanities and social sciences.

A statement from Roy Rosenzweig, the Vice President of Research of the American Historical Association, in May 2005:

Historical research also benefits directly (albeit considerably less generously [than science]) through grants from federal agencies like the National Endowment for the Humanities; even more of us are on the payroll of state universities, where research support makes it possible for us to write our books and articles. If we extend the notion of “public funding” to private universities and foundations (who are, of course, major beneficiaries of the federal tax codes), it can be argued that public support underwrites almost all historical scholarship.

Do the fruits of this publicly supported scholarship belong to the public? Should the public have free access to it? These questions pose a particular challenge for the AHA, which has conflicting roles as a publisher of history scholarship, a professional association for the authors of history scholarship, and an organization with a congressional mandate to support the dissemination of history. The AHA’s Research Division is currently considering the question of open—or at least enhanced—access to historical scholarship and we seek the views of members.

Two requests for comment from the AHA on open access, seven years apart. In 2005, the precipitating event for the AHA’s statement was the NIH report on “Enhancing Public Access to Publications Resulting from NIH-Funded Research”; yesterday it was the Finch report on “Accessibility, sustainability, excellence: how to expand access to research publications” [pdf]. History has repeated itself.

We historians have been treading water on open access for the better part of a decade. This is not a particular failure of our professional organization, the AHA; it’s a collective failure by historians who believe—contrary to the lessons of our own research—that today will be like yesterday, and tomorrow like today. Article-centric academic journals, a relatively recent development in the history of publishing, apparently have existed, and will exist, forever, in largely the same form and with largely the same business model.

We can wring our hands about open access every seven years when something notable happens in science publishing, but there’s much to be said for actually doing something rather than sitting on the sidelines. The fact is that the scientists have been thinking and discussing but also doing for a long, long time. They’ve had a free preprint service for articles since the beginning of the web in 1991. In 2012, our field has almost no experience with how alternate online models might function.

If we’re solely concerned with the business model of the American Historical Review (more on that focus in a moment), the AHA had on the table possible economic solutions that married open access with sustainability over seven years ago, when Roy wrote his piece. Since then other creative solutions have been proposed. I happen to prefer the library consortium model, in which large research libraries that are already paying millions of dollars for science journals are browbeaten into ponying up a tiny fraction of the science journal budget to continue to pay for open humanities journals. As a strong believer in the power of narcissism and shame, I could imagine a system in which libraries that pay would get exalted patron status on the home page for the journal, while free riders would face the ignominy of a red bar across the top of the browser when viewed on a campus that dropped support once the AHR went open access. (“You are welcome to read this open scholarship, but you should know that your university is skirting its obligation to the field.” The Shame Bar could be left off in places that cannot afford to pay.)
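The mechanics of such a Shame Bar would be trivial. Here is a whimsical sketch in Python, assuming a toy Flask app and hypothetical campus IP ranges, not anything the AHA actually runs:

```python
# A whimsical sketch of the "Shame Bar," assuming hypothetical campus IP
# ranges and a toy Flask app; not any real AHA or AHR system.
from ipaddress import ip_address, ip_network

from flask import Flask, request

app = Flask(__name__)

# Hypothetical IP ranges for campuses whose libraries support the journal.
SUPPORTING_CAMPUSES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

SHAME_BAR = (
    '<div style="background:#c00;color:#fff;padding:0.5em">'
    "You are welcome to read this open scholarship, but you should know that "
    "your university is skirting its obligation to the field.</div>"
)

def campus_supports(addr: str) -> bool:
    """Return True if the visitor's IP falls inside a supporting campus range."""
    visitor = ip_address(addr)
    return any(visitor in net for net in SUPPORTING_CAMPUSES)

@app.route("/articles/<slug>")
def article(slug):
    # Free riders get the red bar; supporting campuses (and, in a fuller
    # version, campuses that cannot afford to pay) see the article unadorned.
    banner = "" if campus_supports(request.remote_addr) else SHAME_BAR
    return banner + f"<h1>{slug}</h1><p>Open-access article text…</p>"

if __name__ == "__main__":
    app.run()
```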

Regardless of the method and the model, the point is simply that we haven’t tried very hard. Too many of my colleagues, in the preferred professorial mode of focusing on the negative, have highlighted perceived problems with open access without actually engaging it. Yet somehow over 8,000 open access journals have flourished in the last decade. If the AHA’s response is that those journals aren’t flagship journals, well, I’m not sure that’s the one-percenter rhetoric they want to be associated with as representatives of the entire profession.

Furthermore, if our primary concern is indeed the economics of the AHR, wouldn’t it be fair game to look at the full economics of it—not just the direct costs on AHA’s side (“$460,000 to support the editorial processes”), but the other side, where much of the work gets done: the time professional historians take to write and vet articles? I would wager those in-kind costs are far larger than $460,000 a year. That’s partly what Roy was getting at in his appeal to the underlying funding of most historical scholarship. Any such larger economic accounting would trigger more difficult questions, such as Hugh Gusterson’s pointed query about why he’s being asked to give his peer-review labor for free but publishers are gating the final product in return—thanks for your gift labor, now pay up. That the AHA is a small non-profit publisher rather than a commercial giant doesn’t make this question go away.

There is no doubt that professional societies outside of the sciences are in a horrible bind between the drive toward open access and the need for sustainability. But history tells us that no institution has the privilege of remaining static. The American Historical Association can tinker with payments for the AHR as much as it likes under the assumption that the future will be like the past, just with a different spreadsheet. I’d like to see the AHA be bolder—supportive not only of its flagship but of the entire fleet, which now includes fledgling open access journals, blogs, and other nascent online genres.

Mostly, I’d like to see a statement that doesn’t read like this one does: anxious and reactive. I’d like to see a statement that says: “We stand ready to nurture and support historical scholarship whenever and wherever it might arise.”

Normal Science and Abnormal Publishing

When the Large Hadron Collider locates its elusive quarry under the sofa cushion of the universe, Nature will be there to herald the news of the new particle and the scientists who found it. But below these headline-worthy discoveries, something fascinating is going on in science publishing: the race, prompted by the hugely successful PLoS ONE and inspired by the earlier revolution of arXiv.org, to provide open access outlets for any article that is technically sound, without trying to assess impact ahead of time. These outlets are growing rapidly and are likely to represent a significant percentage of published science in the years ahead.

Last week the former head of PLoS ONE announced a new company and a new journal, PeerJ, that takes the concept one step further, providing an all-you-can-publish buffet for a minimal lifetime fee. And this week saw the launch of Scholastica, which will publish a peer-reviewed article for a mere $10. (Scholastica is accepting articles in all fields, but I suspect it will be used mostly by scientists used to this model.) As stockbrokers would say, it looks like we’re going to test the market bottom.

Yet the economics of this publishing is far less interesting than its inherent philosophy. At a steering committee meeting of the Coalition for Networked Information, the always-shrewd Cliff Lynch summarized a critical mental shift that has occurred: “There’s been a capitulation on the question of importance.” Exactly. Two years ago I wrote about how “scholars have uses for archives that archivists cannot anticipate,” and these new science journals flip that equation from the past into the future: aside from rare and obvious discoveries (the 1%), we can’t tell what will be important in the future, so let’s publish as much as possible (the 99%) and let the community of scholars rather than editors figure that out for themselves.

Lynch noted that capitulation on importance allows for many other kinds of scientific research to come to the fore, such as studies that try to reproduce experiments to ensure their validity and work that fails to prove a scientist’s hypothesis (negative outcomes). When you think about it, traditional publishing encourages a constant stream of claimed breakthroughs, when in reality actual breakthroughs are few and far between. Rather than trumpeting every article as important in a quest to be published, these new venues encourage scientists to publish more of what they find, and in a more honest way. Some of that research may in fact prove broadly important in a field, while other research might simply be helpful for its methodological rigor or underlying data.

As a historian of science, all of this reminds me of Thomas Kuhn’s conception of normal science. Kuhn is of course known for the “paradigm shift,” a notion that, much to Kuhn’s chagrin, has escaped the bounds of his philosophy of science into nearly every field of study (and frequently business seminars as well). But to have a paradigm shift you have to have a paradigm, and just as crucial as the shifting is the not-shifting. Kuhn called this “normal science,” and it represents most of scientific endeavor.

Kuhn famously described normal science as “mopping-up operations,” but that phrase was not meant to be disparaging. “Few people who are not actually practitioners of a mature science,” he wrote in The Structure of Scientific Revolutions, “realize how much mop-up work of this sort a paradigm leaves to be done or quite how fascinating such work can prove in the execution.” Scientists often spend years or decades fleshing out and refining theories, testing them anew, applying them to new evidence and to new areas of a field.

There is nothing wrong with normal science. Indeed, it can be good science. It’s just not often the science that makes headlines. And now it has found a good match in the realm of publishing.

Catching the Good

[Another post in my series on our need to focus more on the “demand side” of scholarly communication—how and why scholars engage with and contribute to publications—in addition to new models for the “supply side”—new production models for publications themselves. If you’re new to this line of thought on my blog, you may wish to start here or here.]

As all parents discover when their children reach the “terrible twos” (a phase that evidently lasts until 18 years of age), it’s incredibly easy to catch your kids being bad, and to criticize them. Kids are constantly pushing boundaries and getting into trouble; it’s part of growing up, intellectually and emotionally. What’s harder for parents, but perhaps far more important, is “catching your child doing good,” to look over when your kid isn’t yelling or pulling the dog’s ear to say, “I like the way you’re doing that.”

Although I fear infantilizing scholars (wags would say that’s perfectly appropriate), whenever I talk about the publishing model at PressForward, I find myself referring back to this principle of “catching the good,” which of course goes by the fancier name of “positive reinforcement” in psychology. What appears in PressForward publications such as Digital Humanities Now isn’t submitted and threatened with criticism and rejection (negative reinforcement). Indeed, there is no submission process at all. Instead, we look to “catch the good” in whatever format, and wherever, it exists (positive reinforcement). Catching the good is not necessarily the final judgment upon a work, but an assessment that something is already quite worthy and might benefit from a wider audience.

It’s a useful exercise to consider the very different psychological modes of positive and negative reinforcement as they relate to scholarly (and non-scholarly) communication, and the kind of behavior these models encourage or suppress. Obviously PressForward has no monopoly on positive reinforcement; catching the good also happens when a sharp editor from a university press hears about a promising young scholar and cultivates her work for publication. And positive reinforcement is deeply embedded in the open web, where a blog post can either be ignored or reach thousands as a link is propagated by impressed readers.

In modes where negative reinforcement predominates, such as at journals with high rejection rates, scholars are much more hesitant to distribute their work until it is perfect or near-perfect. An aversion to criticism spreads, with both constructive and destructive effects. Authors work harder on publications, but also spend significant energy to tailor their work to please the paren, er, editors and blind reviewers who wait in judgment. Authors internalize the preferences of the academic community they strive to join, and curb experimentation or the desire to reach interdisciplinary or general audiences.

Positive-reinforcement models, especially those that involve open access to content, allow for greater experimentation with form and content. Interdisciplinary and general audiences are more likely to be reached, since a work can be highlighted or linked to by multiple venues at the same time. Authors feel at greater liberty to disseminate more of their work, half-baked material as well as polished pieces, and audiences may find even the half-baked helpful to their own thinking. Under other publishing models that “partial” work might never see the light of day.

Finally, just as a kid who constantly strives to be a great baseball player might be unexpectedly told he has a great voice and should try out for the choir, positive reinforcement is more likely to push authors to contribute to fields in which they naturally excel. Positive reinforcement casts a wider net, doing a better job at catching scholars in all stations, or even outsiders, who might have ideas or approaches a discipline could use.

When mulling new outlets for their work, scholars implicitly model risk and reward, imagining the positive and negative reinforcement they will be subjected to. It would be worth talking about this psychology more explicitly. For instance, what if there were a low-risk, but potentially high-reward, outlet that focused more on positive reinforcement—published articles getting noticed and passed around based on merit after a relatively restricted phase of pre-publication criticism? If you want to know why PLoS ONE is the fastest-growing venue for scientific work, that’s the question they asked and successfully answered. And that’s what we’re trying to do with PressForward as well.

[My thanks to Joan Fragaszy Troyano and Mike O’Malley for reading an early version of this post.]

Digital Journalism and Digital Humanities

I’ve increasingly felt that digital journalism and digital humanities are kindred spirits, and that more commerce between the two could be mutually beneficial. That sentiment was confirmed by the extremely positive reaction on Twitter to a brief comment I made on the launch of Knight-Mozilla OpenNews, including from Jon Christensen (of the Bill Lane Center for the American West at Stanford, and formerly a journalist), Shana Kimball (MPublishing, University of Michigan), Tim Carmody (Wired), and Jenna Wortham (New York Times).

Here’s an outline of some of the main areas where digital journalism and digital humanities could profitably collaborate. It’s remarkable, upon reflection, how much overlap there now is, and I suspect these areas will only grow in common importance.

1) Big data, and the best ways to scan and visualize it. All of us are facing either present-day or historical archives of almost unimaginable abundance, and we need sophisticated methods for finding trends, anomalies, and specific documents that could use additional attention. We also require robust ways of presenting this data to audiences to convey theses and supplement narratives.

2) How to involve the public in our work. If confronted by big data, how and when should we use crowdsourcing, and through which mechanisms? Are there areas where pro-am work is especially effective, and how can we heighten its advantages while diminishing its disadvantages? Since we both do work on the open web rather than in the cloistered realms of the ivory tower, what are we to make of the sometimes helpful, sometimes rocky interactions with the public?

3) The narrative plus the archive. Journalists are now writing articles that link to or embed primary sources (e.g., using DocumentCloud). Scholars are now writing articles that link to or embed primary sources (e.g., using Omeka). Formerly hidden sources are now far more accessible to the reader.

4) Software developers and other technologists are our partners. They should no longer be relegated to secondary status as “the techies who make the websites”; we need to work intellectually and practically with those who understand how digital media and technology can advance our agenda and our content. For scholars, this also extends to technologically sophisticated librarians, archivists, and museum professionals. Moreover, the line between developer and journalist/scholar is already blurring, and will blur further.

5) Platforms and infrastructure. We care a great deal about common platforms, ranging from web and data standards, to open source software, to content management systems such as WordPress and Drupal. Developers we work with can create platforms with entirely novel functionality for news and scholarship.

6) Common tools. We are all writers and researchers. When the New York Times produces a WordPress plugin for editing, it affects academics looking to use WordPress as a scholarly communication platform. When our center updates Zotero, it affects many journalists who use that software for organizing their digital research.

7) A convergence of length. I’m convinced that something interesting and important is happening at the confluence of long-form journalism (say, 5,000 words or more) and short-form scholarship (ranging from long blog posts to Kindle Singles geared toward popular audiences). It doesn’t hurt that many journalists writing at this length could very well have been academics in a parallel universe, and vice versa. The prevalence of high-quality writing that is smart and accessible has never been greater.

This list is undoubtedly not comprehensive; please add your thoughts about additional common areas in the comments. It may be worth devoting substantial time to increasing the dialogue between digital journalists and digital humanists at the next THATCamp Prime, or perhaps at a special THATCamp focused on the topic. Let me know if you’re interested. And more soon in this space.

Digital Humanities Now 2.0: Bigger and Better, with a New Review Process

After five months of retooling, we’re relaunching Digital Humanities Now today. As part of this relaunch it has been moved into the PressForward family of publications, as one of that project’s new models of how high-quality work can emerge from, and reach, scholarly communities.

The first iteration of DH Now, which we launched two years ago, relied almost entirely on an automated process to find what digital humanities scholars were talking about and linking to (namely, on Twitter). About a year ago, in an attempt to make the signal-to-noise ratio a bit better, I took my slightly tongue-in-cheek “Editor-in-Chief” role more seriously, vetting each potential item for inclusion and adding better titles and “abstracts.”

Today we take a much larger step forward, in an attempt to find and highlight the best work in digital humanities, and curate it in such a way as to be maximally useful to the scholarly community. The DH Now team, including Joan Fragaszy Troyano, Sasha Boni, and Jeri Wieringa, has corralled a large array of digital humanities content into the base for the publication. Building on a Digital Humanities Registry I set up in the summer, they have located and are now tracking the content streams of hundreds of scholars and institutions (what we’re calling the Compendium of Digital Humanities), from which we can select items for highlighting in the “news” and “Editors’ Choice” columns on the site. As before, social media (including Twitter) and other means for assessing the resonance of scholarly works will serve a role, but not an exclusive one, as we seek out new and important work wherever that work may be found.
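For the technically curious, the plumbing behind such a compendium is straightforward: poll each scholar’s or institution’s RSS/Atom feed and pool the entries for editorial review. A minimal sketch follows, assuming hypothetical feed URLs rather than the actual DH Now configuration:

```python
# A minimal sketch of feed aggregation for a compendium of scholarly blogs,
# assuming hypothetical feed URLs; not the actual DH Now/PressForward code.
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.edu/historian-one/feed",  # hypothetical scholar blog
    "https://example.org/dh-center/rss",       # hypothetical institutional feed
]

def collect_items(feed_urls):
    """Pull recent posts from each feed into one pool that editors can review."""
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries:
            items.append({
                "source": source,
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
            })
    return items

if __name__ == "__main__":
    for item in collect_items(FEEDS):
        print(f'{item["source"]}: {item["title"]} <{item["link"]}>')
```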

The foundation of the editorial model, as I explained in this space on the launch of PressForward, is that instead of a traditional process of submission to a journal that leads to a binary acceptance/rejection decision many months later (and publication many more months or years later), we can begin to think of scholarly communication as a process that begins with open publication on the web and that leads to successive layers of review. Contrary to the concerns of critics, this is far from a stream of unvetted work.

Imagine a pyramid of scholarship. At the bottom is a broad base of scholarship on the open web (which understandably worries many scholars who object to new models of scholarly communication that do not rely on the decisive eye of a paid editor and the scarcity of journal pages). From that base, however, a minority of scholarly works seem worthy of additional attention, and after word of mouth and dissemination of those potentially important pieces, more scholars weigh in, making a work rise or fall. As we move up the pyramid to more exclusive forms of “publication,” fewer and fewer works survive. Far from lacking peer review, the model we are proposing involves significant winnowing as a scholarly work passes through various levels of review.

For the new DH Now, these levels of publication are transparent on the site, and can be subscribed to individually depending on how unfiltered or filtered scholars would like their stream to be:

• Most people will likely want to subscribe to the main DHNow feed, which will include the Editors’ Choice articles as well as important news items such as jobs, resources, and conferences.

• Those who want full access to the wide base of the scholarly pyramid (or who don’t trust the editorial board’s decisions) can subscribe to the unfiltered Compendium of Digital Humanities, which includes feeds from hundreds of scholars.

• For those who felt that the original DH Now worked well for them, we have maintained a “top tweeted stories” feed.

• Finally, a major new addition is the launch of a quarterly review of the best of the best—the top of the pyramid of review, which will likely contain less than 1% of works that begin at the base. We will notify scholars about potential inclusion, and pass along comments and suggestions for improvement before publication. We hope and expect that inclusion in this journal form of DH Now will be worthy of inclusion on CVs, in promotion and tenure decisions, and other areas helpful to digital humanities scholars. DH Now will have an ISSN, an editorial board, and all of the other signifiers of quality and peer review that individuals and institutions expect.

You can read more about our process on DH Now’s “How This Works” page.

We believe this new format has several critical benefits. First, it democratizes scholarly communication in a helpful way. Over the last two years, for instance, DH Now has highlighted up-and-coming work by promising graduate students simply because they chose to post their ideas to a new blog or institutional website. Second, it democratizes the editorial process while still taking into account the scarcity of attention and without sacrificing quality. Although we have a managing group of editors here at the Roy Rosenzweig Center for History and New Media, we are accounting for the views and criticisms of a much broader circle of scholars to make decisions about inclusion and exclusion, and those decisions themselves can be reviewed. Third, DH Now broadens the definition of what scholarship is, by highlighting forms beyond the traditional article. Finally, it encourages open access publishing, which we think has an ethical benefit as well as a reputational benefit to the scholars who post their work online.

The Ivory Tower and the Open Web: Burritos, Browsers, and Books

In the summer of 2007, Nate Silver decided to conduct a rigorous assessment of the inexpensive Mexican restaurants in his neighborhood, Chicago’s Wicker Park. Figuring that others might be interested in the results of his study, and that he might be able to use some feedback from an audience, he took his project online.

Silver had no prior experience in such an endeavor. By day he worked as a statistician and writer at Baseball Prospectus—an innovator, to be sure, having created a clever new standard for empirically measuring the value of players, an advanced form of the “sabermetrics” vividly described by Michael Lewis in Moneyball. ((Nate Silver, “Introducing PECOTA,” in Gary Huckabay, Chris Kahrl, Dave Pease et al., eds., Baseball Prospectus 2003 (Dulles, VA: Brassey’s Publishers, 2003): 507-514. Michael Lewis, Moneyball: The Art of Winning an Unfair Game (New York: W. W. Norton & Company, 2004).)) But Silver had no experience as a food critic, nor as a web developer.

In time, his appetite took care of the former and the open web took care of the latter. Silver knit together a variety of free services as the tapestry for his culinary project. He set up a blog, The Burrito Bracket, using Google’s free Blogger web application. Weekly posts consisted of his visits to local restaurants, and the scores (in jalapeños) he awarded in twelve categories.

Home page of Nate Silver’s Burrito Bracket
Ranking system (upper left quadrant)

Being a sports geek, he organized the posts as a series of contests between two restaurants. Satisfying his urge to replicate March Madness, he modified another free application from Google, generally intended to create financial or data spreadsheets, to produce the “bracket” of the blog’s title.

Google Spreadsheets used to create the competition bracket

Like many of the savviest users of the web, Silver started small and improved the site as he went along. For instance, he had started to keep a photographic record of his restaurant visits and decided to share this documentary evidence. So he enlisted the photo-sharing site Flickr, creating an off-the-rack archive to accompany his textual descriptions and numerical scores. On August 15, 2007, he added a map to the site, geolocating each restaurant as he went along and color-coding the winners and losers.

Flickr photo archive for The Burrito Bracket (flickr.com)
Silver’s Google Map of Chicago’s Wicker Park (shaded in purple) with the location of each Mexican restaurant pinpointed

Even with its do-it-yourself enthusiasm and the allure of carne asada, Silver had trouble attracting an audience. He took to Yelp, a popular site for reviewing restaurants, to plug The Burrito Bracket, and even thought about creating a Super Burrito Bracket to cover all of Chicago. ((Frequently Asked Questions, The Burrito Bracket, http://burritobracket.blogspot.com/2007/07/faq.html)) But eventually he abandoned the site following the climactic “Burrito Bowl I.”

With his web skills improved and a presidential election year approaching, Silver decided to try his mathematical approach on that subject instead—”an opportunity for a sort of Moneyball approach to politics,” as he would later put it. ((http://www.journalism.columbia.edu/system/documents/477/original/nate_silver.pdf)) Initially, and with a nod to his obsession with Mexican food, he posted his empirical analyses of politics under the chili-pepper pseudonym “Poblano,” on the liberal website Daily Kos, which hosts blogs for its engaged readers.

Then, in March 2008, Silver registered his own web domain, with a title that was simultaneously and appropriately mathematical and political: fivethirtyeight.com, a reference to the total number of electors in the United States electoral college. He launched the site with a slight one-paragraph post on a recent poll from South Dakota and a summary of other recent polling from around the nation. As with The Burrito Bracket it was a modest start, but one that was modular and extensible. Silver soon added maps and charts to bolster his text.

FiveThirtyEight two months after launch, in May 2008

Nate Silver’s real name and FiveThirtyEight didn’t remain obscure for long. His mathematical modeling of the competition between Barack Obama and Hillary Clinton for the Democratic presidential nomination proved strikingly, almost creepily, accurate. Clear-eyed, well-written, statistically rigorous posts began to be passed from browsers to BlackBerries, from bloggers to political junkies to Beltway insiders. From those wired early subscribers to his site, Silver found an increasingly large audience of those looking for data-driven, deeply researched analysis rather than the conventional reporting that presented political forecasting as more art than science.

FiveThirtyEight went from just 800 visitors a day in its first month to a daily audience of 600,000 by October 2008. ((Adam Sternbergh, “The Spreadsheet Psychic,” New York, Oct 12, 2008, http://nymag.com/news/features/51170/)) On election day, FiveThirtyEight received a remarkable 3 million visitors, more than most daily newspapers. ((http://www.journalism.columbia.edu/system/documents/477/original/nate_silver.pdf))

All of this attention for a site that most media coverage still called, with a hint of deprecation, a “blog,” or “aggregator” of polls, despite Silver’s rather obvious, if latent, journalistic skills. (Indeed, one of his roads not taken had been an offer, straight out of college, to become an assistant at The Washington Post. ((http://www.journalism.columbia.edu/system/documents/477/original/nate_silver.pdf)) ) An article in the Colorado Daily on the emergent genre represented by FiveThirtyEight led with Ken Bickers, professor and chair of the political science department at the University of Colorado, saying that such sites were a new form of “quality blogs” (rather than, evidently, the uniformly second-rate blogs that had previously existed). The article then swerved into much more ominous territory, asking whether reading FiveThirtyEight and similar blogs was potentially dangerous, especially compared to the safe environs of the traditional newspaper. Surely these sites were superficial, and they very well might have a negative effect on their audience:

Mary Coussons-Read, a professor of psychology at CU Denver, says today’s quick turnaround of information helps to make it more compelling.

“Information travels so much more quickly,” she says. “(We expect) instant gratification. If people have a question, they want an answer.”

That real-time quality can bring with it the illusion that it’s possible to perceive a whole reality by accessing various bits of information.

“There’s this immediacy of the transfer of information that leads people to believe they’re seeing everything … and that they have an understanding of the meaning of it all,” she says.

And, Coussons-Read adds, there is pleasure in processing information.

“I sometimes feel like it’s almost a recreational activity and less of an information-gathering activity,” she says.

Is it addiction?

[Michele] Wolf says there is something addicting about all that data.

“I do feel some kind of high getting new information and being able to process it,” she says. “I’m also a rock climber. I think there are some characteristics that are shared. My addiction just happens to be information.”

While there’s no such mental-health diagnosis as political addiction, Jeanne White, chemical dependency counselor at Centennial Peaks Hospital in Louisville, says political information seeking could be considered an addictive process if it reaches an extreme. ((Cindy Sutter, “Hooked on information: Can political news really be addicting?” The Colorado Daily, November 3, 2008, http://www.coloradodaily.com/ci_13105998))

This stereotype of blogs as the locus of “information” rather than knowledge, of “recreation” rather than education, was—and is—a common one, despite the wide variety of blogs, including many with long-form, erudite writing. Perhaps in 2008 such a characterization of FiveThirtyEight was unsurprising given that Silver’s only other credits to date were the Player Empirical Comparison and Optimization Test Algorithm (PECOTA) and The Burrito Bracket. Clearly, however, here was an intelligent researcher who had set his mind on a new topic to write about, with a fresh, insightful approach to the material. All he needed was a way to disseminate his findings. His audience appreciated his extraordinarily clever methods—at heart, academic techniques—for cutting through the mythologies and inadequacies of standard political commentary. All they needed was a web browser to find him.

A few journalists saw past the prevailing bias against non-traditional outlets like FiveThirtyEight. In the spring of 2010, Nate Silver bumped into Gerald Marzorati, the editor of the New York Times Magazine, on a train platform in Boston. They struck up a conversation that eventually turned into a discussion of how FiveThirtyEight might fit into the universe of the Times, which ultimately recognized the excellence of his work and wanted FiveThirtyEight to enhance its political reporting and commentary. That summer, a little more than two years after he had started FiveThirtyEight, Silver’s “blog” merged into the Times under a licensing deal. ((Nate Silver, “FiveThirtyEight to Partner with New York Times,” http://www.fivethirtyeight.com/2010/06/fivethirtyeight-to-partner-with-new.html)) In less time than it takes for most students to earn a journalism degree, Silver had willed himself into writing for one of the world’s premier news outlets, taking a seat in the top tier of political analysis. A radically democratic medium had enabled him to do all of this, without the permission of any gatekeeper.

FiveThirtyEight on the New York Times website, 2010

* * *

The story of Nate Silver and FiveThirtyEight has many important lessons for academia, all stemming from the affordances of the open web. His efforts show the do-it-yourself nature of much of the most innovative work on the web, and how one can iterate toward perfection rather than publishing works in fully polished states. His tale underlines the principle that good is good, and that the web is extraordinarily proficient at finding and disseminating the best work, often through continual, post-publication, recursive review. FiveThirtyEight also shows the power of openness to foster that dissemination and the dialogue between author and audience. Finally, the open web enables and rewards unexpected uses and genres.

Undoubtedly it is true that the path from The Burrito Bracket to The New York Times may only be navigated by an exceptionally capable and smart individual. But the tools for replicating Silver’s work are just as open to anyone, and just as powerful. It was with that belief, and the desire to encourage other academics to take advantage of the open web, that Roy Rosenzweig and I wrote Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. ((Daniel J. Cohen and Roy Rosenzweig, Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web (University of Pennsylvania Press, 2006).)) We knew that the web, although fifteen years old at the time, was still somewhat alien to many professors, graduate students, and even undergraduates (who might be proficient at texting but know nothing about HTML), and we wanted to make the medium more familiar and approachable.

What we did not anticipate was another kind of resistance to the web, based not on an unfamiliarity with the digital realm or on Luddism but on the remarkable inertia of traditional academic methods and genres—the more subtle and widespread biases that hinder the academy’s adoption of new media. These prejudices are less comical, and more deep-seated, than newspapers’ penchant for tales of internet addiction. This resistance has less to do with the tools of the web and more to do with the web’s culture. It was not enough for us to conclude Digital History by saying how wonderful the openness of the web was; for many academics, this openness was part of the problem, a sign that it might be like “playing tennis with the net down,” as my graduate school mentor worriedly wrote to me. ((http://www.dancohen.org/2010/11/11/frank-turner-on-the-future-of-peer-review/))

In some respects, this opposition to the maximal use of the web is understandable. Almost by definition, academics have gotten to where they are by playing a highly scripted game extremely well. That means understanding and following self-reinforcing rules for success. For instance, in history and the humanities at most universities in the United States, there is a vertically integrated industry of monographs, beginning with the dissertation in graduate school—a proto-monograph—followed by the revisions to that work and the publication of it as a book to get tenure, followed by a second book to reach full professor status. Although we are beginning to see a slight liberalization of rules surrounding dissertations—in some places dissertations could be a series of essays or have digital components—graduate students infer that they would best be served on the job market by a traditional, analog monograph.

We thus find ourselves in a situation, now more than two decades into the era of the web, where the use of the medium in academia is modest, at best. Most academic journals have moved online but simply mimic their print editions, providing PDF facsimiles for download and having none of the functionality common to websites, such as venues for discussion. They are also largely gated, resistant not only to access by the general public but also to the coin of the web realm: the link. Similarly, when the Association of American University Presses recently asked its members about their digital publishing strategies, the presses tellingly remained steadfast in their fixation on the monograph. All of the top responses were about print-on-demand and the electronic distribution and discovery of their list, with a mere footnote for a smattering of efforts to host “databases, wikis, or blogs.” ((Association of American University Presses, “Digital Publishing in the AAUP Community; Survey Report: Winter 2009-2010,” http://aaupnet.org/resources/reports/0910digitalsurvey.pdf, p. 2)) In other words, the AAUP members see themselves almost exclusively as book publishers, not as publishers of academic work in whatever form that may take. Surveys of faculty show comfort with decades-old software like word processors but an aversion to recent digital tools and methods. ((See, for example, Robert B. Townsend, “How Is New Media Reshaping the Work of Historians?”, Perspectives on History, November 2010, http://www.historians.org/Perspectives/issues/2010/1011/1011pro2.cfm)) The professoriate may be more liberal politically than the most latte-filled ZIP code in San Francisco, but we are an extraordinarily conservative bunch when it comes to the progression and presentation of our own work. We have done far less than we should have by this point in imagining and enacting what academic work and communication might look like if it were digital first.

To be sure, as William Gibson has famously proclaimed, “The future is already here—it’s just not very evenly distributed.” ((National Public Radio, “Talk of the Nation” radio program, 30 November 1999, timecode 11:55, http://discover.npr.org/features/feature.jhtml?wfId=1067220)) Almost immediately following the advent of the web, which came out of the realm of physics, physicists began using the Los Alamos National Laboratory preprint server (later renamed ArXiv and moved to arXiv.org) to distribute scholarship directly to each other. Blogging has taken hold in some precincts of the academy, such as law and economics, and many in those disciplines rely on web-only outlets such as the Social Science Research Network. The future has had more trouble reaching the humanities, and perhaps this book is aimed slightly more at that side of campus than the science quad. But even among the early adopters, a conservatism reigns. For instance, one of the most prominent academic bloggers, the economist Tyler Cowen, still recommends to students a very traditional path for their own work. ((“Tyler Cowen: Academic Publishing,” remarks at the Institute for Humane Studies Summer Research Fellowship weekend seminar, May 2011, http://vimeo.com/24124436)) And far from being preferred by a large majority of faculty, quests to open scholarship to the general public often meet with skepticism. ((Open access mandates have been tough sells on many campuses, passing only by slight majorities or failing entirely. For instance, such a mandate was voted down at the University of Maryland, with evidence of confusion and ambivalence. http://scholarlykitchen.sspnet.org/2009/04/28/umaryland-faculty-vote-no-oa/))

If Digital History was about the mechanisms for moving academic work online, this book is about how the digital-first culture of the web might become more widespread and acceptable to the professoriate and their students. It is, by necessity, slightly more polemical than Digital History, since it takes direct aim at the conservatism of the academy that twenty years of the web have laid bare. But the web and the academy are not doomed to an inevitable clash of cultures. Viewed properly, the open web is perfectly in line with the fundamental academic goals of research, sharing of knowledge, and meritocracy. This book—and it is a book rather than a blog or stream of tweets because pragmatically that is the best way to reach its intended audience of the hesitant rather than preaching to the online choir—looks at several core academic values and asks how we can best pursue them in a digital age.

First, it points to the critical academic ability to look at any genre without bias and asks whether we might be violating that principle with respect to the web. Upon reflection many of the best things we discover in scholarship are found by disregarding popularity and packaging, by approaching creative works without prejudice. We wouldn’t think much of the meandering novel Moby-Dick if Carl Van Doren hadn’t looked past decades of mixed reviews to find the genius in Melville’s writing. Art historians have similarly unearthed talented artists who did their work outside of the royal academies and the prominent schools of practice. As the unpretentious wine writer Alexis Lichine shrewdly said in the face of fancy labels and appeals to mythical “terroir”: “There is no substitute for pulling corks.” ((Quoted in Frank J. Prial, “Wine Talk,” New York Times, 17 August 1994, http://www.nytimes.com/1994/08/17/garden/wine-talk-983519.html.))

Good is good, no matter the venue of publication or what the crowd thinks. Scholars surely understand that on a deep level, yet many persist in valuing venue and medium over the content itself. This is especially true at crucial moments, such as promotion and tenure. Surely we can reorient ourselves to our true core value—to honor creativity and quality—which will still guide us to many traditionally published works but will also allow us to consider works in some nontraditional venues such as new open access journals or articles written and posted on a personal website or institutional repository, or digital projects.

The genre of the blog has been especially cursed by this lack of open-mindedness from the academy. Chapter 1, “What is a Blog?”, looks at the history of the blog and blogging, the anatomy and culture of a genre that is in many ways most representative of the open web. Saddled with an early characterization as being the locus of inane, narcissistic writing, the blog has had trouble making real inroads in academia, even though it is an extraordinarily flexible form and the perfect venue for a great deal of academic work. The chapter highlights some of the best examples of academic blogging and how they shape and advance arguments in a field. We can be more creative in thinking about the role of the blog within the academy, as a venue for communicating our work to colleagues as well as to a lay audience beyond the ivory tower.

This academic prejudice against the blog extends to other genres that have proliferated on the open web. Chapter 2, “Genres and the Open Web,” examines the incredible variety of those new forms, and how, with a careful eye, we might be able to import some of them profitably into the academy. Some of these genres, like the wiki, are well-known (thanks to Wikipedia, which academics have come to accept begrudgingly in the last five years). Other genres are rarer but take maximal advantage of the latitude of the open web: its malleability and interactivity. Rather than imposing the genres we know on the web—as we do when we post PDFs of print-first journal articles—we would do well to understand and adopt the web’s native genres, where helpful to scholarly pursuits.

But what of our academic interest in validity and excellence, enshrined in our peer review system? Chapter 3, “Good is Good,” examines the fundamental requirements of any such system: the necessity of highlighting only a minority of the total scholarly output, based on community standards, and of disseminating that minority of work to communities of thought and practice. The chapter compares print-age forms of vetting with native web forms of assessment and review, and proposes ways that digital methods can supplement—or even replace—our traditional modes of peer review.

“The Value, and Values, of Openness,” Chapter 4, broadly examines the nature of the web’s openness. Oddly, this openness is both the easiest trait of the web to understand and its most complex, once one begins to dig deeper. The web’s radical openness not only has led to calls for open access to academic work, which has complicated the traditional models of scholarly publishers and societies; it has also challenged our academic predisposition toward perfectionism—the desire to only publish in a “final” format, purged (as much as possible) of error. Critically, openness has also engendered unexpected uses of online materials—for instance, when Nate Silver refactored poll numbers from the raw data polling agencies posted.

Ultimately, openness is at the core of any academic model that can operate effectively on the web: it provides a way to disseminate our work easily, to assess what has been published, and to point to what’s good and valuable. Openness can naturally lead—indeed, is leading—to a fully functional shadow academic system for scholarly research and communication that exists beyond the more restrictive and inflexible structures of the past.

Video: The Ivory Tower and the Open Web

Here’s the video of my plenary talk “The Ivory Tower and the Open Web,” given at the Coalition for Networked Information meeting in Washington in December, 2010. A general description of the talk:

The web is now over twenty years old, and there is no doubt that the academy has taken advantage of its tremendous potential for disseminating resources and scholarship. But a full accounting of the academic approach to the web shows that compared to the innovative vernacular forms that have flourished over the past two decades, we have been relatively meek in our use of the medium, often preferring to impose traditional ivory tower genres on the web rather than import the open web’s most successful models. For instance, we would rather digitize the journal we know than explore how blogs and social media might supplement or change our scholarly research and communication. What might happen if we reversed that flow and more wholeheartedly embraced the genres of the open web?

I hope the audience for this blog finds it worthy viewing. I enjoyed talking about burrito websites, Layer Tennis, aggregation and curation services, blog networks, Aaron Sorkin’s touchiness, scholarly uses of Twitter, and many other high- and low-brow topics all in one hour. (For some details in the images I put up on the screen, you might want to follow along with this PDF of the slides.) I’ll be expanding on the ideas in this talk in an upcoming book with the same title.

[youtube=http://www.youtube.com/watch?v=yeNjiuw-6gQ&w=480&h=385]

A Conversation with Richard Stallman about Open Access

[An email exchange with Richard Stallman, father of free software, copyleft, GNU, and the GPL, reprinted here in redacted form with Stallman’s permission. Stallman tutors me in the important details of open access and I tutor him in the peculiarities of humanities publishing.]

RS: [Your] posting [“Open Access Publishing and Scholarly Values”] doesn’t specify which definition of “open access” you’re arguing for — but that is a fundamental question.

When the Budapest Declaration defined open access, the crucial condition was that users be free to redistribute copies of the articles.  That is an ethical imperative in its own right, and a requisite for proper and safe archiving of the work.

People paid more attention to the other condition specified in the Budapest Declaration: that the publication site allow access by anyone.  This is a good thing, but need not be explicitly required, because the other condition (freedom to redistribute) will have this as a consequence.  Many universities and labs will set up mirror sites, and everyone will thus have access.

More recently, some have started using a modified definition of “open access” which omits the freedom to redistribute.  As a result, “open access” is no longer a clear rallying point.  I think we should now campaign for “redistributable publication.”

What are your thoughts on this?

DC: I probably should have been clearer in my post that I’m for the maximal access—and distribution—of which you speak. Alas, the situation is actually worse than you imagine, especially in the humanities, where I work, and which is about a decade behind the sciences in open access. Beyond the muddying of the waters through terms like “Green OA” and “Gold OA” is the fact that academic publishing is horribly wrapped up (again, more so in the humanities) with structural problems related to reputation, promotion, and tenure. So my colleagues worry more about truly open publications “counting” vs. publications that are simply open to reading on a commercial publisher’s website. That is why I think the big question is not the licensing or the technology of decentralized publishing, posting and free distribution of papers, etc., but the social realm in which academic publishing sits. I’m working now on pragmatic ways to change that very conservative realm.

Put another way: when software developers write good (open) code, other developers recognize that quality, independent of where the code resides; in humanities publishing, packaging (including the imprimatur of a press, the sense that a work has jumped some (often mythical) peer-review hurdle) counts for too much right now.

RS: [“Green OA” and “Gold OA”] are new to me — can you tell me what they mean?

So my colleagues worry more about truly open publications “counting” vs. publications that are simply open to reading on a commercial publisher’s website.

I don’t understand that sentence.

That is why I think the big question is not the licensing or the technology of decentralized publishing, posting and free distribution of papers, etc., but the social realm in which academic publishing sits.

Ethically speaking, what matters is the license used. That’s what determines whether the publishing is ethical or not. Are you saying that the social realm contains the obstacle to the adoption of ethical publication methods?

Put another way: when software developers write good (open) code, other developers recognize that quality, independent of where the code resides.

Programmers can tell if code is well-written, assuming they are allowed to read it, but how does that relate? Are you saying that in the humanities people often judge work based on where it is published, and have no other way to determine what is good or bad?

DC: Green O[pen] A[ccess] = when a professor deposits her finished article in a university repository after it is published. Theoretically that article will then be available (if people can find the website for the institution’s repository), even if the journal keeps it gated.

Gold OA = when the journal itself (rather than the repository) is open access; may involve the author paying a fee (often around $1-3K). Still probably doesn’t have a redistribution license, but it’s not behind a publisher’s digital gates.

Counting = counting in the academic promotion and tenure process. Much of the problem here is (I believe misplaced) concern about the effect of open access on one’s career.

Are you saying that the social realm contains the obstacle to the adoption of ethical publication methods?

Correct. And much of it has to do with the meekness of academics (especially in the humanities, bastion of liberalism in most other ways) to challenge the system to create a more ethical publication system, one controlled by the community of scholars rather than commercial publishers who profit from our work.

Are you saying that in the humanities people often judge work based on where it is published, and have no other way to determine what is good or bad?

Amazing as it may sound, many academics do indeed judge a work that way, especially in tenure and promotion processes. There are some departments that actually base promotion and tenure on the number of pages published in the top (mostly gated) journals.

RS: [Terms like “Green OA” and “Gold OA” provide] even more reason to reject the term “open access” and demand redistributable publication.

Maybe some leading scholars could be recruited to start a redistributable journal.  Their names would make it prestigious.

DC: That’s what PLoS did (http://plos.org) in the sciences. Unclear if the model is replicable in the humanities, but I’m trying.

UPDATE: This was an off-hand conversation with Stallman, and my apologies for the quick (and poor) descriptions of a couple of open access options. But I think the many commenters below who are focusing on the fine differences between kinds of OA are missing the central themes of this conversation.

Peer Review and the Most Influential Publications

Thanks to Josh Greenberg, I’ve been mulling over this fascinating paper I missed from last winter about the relative impact of science articles published in three different ways in the Proceedings of the National Academy of Sciences (PNAS). It speaks to the question of how important traditional peer review is, and how we might introduce other modes of scholarly communication and review.

PNAS now allows for three very different modes of article submission:

The majority of papers published in PNAS are submitted directly to the journal and follow the standard peer review process. The editorial board appoints an editor for each Direct submission, who then solicits reviewers. During the review process the authors are blinded to the identities of both the editor and the referees. PNAS refers to this publication method as “Track II”. In addition to the direct submission track, members of the National Academy of Sciences (NAS) are allowed to “Communicate” up to two papers per year for other authors. Here, authors send their paper to the NAS member, who then procures reviews from at least two other researchers and submits the paper and reviews to the PNAS editorial board for approval. As with Direct submissions, authors of Communicated papers are at least in theory blinded to the identity of their reviewers, but not to the identity of the editor. PNAS refers to this publication method as “Track I”. Lastly, NAS members are allowed to “Contribute” as many of their own papers per year as they wish. Here, NAS members choose their own referees, collect at least two reviews, and submit their paper along with the reviews to the PNAS editorial board. Peer review is no longer blind, as the authoring NAS member selects his or her own reviewers. PNAS refers to this publication method as “Track III”… Examining papers published in PNAS provides an opportunity to evaluate how these differences in the submission and peer review process within the same journal affect the impact of the papers finally published. The possibility that impact varies systematically across track has received a great deal of recent attention, particularly in light of the decision by PNAS to discontinue Track I. The citation analysis we now present provides a quantitative treatment of the quality of papers published through each track, a discussion which has hitherto been largely anecdotal in nature.

Here’s the eye-opening conclusion:

The analysis presented here clearly demonstrates variation in impact among papers published using different review processes at PNAS. We find that overall, papers authored by NAS members and Contributed to PNAS are cited significantly less than papers which are Direct submissions. Strikingly, however, we find that the 10% most cited Contributed papers receive significantly more citations than the 10% most cited Direct submissions. Thus the Contributed track seems to yield less influential papers on average, but is more likely to produce truly exceptional papers. [emphasis mine]
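The statistical pattern here, a lower mean but a heavier upper tail, is easy to illustrate with a toy computation. The citation counts below are invented for illustration and are not drawn from the PNAS data:

```python
# Toy illustration of how one track can have a lower mean citation count yet a
# stronger top decile. These citation counts are invented, not PNAS data.
from statistics import mean

direct      = [12, 15, 18, 20, 22, 25, 28, 30, 35, 60]   # steadier middle
contributed = [2, 3, 4, 5, 6, 8, 10, 15, 40, 150]        # weaker average, fatter tail

def top_decile_mean(counts, frac=0.10):
    """Mean of the most-cited fraction of papers (at least one paper)."""
    k = max(1, int(len(counts) * frac))
    return mean(sorted(counts, reverse=True)[:k])

print(f"Direct:      mean={mean(direct):.1f}, top 10% mean={top_decile_mean(direct):.1f}")
print(f"Contributed: mean={mean(contributed):.1f}, top 10% mean={top_decile_mean(contributed):.1f}")
# Contributed papers average fewer citations here (24.3 vs. 26.5), but the most
# cited Contributed paper (150) far outstrips the most cited Direct one (60).
```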

I suspect this will hold true for many new kinds of scholarly communication that are liberated from traditional peer review. Due to their more open and freewheeling nature, these genres, like blogging, will undoubtedly contain much dreck, and thus be negatively stereotyped by many in the professoriate, who (as I have noted in this space) are inordinately conservative when it comes to scholarly communication. But in that sea of nontraditionally reviewed material will be many of the most creative and influential publications. I’m willing to bet this pattern will be even more pronounced in the humanities, where traditional peer review is particularly adept at homogenizing scholarly work.

Just a thought for Open Access Week.

Emerging Genres in Scholarly Communication

If you haven’t read it already, I strongly recommend the recently released report from the eighth annual Scholarly Communication Institute, which tackled emerging genres in scholarly communication.

Current print-based models of scholarly production, assessment, and publication have proven insufficient to meet the demands of scholars and students in the twenty-first century. In the humanities, what literary scholar James Chandler calls “the predominating tenure genres” of monograph and journal articles find themselves under assault from a perfect storm of major dislocations affecting higher education. Publishers are struggling to remake business models that are failing. Libraries strain to keep up acquisitions of print materials as the supply of and demand for digital publications escalate. The reliance of faculty on tenure and review models tied to endangered print genres leads to the disregard of innovation and new methodologies. And mobile, digitally fluent students entering undergraduate and graduate schools are at risk of alienation from the historic core of humanistic inquiry, constrained by outmoded regimes of creation and access.

The goal of SCI 8 was to reimagine the ecology of scholarly publishing, based on careful assessment of new genres, behaviors, and modes of working that have strongly emerged. The Institute focused on new genres in humanities scholarship because they are leading indicators of an information ecosystem that centers around digital evidence, digital authorship, digital dissemination, and digital use.

A must-read.