Author: Dan Cohen

The Social Contract of Scholarly Publishing

When Roy Rosenzweig and I finished writing a full draft of our book Digital History, we sat down at a table and looked at the stack of printouts.

“So, what now?” I said to Roy naively. “Couldn’t we just publish what we have on the web with the click of a button? What value does the gap between this stack and the finished product have? Isn’t it 95% done? What’s the last five percent for?”

We stared at the stack some more.

Roy finally broke the silence, explaining the magic of the last stage of scholarly production between the final draft and the published book: “What happens now is the creation of the social contract between the authors and the readers. We agree to spend considerable time ridding the manuscript of minor errors, and the press spends additional time on other corrections and layout, and readers respond to these signals—a lack of typos, nicely formatted footnotes, a bibliography, specialized fonts, and a high-quality physical presentation—by agreeing to give the book a serious read.”

I have frequently replayed that conversation in my mind, wondering about the constitution of this social contract in scholarly publishing, which is deeply related to questions of academic value and reward.

For the ease of conversation, let’s call the two sides of the social contract of scholarly publishing the supply side and the demand side. The supply side is the creation of scholarly works, including writing, peer review, editing, and the form of publication. The demand side is much more elusive—the mental state of the audience that leads them to “buy” what the supply side has produced. In order for the social contract to work, for engaged reading to happen and for credit to be given to the author (or editor of a scholarly collection), both sides need to be aligned properly.

The social contract of the book is profoundly entrenched and powerful—almost mythological—especially in the humanities. As John Updike put it in his diatribe against the digital (and most humanities scholars and tenure committees would still agree), “The printed, bound and paid-for book was—still is, for the moment—more exacting, more demanding, of its producer and consumer both. It is the site of an encounter, in silence, of two minds, one following in the other’s steps but invited to imagine, to argue, to concur on a level of reflection beyond that of personal encounter, with all its merely social conventions, its merciful padding of blather and mutual forgiveness.”

As academic projects have experimented with the web over the past two decades, we have seen intense thinking about the supply side. Robust academic work has been reenvisioned in many ways: as topical portals, interactive maps, deep textual databases, new kinds of presses, primary source collections, and even software. Most of these projects strive to reproduce the magic of the traditional social contract of the book, even as they experiment with form.

The demand side, however, has languished. Far fewer efforts have been made to influence the mental state of the scholarly audience. The unspoken assumption is that the reader is more or less unchangeable in this respect, only able to respond to, and validate, works that have the traditional marks of the social contract: having survived a strong filtering process, near-perfect copyediting, the imprimatur of a press.

We need to work much more on the demand side if we want to move the social contract forward into the digital age. Despite Updike’s ode to the book, there are social conventions surrounding print that are worth challenging. Much of the reputational analysis that occurs in the professional humanities relies on cues beyond the scholarly content itself. The act of scanning a CV is an act fraught with these conventions.

Can we change the views of humanities scholars so that they may accept, as some legal scholars already do, the great blog post as being as influential as the great law review article? Can we get humanities faculty, as many tenured economists already do, to publish more in open access journals? Can we create the humanities equivalent of FiveThirtyEight.com, which provides in-depth political analysis as good as, if not better than, that of most newspapers, earning the grudging respect of journalists and political theorists? Can we get our colleagues to recognize outstanding academic work wherever and however it is published?

I believe that to do so, we may have to think less like humanities scholars and more like social scientists. Behavioral economists know that although the perception of value can come from the intrinsic worth of the good itself (e.g., the quality of a wine, already rather subjective), it is often influenced by many other factors, such as price and packaging (the wine bottle, how the wine is presented for tasting). These elements trigger a reaction based on stereotypes—if it’s expensive and looks well-wrapped, it must be valuable. The book and article have an abundance of these value triggers from generations of use, but we are just beginning to understand equivalent value triggers online—thus the critical importance of web design, and why the logo of a trusted institution or a university press can still matter greatly, even if it appears on a website rather than a book.

Social psychologists have also thought deeply about the potent grip of these idols of our tribe. They are aware of how cultural norms establish and propagate themselves, and tell us how the imposition of limits creates hierarchies of recognition. If we think the way they do, and consider the way the web works, one potential solution on the demand side might come not from the scarcity of production, as it did in a print world, but from the scarcity of attention. That is, value will be perceived in any community-accepted process that narrows the seemingly limitless texts to read or websites to view. Curation becomes more important than publication once publication ceases to be limited.

[image credit: Priki]

TEDxNYED

This weekend I’ll be one of the speakers at TEDxNYED, a conference examining the role of new media and technology in shaping the future of education. Other speakers include Lawrence Lessig, the Harvard legal scholar who has written on—and more importantly, acted on—the impact of digital technology on copyright; Jay Rosen, NYU journalism professor who is a powerful critic of traditional “savvy” journalism and advocate for decentralized citizen journalism (and who, in my opinion, is the academic currently using Twitter most effectively); Jeff Jarvis, author of What Would Google Do? and professor at the City University of New York’s Graduate School of Journalism; Gina Bianchini, CEO of the social network platform Ning; USC media scholar Henry Jenkins; KSU cultural anthropologist Michael Wesch, who is well-known for making new media comprehensible through sharp videos; and others working in digital education I’ve wanted to meet.

You can watch the proceedings live on the conference website from 10a-6p EST on Saturday, March 6, 2010. I’ll be on at 4:30p. The title of my talk is “The Last Digit of Pi.” (No, there is no last digit of pi. It’s what they call a “teaser.”)

Digital Campus #52 – What’s the Buzz?

The flawed launch of Google Buzz, with its privacy nightmare of exposing the social graph of one’s email account, makes me, Tom, Mills, and Amanda French consider the major issue of online privacy on this week’s Digital Campus podcast. Covering several stories, including Facebook attacks on teachers and teachers spying on students, we think about the ways in which technology enables new kinds of violations on campus—and what we should do about it. [Subscribe to this podcast.]

Digital Campus Podcasts #46-51

For the past few months I’ve neglected to reblog in this space the availability of fresh new Digital Campus podcasts for your listening pleasure. Below is a list of the major topics of each of those episodes—if you’re new to the podcast, pick one that sounds interesting and give it a listen. Or just subscribe to the podcast to have fresh episodes delivered automatically to iTunes or your favorite podcatcher.

Important changes have arrived in this span of podcasts as well. After my stint as the “show runner” for the first fifty episodes (doing the voice-overs and guiding the discussion in my best impression of a late-night jazz host), the other regulars on the podcast, Tom Scheinfeldt and Mills Kelly, will share these duties with me on a rotating basis, starting with Digital Campus #51, “The Inevitable iPad.” In addition, we’ve been joined by a rotation of “irregulars” who greatly liven up the proceedings and actually have intelligent things to say.

Episode 51 – The Inevitable iPad: Inevitably, we obsess over what the iPad means for academia, museums, and libraries.

Episode 50 – The Crystal Ball Returns: Our popular year-end/beginning-of-the-year wrap-up and predictions of what’s to come.

Episode 49 – The Twouble with Twecklers: Twitter at academic conferences; speeding up the web.

Episode 48 – Balkanization of the Web?: The revised Google Books settlement; News Corp. v. Google; Wikipedia in its maturity.

Episode 47 – Publishers Bleakly: As publishing business models erode, we look at new models in their infancy.

Episode 46 – Theremin Dreams: How people adopt new technologies; Nook; Droid.

The PITS and the iPad

The unveiling of Apple’s iPad this week provoked seemingly everyone to prognosticate about the future of the device and the future of computing in general. I was instead prodded to revisit the past—specifically, the original design goals for the Mac spelled out by the brilliant (and humorous) Jef Raskin. Just read the principles Raskin laid out in 1979 in “Design Considerations for an Anthropophilic Computer”:

This is an outline for a computer designed for the Person In The Street (or, to abbreviate: the PITS); one that will be truly pleasant to use, that will require the user to do nothing that will threaten his or her perverse delight in being able to say: “I don’t know the first thing about computers,” and one which will be profitable to sell, service and provide software for.

You might think that any number of computers have been designed with these criteria in mind, but not so. Any system which requires a user to ever see the interior, for any reason, does not meet these specifications. There must not be additional ROMS, RAMS, boards or accessories except those that can be understood by the PITS as a separate appliance. For example, an auxiliary printer can be sold, but a parallel interface cannot. As a rule of thumb, if an item does not stand on a table by itself, and if it does not have its own case, or if it does not look like a complete consumer item in [and] of itself, then it is taboo.

If the computer must be opened for any reason other than repair (for which our prospective user must be assumed incompetent) even at the dealer’s, then it does not meet our requirements.

Seeing the guts is taboo. Things in sockets is taboo (unless to make servicing cheaper without imposing too large an initial cost). Billions of keys on the keyboard is taboo. Computerese is taboo. Large manuals, or many of them (large manuals are a sure sign of bad design) is taboo. Self-instructional programs are NOT taboo.

There must not be a plethora of configurations. It is better to offer a variety of case colors than to have variable amounts of memory. It is better to manufacture versions in Early American, Contemporary, and Louis XIV than to have any external wires beyond a power cord.

And you get ten points if you can eliminate the power cord.

Any differences between models that do not have to be documented in a user’s manual are OK. Any other differences are not.

It is most important that a given piece of software will run on any and every computer built to this specification…

It is expected that sales of software will be an important part of the profit strategy for the computer.

It only took 31 years (not especially a long time in the history of technology), but I think the iPad is the device Raskin envisioned (given, as Raskin would have agreed, that “the interior” and “the guts” now include the software interior/guts as well as the hardware interior/guts).

Fraser Speirs has called the tech community’s negative reaction to the iPad “future shock” (via Daring Fireball), but it’s really the shockwave of the past—the radical vision of computing Raskin and Steve Jobs always had—finally catching up to the present.

Is Google Good for History?

[These are my prepared remarks for a talk I gave at the American Historical Association Annual Meeting, on January 7, 2010, in San Diego. The panel was entitled “Is Google Good for History?” and also featured talks by Paul Duguid of the University of California, Berkeley and Brandon Badger of Google Books. Given my propensity to go rogue, what I actually said likely differed from this text, but it represents my fullest, and, I hope, most evenhanded analysis of Google.]

Is Google good for history? Of course it is. We historians are searchers and sifters of evidence. Google is probably the most powerful tool in human history for doing just that. It has constructed a deceptively simple way to scan billions of documents instantaneously, and it has spent hundreds of millions of dollars of its own money to allow us to read millions of books in our pajamas. Good? How about Great?

But then we historians, like other humanities scholars, are natural-born critics. We can find fault with virtually anything. And this disposition is unsurprisingly exacerbated when a large company, consisting mostly of better-paid graduates from the other side of campus, muscles into our turf. Had Google spent hundreds of millions of dollars to build the Widener Library at Harvard, surely we would have complained about all those steps up to the front entrance.

Partly out of fear and partly out of envy, it’s easy to take shots at Google. While it seems that an obsessive book about Google comes out every other week, where are the volumes of criticism of ProQuest or Elsevier or other large information companies that serve the academic market in troubling ways? These companies, which also provide search services and digital scans, charge universities exorbitant amounts for the privilege of access. They leech money out of library budgets every year that could be going to other, more productive uses.

Google, on the other hand, has given us Google Scholar, Google Books, newspaper archives, and more, often besting commercial offerings while being freely accessible. In this bigger picture, away from the myopic obsession with the Biggest Tech Company of the Moment (remember similar diatribes against IBM, Microsoft?), Google has been very good for history and historians, and one can only hope that they continue to exert pressure on those who provide costly alternatives.

Of course, like many others who feel a special bond with books and our cultural heritage, I wish that the Google Books project were not under the control of a private entity. For years I have called for a public project, or at least a university consortium, to scan books on the scale Google is attempting. I’m envious of France’s recent announcement that it will spend a billion dollars on public scanning. In addition, the Center for History and New Media has a strong relationship with the Internet Archive to put content in a non-profit environment that will maximize its utility and distribution and make that content truly free, in all senses of the word. I would much rather see Google’s books at the Internet Archive or the Library of Congress. There is some hope that HathiTrust will be this non-Google champion, but it is still relying mostly on Google’s scans. The likelihood of a publicly funded scanning project in the age of Tea Party reactionaries is slim.

* * *

Long-time readers of my blog know that I have not pulled punches when it comes to Google. To this day the biggest spike in readership on my blog was when, very early in Google’s book scanning project, I casually posted a scan of a human hand I found while looking at an edition of Plato. The post ended up on Digg, and since then it has been one of the many examples used by Google’s detractors to show a lack of quality in their library project.

Let’s discuss the quality issues for a moment, since they are a point of obsession within the academy, an obsession I feel is slightly misplaced. Of course Google has some poor scans—as the saying goes, haste makes waste—but I’ve yet to see a scientific survey of the overall percentage of pages that are unreadable or missing (surely a minuscule fraction in my viewing of scores of Victorian books). Regarding metadata errors, as Jon Orwant of Google Books has noted, when you are dealing with a trillion pieces of metadata, you are likely to have millions of errors in need of correction. Let us also not pretend the bibliographical world beyond Google is perfect. Many of the metadata problems with Google Books come from library partners and others outside of Google.

Moreover, Google likely has remedies for many of these inadequacies. Google is constantly improving its OCR and metadata correction capabilities, often in clever ways. For instance, it recently acquired the reCAPTCHA system from Carnegie Mellon, which uses unwitting humans who are logging into online services to transcribe particularly hard or smudged words from old books. Google has also added a feedback mechanism for users to report poor scans. Truly bad books can be rescanned or replaced by other libraries’ versions. I find myself unmoved by quality complaints about Google Books that have engineering solutions. That’s what Google does; it solves engineering problems very well.

Indeed, we should recognize (and not without criticism, as I will note momentarily) that at its heart, Google Books is the outcome, like so many things at Google, of an engineering challenge and a series of mathematical problems: How can you scan tens of millions of books in a decade? It’s easy to say they should do a better job and get all the details right, but if you do the calculations with those key variables, as I assume Brandon and his team have done, you’ll probably see that a nearly perfect library scanning project would take a hundred years rather than ten. (That might be a perfectly fine trade-off, but that’s a different argument or a different project.) As in OCR, getting from 99% to 99.9% accuracy would probably take an order of magnitude longer and be an order of magnitude more expensive. That’s the trade-off they have decided to make, and as a company interested in search, where near-100% accuracy is unnecessary, and considering the possibilities for iterating toward perfection from an imperfect first version, it must have been an easy decision.
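
To make that trade-off concrete, here is a rough back-of-envelope sketch in Python. The corpus size, the ten-year timetable, and the assumption that each additional “nine” of accuracy multiplies the per-book effort by roughly ten are illustrative guesses of mine, not Google’s actual figures.

```python
# Back-of-envelope sketch of the speed-versus-accuracy trade-off in a mass
# scanning project. All numbers are illustrative assumptions, not Google's.

books = 15_000_000            # assumed corpus: "tens of millions" of books
years_fast = 10               # assumed length of the "good enough" project
books_per_day = books / (years_fast * 365)
print(f"~{books_per_day:,.0f} books per day at roughly 99% accuracy")

# Assume going from ~99% to ~99.9% accuracy costs an order of magnitude
# more effort per book, as with the last stretch of OCR correction.
effort_multiplier = 10
years_careful = years_fast * effort_multiplier
print(f"~{years_careful} years for the same corpus at roughly 99.9% accuracy")
```

Under those assumptions, the "good enough" project moves through several thousand books a day, while the near-perfect one stretches to a century.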

* * *

Google Books is incredibly useful, even with the flaws. Although I was trained at places with large research libraries of Google Books scale, I’m now at an institution that is far more typical of higher ed, with a mere million volumes and few rare works. At places like Mason, Google Books is a savior, enabling research that could once only be done if you got into the right places. I regularly have students discover new topics to study and write about through searches on Google Books. You can only imagine how historical researchers and all students and scholars feel in even less privileged places. Despite its flaws, it will be the source of much historical scholarship, from around the globe, over the coming decades. It is a tremendous leveler of access to historical resources.

Google is also good for history in that it challenges age-old assumptions about the way we have done history. Before the dawn of massive digitization projects and their equally important indices, we necessarily had to pick and choose from a sea of analog documents. All of that searching and sifting we did, and the particular documents and evidence we chose to write on, were—let’s admit it—prone to many errors. Read it all, we were told in graduate school. But who ever does? We sift through large archives based on intuition; occasionally we even find important evidence by sheer luck. We have sometimes made mountains out of molehills because, well, we only have time to sift through molehills, not mountains. Regardless of our technique, we always leave something out; in an analog world we have rarely been comprehensive.

This widespread problem of anecdotal history, as I have called it, will only get worse. As more documents are scanned and go online, many works of historical scholarship will be exposed as flimsy and haphazard. The existence of modern search technology should push us to improve historical research. It should tell us that our analog, necessarily partial methods have hidden from us the potential of taking a more comprehensive view, aided by less capricious retrieval mechanisms which, despite what detractors might say, are often more objective than leafing rapidly through paper folios on a time-delimited jaunt to an archive.

In addition, listening to Google may open up new avenues of exploring the past. In my book Equations from God I argued that mathematics was generally considered a divine language in 1800 but was “secularized” in the nineteenth century. Part of my evidence was that mathematical treatises, which often contained religious language in the early nineteenth century, lost such language by the end of the century. By necessity, researching in the pre-Google Books era, my textual evidence was limited—I could only read a certain number of treatises and chose to focus (I’m sure this will sound familiar) on the writings of high-profile mathematicians. The vastness of Google Books for the first time presents the opportunity to do a more comprehensive scan of Victorian mathematical writing for evidence of religious language. This holds true for many historical research projects.
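
As a toy illustration of what such a comprehensive scan might look like, here is a minimal sketch that tallies religious vocabulary by decade across a folder of OCRed treatises. The directory name, the year-first filename convention, and the word list are all hypothetical; this is not how Equations from God was researched, only a sketch of the kind of analysis mass digitization makes possible.

```python
# Minimal sketch: track religious vocabulary across a corpus of OCRed
# mathematical treatises, grouped by decade. The "treatises" directory,
# the year-first filename convention (e.g. "1837_somework.txt"), and the
# word list are hypothetical illustrations, not a real dataset.
import re
from collections import Counter
from pathlib import Path

RELIGIOUS_TERMS = {"god", "divine", "creator", "providence", "almighty"}

term_counts = Counter()   # decade -> religious-term occurrences
word_counts = Counter()   # decade -> total words

for path in Path("treatises").glob("*.txt"):
    decade = (int(path.name[:4]) // 10) * 10
    words = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
    word_counts[decade] += len(words)
    term_counts[decade] += sum(1 for w in words if w in RELIGIOUS_TERMS)

for decade in sorted(word_counts):
    rate = term_counts[decade] / max(word_counts[decade], 1)
    print(f"{decade}s: {rate:.5%} of words are religious terms")
```

Crude as it is, a count like this across thousands of digitized treatises would complement, and test, the close reading of a few high-profile mathematicians.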

So Google has provided us not only with free research riches but also with a helpful direct challenge to our research methods, for which we should be grateful. Is Google good for history? Of course it is.

* * *

But does that mean that we cannot provide constructive criticism of Google, to make it the best it can be, especially for historians? Of course not. I would like to focus on one serious issue that ripples through many parts of Google Books.

For a company that is a champion of openness, Google remains strangely closed when it comes to Google Books. Google Books seems to operate in ways that are very different from other Google properties, where Google aims to give it all away. For instance, I cannot understand why Google doesn’t make it easier for historians such as myself, who want to do technical analyses of historical books, to download them en masse. If it wanted to, Google could make a portal to download all public domain books tomorrow. I’ve heard the excuses from Googlers: But we’ve spent millions to digitize these books! We’re not going to just give them away! Well, Google has also spent millions on software projects such as Android, Wave, Chrome OS, and the Chrome browser, and they are giving those away. Google’s hesitance with regard to its books project shows that openness goes only so far at Google. I suppose we should understand that; Google is a company, not a public library. But that’s not the philanthropic aura they cast around Google Books at its inception or even today, in dramatic op-eds touting the social benefit of Google Books.

In short, complaining about the quality of Google’s scans distracts us from a much larger problem with Google Books. The real problem—especially for those in the digital humanities but increasingly for many others—is that Google Books is only open in the read-a-book-in-my-pajamas way. To be sure, you can download PDFs of many public domain books. But Google makes it difficult to download the OCRed text from multiple public domain books—what you would need for more sophisticated historical research. And when we move beyond the public domain, Google has pushed for a troubling, restrictive regime for millions of so-called “orphan” books.

I would like to see a settlement that offers greater, not lesser, access to those works, in addition to greater availability of what Cliff Lynch has called “computational access” to Google Books, a higher level of access that is less about reading a page image on your computer than about applying digital tools to many pages or books at one time to create new knowledge and understanding. This is partially promised in the Google Books settlement, in the form of text-mining research centers, but those centers will be behind a velvet rope, and I suspect the casual historian will be unlikely to ever use them. Google has elaborate APIs, or application programming interfaces, for most of its services, yet only the most superficial access to Google Books.

For a company that thrives on openness and the empowerment of users and software developers, Google Books is a puzzlement. With much fanfare, Google has recently launched—evidently out of internal agitation—what it calls a “Data Liberation Front,” to ensure portability of data and openness throughout Google. On dataliberation.org, the website for the front, these Googlers list 25 Google projects and how to maximize their portability and openness—virtually all of the main services at Google. Sadly, Google Books is nowhere to be seen, even though it also includes user-created data, such as the My Library feature, not to mention all of the data—that is, books—that we have all paid for with our tax dollars and tuition. So while the Che Guevaras put up their revolutionary fist on one side of the Googleplex, their colleagues on the other side are working with a circumscribed group of authors and publishers to place messy restrictions onto large swaths of our cultural heritage through a settlement that few in the academy support.

Jon Orwant, Dan Clancy, and Brandon Badger have done an admirable job explaining much of the internal process of Google Books. But it still feels removed and alien in a way that other Google efforts are not. That is partly because they are lawyered up, and thus hamstrung from responding to some questions academics have, or from instituting more liberal policies and features. The same chutzpah that would lead a company to digitize entire libraries also led it to go too far with in-copyright books, leading to a breakdown with authors and publishers and the flawed settlement we have in front of us today.

We should remember that the reason we are in a settlement now is that Google didn’t have enough chutzpah to take the higher, tougher road—a direct challenge in the courts, the court of public opinion, or the Congress to the intellectual property regime that governs many books and makes them difficult to bring online, even though their authors and publishers are long gone. While Google regularly uses its power to alter markets radically, it has been uncharacteristically meek in attacking head-on this intellectual property tower and its powerful corporate defenders. Had Google taken a stronger stance, historians would have likely been fully behind their efforts, since we too face the annoyances that unbalanced copyright law places on our pedagogical and scholarly use of textual, visual, audio, and video evidence.

I would much rather have historians and Google work together. While Google as a research tool challenges our traditional historical methods, historians may very well have the ability to challenge and improve what Google does. Historical and humanistic questions are often at the high end of complexity among the engineering challenges Google faces, similar to, and even beyond, for instance, machine translation, and Google engineers might learn a great deal from our scholarly practice. Google’s algorithms have been optimized over the last decade to search through the hyperlinked documents of the Web. But those same algorithms falter when faced with the odd challenges of change over centuries, the alienness of the past, and the old books and documents that historians examine daily.

Because Google Books is the product of engineers, with tremendous talent in computer science but less sense of the history of the book or the book as an object rather than bits, it founders in many respects. Google still has no decent sense of how to rank search results in humanities corpora. Bibliometrics and text mining work poorly on these sources (as opposed to, say, the highly structured scientific papers Google Scholar specializes in). Studying how professional historians rank and sort primary and secondary sources might tell Google a lot, which it could use in turn to help scholars.

Ultimately, the interesting question might not be, Is Google good for history? It might be: Is history good for Google? To both questions, my answer is: Yes.

Digital Humanities Sessions at the 2010 AHA Meeting

Out of hundreds of sessions at the 2010 American Historical Association annual meeting, nine are on digital matters. Nine. I’m on one-third of those sessions. It’s 2010, and academic historians seem to feel that digital media and technology are not worth discussing, and that we can just go on doing what we’ve done, how we’ve done it, for another hundred years. For comparison, the 2009 MLA had three times as many digital humanities panels.

Anyway, the digital sessions (hope to see you there):

Is Google Good for History?

Crossing the Electronic Rubicon: Navigating the Challenges and Opportunities Presented by Archival Records Created and Stored Exclusively in Digital Format

Teaching Sourcing by Bridging Digital Libraries and Electronic Student Assignments

Humanities in the Digital Age, Part 1: Digital Poster Session

Humanities in the Digital Age, Part 2: A Hands-On Workshop

Scholarly Publishing and e-Journals

What Becomes of Print in the Digital Age?

Assessing Resources: Analysis and Comment on EDSITEment Lessons in the High School and Undergraduate Classrooms

American Religious Historians Online

Introducing Digital Humanities Now

Do the digital humanities need journals? Although I’m very supportive of the new journals that have launched in the last year, and although I plan to write for them from time to time, there’s something discordant about a nascent field—one so steeped in new technology and new methods of scholarly communication—adopting a format that is struggling in the face of digital media.

I often say to non-digital humanists that every Friday at 5 I know all of the most important books, articles, projects, and news of the week—without the benefit of a journal, a newsletter, or indeed any kind of formal publication by a scholarly society. I pick up this knowledge by osmosis from the people I follow online.

I subscribe to the blogs of everyone working centrally or tangentially in digital humanities. As I have argued from the start, and against the skeptics and traditionalists who think blogs can only be narcissistic, half-baked diaries, these outlets are just publishing platforms by another name, and in my area there are an incredible number of substantive ones.

More recently, social media such as Twitter has provided a surprisingly good set of pointers toward worthy materials I should be reading or exploring. (And as happened with blogs five years ago, the critics are now dismissing Twitter as unscholarly, missing the filtering function it somehow generates among so many unfiltered tweets.) I follow as many digital humanists as I can on Twitter, and have created a comprehensive list of people in digital humanities. (You can follow me @dancohen.)

For a while I’ve been trying to figure out a way to show this distilled “Friday at 5” view of digital humanities to those new to the field, or those who don’t have time to read many blogs or tweets. This week I saw a tweet from Tom Scheinfeldt (blog|Twitter) (who in turn saw a tweet from James Neal) about a new service called Twittertim.es, which creates a real-time publication consisting of articles highlighted by people you follow on Twitter. I had a thought: what if I combined the activities of several hundred digital humanities scholars with Twittertim.es?

Digital Humanities Now is a new web publication that is the experimental result of this thought. It aggregates thousands of tweets and the hundreds of articles and projects those tweets point to, and boils everything down to the most-discussed items, with commentary from Twitter. A slightly longer discussion of how the publication was created can be found on the DHN “About” page.
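
The mechanics can be sketched in a few lines: gather recent tweets from the people a field follows, pull out the links they share, and surface the items mentioned by several distinct people. The sample tweets, the URL pattern, and the three-sharer threshold below are hypothetical stand-ins of mine, not the actual Twittertim.es pipeline behind DHN.

```python
# Sketch of a DHN-style aggregator: count how many distinct people shared
# each link and surface the most-discussed items. Sample data, URL pattern,
# and the three-sharer threshold are assumptions, not DHN's real pipeline.
import re
from collections import defaultdict

# Hypothetical harvest of (account, tweet text) pairs from a list of
# digital humanities scholars.
tweets = [
    ("scholar_a", "Great piece on grading in a digital age http://example.org/grading"),
    ("scholar_b", "Worth reading: http://example.org/grading"),
    ("scholar_c", "http://example.org/grading is making the rounds"),
    ("scholar_d", "New law archive search http://example.org/law-archive"),
]

URL_RE = re.compile(r"https?://\S+")
sharers = defaultdict(set)  # url -> distinct accounts that linked to it

for account, text in tweets:
    for url in URL_RE.findall(text):
        sharers[url].add(account)

# Items shared by at least three distinct people make the "front page".
front_page = sorted(
    ((url, people) for url, people in sharers.items() if len(people) >= 3),
    key=lambda item: -len(item[1]),
)
for url, people in front_page:
    print(f"{len(people)} sharers: {url}")
```

The interesting editorial decisions live in the threshold and the list of accounts, which is exactly where a light human touch could later be applied.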

[Image: Digital Humanities Now home page]

Does the process behind DHN work? From the early returns, the algorithms have done fairly well, putting on the front page articles on grading in a digital age, bringing high-speed networking to liberal arts colleges, Google’s law archive search, and (appropriately enough) a talk on how to deal with streams of content given limited attention. Perhaps Digital Humanities Now will show a need for the light touch of a discerning editor. This could certainly be added on top of the raw feed of all interest items (about 50 a day, out of which only 2 or 3 make it into DHN), but I like the automated simplicity of DHN 1.0.

Despite what I’m sure will be some early hiccups, my gut is that some version of this idea could serve as a rather decent new form of publication that focuses the attention of those in a particular field on important new developments and scholarly products. I’m not holding my breath that someday scholars will put an appearance in DHN on their CVs. But as I recently told an audience of executive directors of scholarly societies at an American Council of Learned Societies meeting, if you don’t do something like this, someone else will.

I suppose DHN is a prod to them and others to think about new forms of scholarly validation and attention, beyond the journal. Ultimately, journals will need the digital humanities more than we need them.

Digital Campus #45 – Wave Hello

If you’ve wondered what an academic trying to podcast while on Google Wave might sound like, you need listen no further than the latest Digital Campus podcast. In addition to an appraisal of Wave, we cover the FTC ruling on bloggers accepting gifts (such as free books from academic presses), the great Kindle-on-campus experiment, and (of course) another update on the Google Books (un)settlement. Joining Tom, Mills, and me is another new irregular, Lisa Spiro. She’s the intelligent one who’s paying attention rather than muttering while watching Google waves go by. [Subscribe to this podcast.]

Workshop on APIs for the Digital Humanities

Longtime readers of this blog may remember that one of my first posts examined the potential role for APIs (application programming interfaces) in the humanities. It’s also been a long-running theme in this space that APIs can play a critical role in digital research and tool-building. So I’m very much looking forward to this weekend’s workshop on APIs for the digital humanities in Toronto sponsored by NiCHE: Network in Canadian History & Environment. Like others, I’ll be tweeting the conference @dancohen using the hashtag #apiworkshop.