Author: Dan Cohen

What’s the Matter with Ebooks?

[As you may have noticed, I haven’t posted to this blog for over a year. I’ve been extraordinarily busy with my new job. But I’m going to make a small effort to reinvigorate this space, adding my thoughts on evolving issues that I’d like to explore without those thoughts being improperly attributed to the Digital Public Library of America. This remains my personal blog, and you should consider these my personal views. I will also be continuing to post on DPLA’s blog, as I have done on this topic of ebooks.]

Over the past two years I’ve been tracking ebook adoption, and the statistics are, frankly, perplexing. After Amazon released the Kindle in 2007, there was a rapid growth in ebook sales and readership, and the iPad’s launch three years later only accelerated the trend.

Then something odd happened. By most media accounts, ebook adoption has plateaued at about a third of the overall book market, and this stall has lasted for over a year now. Some are therefore taking it as a Permanent Law of Reading: There will be electronic books, but there will always be more physical books. Long live print!

I read both e- and print books, and I appreciate the arguments about the native advantages of print. I am a digital subscriber to the New York Times, but every Sunday I also get the printed version. The paper feels expansive, luxuriant. And I do read more of it than the daily paper on my iPad, as many articles catch my eye and the flipping of pages requires me to confront pieces that I might not choose to read based on a square inch of blue-tinged screen. (Also, it’s Sunday. I have more time to read.) Even though I read more ebooks than printed ones at this point, it’s hard not to listen to the heart and join the Permanent Law chorus.

But my mind can’t help but disagree with my heart. Yours should too if you run through a simple mental exercise: jump forward 10 or 20 or 50 years, and you should have a hard time saying that e-reading technology won’t be much better—perhaps even indistinguishable from print—and that adoption won’t be widespread. Even today, studies have shown that libraries that hold training sessions for patrons with iPads and Kindles see the use of ebooks skyrocket—highlighting that the problem is in part that today’s devices and ebook services are hard to use. Availability of titles, pricing (compared to paperback), DRM, and a balkanization of ebook platforms and devices all dampen adoption as well.

But even the editor of the New York Times understands the changes ahead, despite his love for print:

How long will print be around? At a Loyola University gathering in New Orleans last week, the executive editor [of the Times], Dean Baquet, noted that he “has as much of a romance with print as anyone.” But he also admitted, according to a Times-Picayune report, that “no one thinks there will be a lot of print around in 40 years.”

Forty years is a long time, of course—although it is a short time in the history of the book. The big question is when the changeover will occur—next year, in five years, in Baquet’s 2055?

The tea leaves, even now, are hard to read, but I’ve come to believe that part of this cloudiness is because there’s much more dark reading going on than the stats are showing. Like dark matter, dark reading is the consumption of (e)books that somehow isn’t captured by current forms of measurement.

For instance, usually when you hear about the plateauing of ebook sales, you are actually hearing about the sales of ebooks from major publishers in relation to the sales of print books from those same publishers. That’s a crucial qualification. But sales of ebooks from these publishers are just a fraction of overall e-reading. Other accounts, which try to shine light on ebook adoption by looking at markets like Amazon (which accounts for a scary two-thirds of ebook sales), show that a huge and growing percentage of ebooks are being sold by indie publishers or authors themselves rather than the bigs, and a third of them don’t even have ISBNs, the universal ID used to track most books.

The commercial statistics also fail to account for free e-reading, such as from public libraries, which continues to grow apace. The Digital Public Library of America and other sites and apps have millions of open ebooks, which are never chalked up as a sale.

Similarly, while surveys of the young continue to show their devotion to paper, other studies have shown that about half of those under 30 read an ebook in 2013, up from a quarter of Millennials in 2011—and that study is already dated. Indeed, most of the studies that highlight our love for print over digital are several years old (or more) at this point, a period in which large-format, high-resolution smartphone adoption (much better for reading) and new all-you-can-read ebook services, such as Oyster, Scribd, and Kindle Unlimited, have emerged. Nineteen percent of Millennials have already subscribed to one of these services, a number considered low by the American Press Institute, but one that strikes me as remarkably high, and yet another contributing factor to the dark reading mystery.

I’m a historian, not a futurist, but I suspect that we’re not going to have to wait anywhere near forty years for ebooks to become predominant, and that the “plateau” is in part a mirage. That may cause some hand-wringing among book traditionalists, an emotion that is understandable: books are treasured artifacts of human expression. But in our praise for print we forget the great virtues of digital formats, especially the ease of distribution and greater access for all—if done right.

Information Overload, Past and Present

The end of this year has seen much handwringing over the stress of information overload: the surging, unending streams, the inexorable decline of longer, more intermittent forms such as blogs, the feeling that our online presence is scattered and unmanageable. This worry spike had me scurrying back to Ann Blair’s terrific history of pre-modern information stress, Too Much to Know. Blair notes how every era has dealt with similar feelings, and how people throughout the ages have come up with different solutions:

These days we are particularly aware of the challenges of information management given the unprecedented explosion of information associated with computers and computer networking…But the perception of and complaints about overload are not unique to our period. Ancient, medieval, and early modern authors and authors working in non-Western contexts articulated similar concerns, notably about the overabundance of books and the frailty of human resources for mastering them (such as memory and time).

The perception of overload is best explained, therefore, not simply as the result of an objective state, but rather as the result of a coincidence of causal factors, including existing tools, cultural or personal expectations, and changes in the quantity or quality of information to be absorbed and managed…But the feeling of overload is often lived by those who experience it as if it were an utterly new phenomenon, as is perhaps characteristic of feelings more generally or of self-perceptions in the modern or postmodern periods especially. Certainly the perception of experiencing overload as unprecedented is dominant today. No doubt we have access to and must cope with a much greater quantity of information than earlier generations on almost every issue, and we use technologies that are subject to frequent change and hence often new.

Blair identifies four “S’s of text management” from the past that we still use today: storing, sorting, selecting, and summarizing. She also notes the history of alternative solutions to information overload that are the equivalent of deleting one’s Twitter account: Descartes and other philosophers, for instance, simply decided to forget the library so they could start anew. Other to-hell-with-it daydreams proliferated too:

In the eighteenth century a number of writers articulated fantasies of destroying useless books to stem the never-ending accumulation…One critic has identified the articulation of the sublime as another kind of response to overabundance; Kant and Wordsworth are among the authors who described an experience of temporary mental blockage due to “sheer cognitive exhaustion,” whether triggered by sensory or mental overload.

When you ask historians which place and time they would most like to live in, it’s notable that they almost always choose eras and locales with a robust but not overwhelming circulation of ideas and art; just enough newness to chew on, but not too much to choke on; and a pervasive equanimity and thoughtfulness that the internet has not excelled at since the denizens of alt.tasteless invaded rec.pets.cats on Usenet. Jonathan Spence, for instance, imagines a life of moderation, sipping tea and trading considered thoughts in sixteenth-century Hangzhou.

Feels to me like there are many out there grasping for a similar circle of lively friends and deeper discussion as we head into 2014.

 

CC0 (+BY)

Those who have heard me talk about the Digital Public Library of America over the past six months know that I’m fond of saying that DPLA is as much a social project as a technical project. Much of what we do focuses on collaboration and coordination, which involves looking not just at technical—or legal—elements, but social ones.

It’s much easier to think of an issue solely as a technical problem (we just need to figure out how to code that properly), or as a legal problem (we just need to bind everyone under a contractual arrangement to achieve the desired outcome), than as a social issue, since the latter requires attention to more amorphous aspects such as ethics and politics. But being more nuanced about the mix of the social, technical, and legal can pay dividends.

Take DPLA’s metadata. (Please. Take our metadata. It’s all freely available on our site.) One of the questions I frequently get is why the Digital Public Library of America requires the metadata for items in our collection to be donated under a CC0 license. That license is maximally permissive; as its longer name implies, CC0 is in fact a Public Domain Dedication.

Metadata obviously has elements of the technical and legal. Without a stringent technical standard into which we normalize data from over a thousand institutions, and a serious digital infrastructure to transform that metadata into interfaces such as maps and timelines, we couldn’t work much magic. And since we are conscious of the legal realm in which many cultural heritage materials exist, we do ask for a contract that specifies CC0 for the metadata. (However, many would argue that even a CC0 license is unnecessary and should not be demanded; a purely descriptive set of metadata should not, by its very nature, be copyrightable under U.S. law. But that is a discussion for another day.)
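For the developers in the audience, grabbing that freely available metadata takes only a few lines of code. Here is a minimal sketch in Python, assuming you have requested one of DPLA’s free API keys; the endpoint and field names follow the DPLA API’s conventions, but treat the specifics as illustrative rather than authoritative.

```python
import requests  # third-party HTTP library: pip install requests

API_KEY = "YOUR_DPLA_API_KEY"  # free key, requested from the DPLA

def search_dpla(query, page_size=10):
    """Search the DPLA's aggregated, CC0-licensed item metadata."""
    resp = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": query, "page_size": page_size, "api_key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()["docs"]

# Print each item's title alongside the institution that contributed it.
for doc in search_dpla("New Orleans jazz"):
    source = doc.get("sourceResource", {})
    print(source.get("title"), "|", doc.get("dataProvider"))
```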

But why not ask for the most modest of additional restrictions, such as a license where attribution is required—a license with a -BY attached to the right? If we wish to tip our hat to those who created or donated the metadata, why not legally mandate it?

Those who use, reuse, and commingle data know the complex issues that arise with even simple additional requirements such as this. Data that flows from many sources will pick up, like fallen branches in the stream, a variety of ensnaring reeds, adding significant friction and complexity to some applications. But well-meaning people still want to provide attribution, and individuals and institutions might have social expectations of receiving credit. What to do?

Move the attribution from the legal realm into the social or ethical realm by pairing a permissive license with a strong moral entreaty.

For instance, the Tate recently released metadata for 70,000 works of art and 3,500 artists. The license they put on the data was CC0. But right next to that license is this block on “Usage Guidelines”:

These usage guidelines are based on goodwill. They are not a legal contract but Tate requests that you follow these guidelines if you use Metadata from our Collection dataset.

The Metadata published by Tate is available free of restrictions under the Creative Commons Zero Public Domain Dedication.

This means that you can use it for any purpose without having to give attribution. However, Tate requests that you actively acknowledge and give attribution to Tate wherever possible. Attribution supports future efforts to release other data. It also reduces the amount of ‘orphaned data’, helping retain links to authoritative sources.

As with many other things, our friends from Europeana were out in front on this, as Tate acknowledges on their GitHub page. Here’s Europeana’s metadata use page:

These usage guidelines are based on goodwill, they are not a legal contract but Europeana requests that you follow these guidelines if you use metadata from Europeana.

All metadata published by Europeana are available free of restriction under the Creative Commons CC0 1.0 Universal Public Domain Dedication. However, Europeana requests that you actively acknowledge and give attribution to all metadata sources, such as the data providers (being a specific cultural heritage institution) and any data aggregators, including Europeana.

Give credit where credit is due.

DPLA does the same thing with our Data Best Use Practices page.

I have been calling this implied or ethical attribution. Or, if you like short and snappy symbols, think of it as CC0 (+BY) rather than CC-BY (or ODC-BY).
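In practice, honoring the (+BY) half takes almost no code. A hypothetical sketch, with field names loosely modeled on DPLA-style records (they are assumptions for illustration, not a documented contract):

```python
def attribution_line(doc):
    """Build a courtesy credit for a CC0 record. Nothing legally requires
    this line, which is exactly the point of CC0 (+BY)."""
    contributor = doc.get("dataProvider") or "an unnamed contributing institution"
    aggregator = doc.get("provider", {}).get("name", "Digital Public Library of America")
    return "Courtesy of {}, via {}.".format(contributor, aggregator)
```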

The cynics, of course, will say that bad actors will do bad things with all that open data. But here’s the thing about the open web: bad actors will do bad things, regardless. They will ignore whatever license you have asserted, or use technical means to circumvent your technical lock. And yes, with CC0, commercial entities might well come and take all of that metadata—but that data includes pointers back to items and scans at libraries, archives, and museums, which are (or should be) in the business of disseminating knowledge as widely as possible. By being free with our metadata, we do not devalue those nonprofit institutions, but rather emphasize more broadly the incredible contents they hold.

The flip side of worries about bad actors is that we underestimate the number of good actors doing the right thing. It has been our experience looking at the many software developers (including commercial ones) who have used our data across the web and in DPLA-powered apps, for instance, that they have all maintained proper attribution, even though the CC0 license theoretically means that they can do with the data whatever they want.

I think CC0 (+BY) is the best of both worlds: the data in a free-flowing environment that enables creativity and reuse, with attribution still maintained by the vast majority of people who consider themselves part of a social contract.

The Digital Public Library of America, Me, and You

Twenty years ago Roy Rosenzweig imagined a compelling mission for a new institution: “To use digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past.” I’ve been incredibly lucky to be a part of that mission for over twelve years, at what became the Roy Rosenzweig Center for History and New Media, with the last five and a half years as director.

Today I am announcing that I will be leaving the center, and my professorship at George Mason University, the home of RRCHNM, but I am not leaving Roy’s powerful vision behind. Instead, I will be extending his vision—one now shared by so many—on a new national initiative, the Digital Public Library of America. I will be the founding executive director of the DPLA.

The DPLA, which you will be hearing much more about in the coming months, will be connecting the riches of America’s libraries, archives, and museums so that the public can access all of those collections in one place; providing a platform, with an API, for others to build creative and transformative applications upon; and advocating strongly for a public option for reading and research in the twenty-first century. The DPLA will in no way replace the thousands of public libraries that are at the heart of so many communities across this country, but instead will extend their commitment to the public sphere, and provide them with an extraordinary digital attic and the technical infrastructure and services to deliver local cultural heritage materials everywhere in the nation and the world.

The DPLA has been in the planning stages for the last few years, but is about to spin out of Harvard’s Berkman Center for Internet and Society and move from vision to reality. It will officially launch, as an independent nonprofit, on April 18 at the Boston Public Library. I will move to Boston with my family this summer to lead the organization, which will be based there. It is such a great honor to have this opportunity.

Until then I will be transitioning from my role as director of RRCHNM and from my academic life at Mason. Everything at the center will be in great hands, of course; as anyone who visits the center immediately grasps, it is a highly collaborative and nonhierarchical place with an amazing staff and an especially experienced and innovative senior staff. They will continue to shape “the future of the past,” as Roy liked to put it. I will miss my good friends at the center, but I still expect to work closely with them, since so many critical software initiatives, educational projects, and digital collections are based at RRCHNM. A search for a new director will begin shortly. I will also greatly miss my colleagues in Mason’s wonderful Department of History and Art History.

At the same time, I look forward to collaborating with new friends, both in the Boston office of the DPLA and across the United States. The DPLA is a unique, special idea—you don’t get to build a massive new library every day. It is apt that the DPLA will launch at the Boston Public Library’s McKim Building, with those potent words carved into stone above its entrance: “Free to all.” The architect Charles Follen McKim rightly called it “a palace for the people,” where anyone could enter to learn, create, and be entertained by the wonders of books and other forms of human expression.

We now have the chance to build something like this for the twenty-first century—a rare, joyous possibility in our too-often cynical age. I hope you will join me in this effort, with your ideas, your contributions, your energy, and your public spirit.

Let’s build the Digital Public Library of America together.

The Other Academy Awards

Two related problems have been bedeviling the current discussion of new modes of academic work. First, it remains unclear to many academics how we can effectively assess digital scholarship, given its many shapes and sizes and often complex, collaborative production. This problem is receiving so much attention right now that we devoted our entire last issue of the Journal of Digital Humanities to it.

Second, given that many of these digital genres—multimedia scholarly sites, sophisticated digital collections, long-form academic blogs, and the like—are published directly to the web, a need has arisen for post-publication, rather than traditional pre-publication, peer review. Last week I was on a panel at the 2013 American Historical Association annual meeting on the future of history journals and peer review, and many in the audience were journal editors who were skeptical about the notion of post-publication peer review. Indeed, for many history and humanities scholars, “post-publication peer review” is an oxymoron; the only true form of peer review is the one that occurs before publication, and that helps to determine in a binary way whether an article or book is published in the first place.

Yet there is an obvious form of post-publication peer review already in wide use—awards—and I would like to suggest that we use them even more widely to help solve the problem of how to assess digital scholarship. As they currently stand, academic awards are icing on the cake. The prizes that the AHA gives every year—mostly for books, with a few additional categories for articles, films, reference resources, and lifetime work—are wonderful signifiers of highly distinguished work. To receive one of these prizes is a major career achievement. One of the goals of the Roy Rosenzweig Prize for Innovation in Digital History, awarded by the AHA since 2009, was to validate the very top work in this new field.

One award isn’t nearly enough for our field, however, or for others that increasingly involve digital work. We need to recognize a broad swath of scholarship and innovation using awards, since it’s an effective way to signal creative, good work. Awards can be a clear form of professional validation for digital scholarship that is understandable to everyone in academia (including those outside of digital humanities), and that doesn’t rely on more controversial forms of post-publication peer review such as open review or crowdsourcing. How do we know that a professor’s blog is worthy of significant credit, that it is more than just musings and is having an impact in a field? A certifying scholarly organization or review panel has deemed it so, from a crowded field.

Furthermore, like the Grammys, we need to give awards not just for Record of the Year but for Best Jazz Instrumental and Achievement in Sound Engineering. Knowledgeable review panels with a deep understanding of the composition of digital work should be able to give out both general awards for digital projects and distinct awards, for instance, for outstanding work in user interfaces for digital archives.

Thankfully, there are already initiatives on this front. A new slate of annual Digital Humanities Awards has just launched, with an international review committee. (Nominations due today!) Now we need scholarly societies to back both interdisciplinary and disciplinary awards with their imprimatur. From conversations I’ve had recently, I sense that is likely to happen in the near future.

Of course, we need to be aware of the rather valid objection that awards can be overdone. We should avoid the digital equivalent of an award for Best Zydeco/Metal Duet. (Actually, that sounds incredible.) Last year the Recording Academy sensibly lowered the number of Grammy categories from 109 to 78. Furthermore, it’s important for awards to have some minimum number of entries or nominations every year, and as with book awards, peer review panels must retain the option of giving no award in a year if none of the options are deemed worthy. Awards must be meaningful.

Right now, however, we should think more broadly, rather than narrowly, about giving awards for digital work. There are precious few opportunities for digital projects to receive external validation. I continue to believe that other forms of post-publication peer review are needed as well (especially for developmental editing rather than vetting), but let’s at least start with a larger slate of rigorously determined awards and some (virtual) gold statues.

Visualizing the Uniqueness, and Conformity, of Libraries

Tucked away in a presentation on the HathiTrust Digital Library are some fascinating visualizations of libraries by John Wilkin, the Executive Director of HathiTrust and an Associate University Librarian at the University of Michigan. Although I’ve been following the progress of HathiTrust closely, I missed these charts, and I want to highlight them as a novel method for revealing a library fingerprint or signature using shared metadata.

With access to the catalogs of HathiTrust member libraries, Wilkin ran some comparisons of book holdings. His ingenious idea was not only to count how many libraries held each particular work, but to create a visualization of each member library based on how widely each book in its collection is held by other libraries.

In Wilkin’s graphs for each library, the X axis is the number of libraries containing a book (including the library the visualization represents), and the Y axis is the number of books. That is, each graph contains columns of books from 1 (the member library is the only one with a particular book) to 41 (every library in HathiTrust has a physical copy of a book). Let’s look at an example:

Reading the chart from left to right, the University of Illinois at Urbana-Champaign library has a small number of books that it alone holds (~1,000), around 25,000 that only one other library has (the “2” column), 36,000 that two other libraries have, etc.
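The computation behind these charts is simple enough to sketch, which may help demystify them. Here is a toy version in Python; it treats each member catalog as a bare set of book identifiers, which of course elides the real work of deduplicating records across institutions.

```python
from collections import Counter

def wilkin_profile(library, catalogs):
    """Histogram of holding counts for one library's books.

    catalogs: dict mapping library name -> set of book IDs it holds.
    Returns a Counter mapping {number of holding libraries: number of books}.
    """
    holdings = Counter()  # book ID -> how many libraries hold it
    for books in catalogs.values():
        holdings.update(books)
    return Counter(holdings[book] for book in catalogs[library])

# Toy data: three libraries, four books.
catalogs = {
    "A": {"b1", "b2", "b3"},
    "B": {"b2", "b3"},
    "C": {"b3", "b4"},
}
print(wilkin_profile("A", catalogs))  # Counter({1: 1, 2: 1, 3: 1})
```

Plotted with the holding count on the X axis, library A’s profile would lean left: one book it alone holds, one shared with a second library, and one that everyone has.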

What’s fascinating is that the overall curvature of a graph tells us a great deal about a particular library.

There are three basic types of libraries we can speak of using this visualization technique. First, there are left-leaning libraries, which have a high number of books that do not exist in many other libraries. These libraries have spent considerable effort and resources acquiring rare volumes. For example, Harvard, which has hundreds of thousands of books that only a handful of other libraries also have:

On the other side, there are right-leaning libraries, which consist mostly of books that are nearly universally held by other libraries. These libraries generally carry only the most circulated volumes, books that are expected to be found in any academic research library. For instance, Lafayette College:

Finally, there are rounded libraries, which don’t have many popular books or many rare books, but mostly works that an average number of similar libraries have. These libraries roughly echo their cohort (in this case, large university research libraries in the United States). They could be called—my apologies—well-rounded in their collecting, likely acquiring many scholarly monographs while still remaining selective rather than comprehensive. For instance, Northwestern University:

Of course, the library curve is often highly correlated with the host institution’s age, since older universities are more likely to have rare old books or unusual (e.g., local or regional) books. This correlation is apparent in this sequence of graphs of the University of California schools, from oldest to newest:





Beyond the three basic types, there are interesting anomalies as well. The University of Virginia is, unsurprisingly, a left-leaning library, but not quite as left-leaning as I would have expected:

Cornell is also left-leaning, but clearly has a large, idiosyncratic collection containing works that no other library has—note the spike at position “1”:

Moreover, one could imagine using Wilkin Graphs (I’m going to go ahead and name it that to give John full credit) to analyze the relative composition of other kinds of libraries. For instance, LibraryThing has a project called Legacy Libraries, containing the records of personal libraries of famous historical figures such as Thomas Jefferson. A researcher could create Wilkin Graphs for Jefferson and other American founders (in relation to each other), or among intellectuals from the Enlightenment.

Update: Sherman Dorn suggests Wilkin Profile rather than Wilkin Graph. Sure, rolls off the tongue better: Prospective college student on a campus visit asks the tour guide, “So what’s your library’s Wilkin Profile?” According to Constance Malpas, OCLC has created such profiles for 160 libraries. These graphs can be created with the Worldcat Collection Analysis service (which, alas, is not openly available).

Clarification: John Wilkin comments below that the reason for the spike in position 1 in the Cornell Wilkin Profile is that Cornell had a digitization program that added many unique materials to HathiTrust. This made me realize, with some help from Stanford Library’s Chris Bourg and Penn State’s Mike Furlough, that the numbers here are only for the shared HathiTrust collection (although that collection is very large—millions of items). Nevertheless, the general profile shapes should hold for more comprehensive datasets, although likely with occasional left and right shifts for certain libraries depending on additional unique book collections that have not been digitized. (That may explain the University of Virginia Wilkin Profile.) Note also that Google influenced the numbers here, since many of the scanned books come from the Google Books (née Google Library) project, introducing some selection bias which is only now being corrected—or worsened?—by individual institutional digitization initiatives, like Cornell’s.

Digital History at the 2013 AHA Meeting

It’s time for my annual list of digital history sessions at the American Historical Association meeting, this year in New Orleans, January 3-6, 2013. This year’s program extends last year’s surging interest in the effect digital media and technology are having on research and the profession. In addition, a special track for the 2013 meeting is entitled “The Public Practice of History in and for a Digital Age.” Looks like a good and varied program, including digital research methods (such as GIS, text mining, and network analysis), the construction and use of digital archives, the history of new media and its impact on social movements, scholarly communication, public history and writing for a general audience on the web, and practical concerns (e.g., getting grants for digital work).

Hope to see some of you there, and to interact with the rest of you about the meeting via other means. (Speaking of which, I hereby declare the hashtag to be #aha13. I know we care about exact dates, fellow historians, but we really don’t need that “20” in our hashtags.)

Thursday, January 3

9am-5pm

THATCamp (The Humanities and Technology Camp) AHA

1-3pm

Henry Morton Stanley, New Orleans, and the Contested Origins of an African Explorer: Public History and Teaching Perspectives

3:30-5:30pm

Spatial Narratives of the Holocaust: GIS, Geo-Visualization, and the Possibilities for Digital Humanities

Presidential Panel: H-Net and the Discipline: Changes and Challenges

8-10pm

Plenary Session: The Public Practice of History in and for a Digital Age

Friday, January 4

8:30-10am

Roundtable on Place in Time: What History and Geography Can Teach Each Other

Public History Meets Digital History in Post-Katrina New Orleans

“To See”: Visualizing Humanistic Data and Discovering Historical Patterns in a Digital Age

Viewfinding: A Discussion of Photography, Landscape, and Historical Memory

Scholarly Societies and Networking through H-Net

H-Net in Asia, Latin America, and the Caribbean: Building New Online Audiences

Applying to NEH Grant Programs

10:30am-noon

Self Defense, Civil Rights, and Scholarship: Panels in Honor of Gwendolyn Midlo Hall, Part 1: Gwendolyn Midlo Hall’s Africans in Colonial Louisiana Twenty Years Later

Online Reviewing: Before and After It Was de Rigueur

Gender, Sexuality, and Ethnicity: Household Space and Lived Experience in Colonial and Early National Mexico

The United States and Its Informants: The Cold War and the War on Terror

2:30-4:30pm

Front Lines: Early-Career Scholars Doing Digital History

From the March on Washington to Tahir Square and Beyond: Tactics, Technology, and Social Movements

Are There Costs to “Internationalizing” History?, Part 2: The Domestic Politics of Teaching and Outreach

Saturday, January 5

9-11am

H-Net in Africa: Building New Online Audiences

Scholarly Communications and Copyright

Oral History and Intellectual History in Conversation: Methodological Innovation in Modern South Asia

Research Support Services for History Scholars: A Study of Evolving Research Methods in History

Comparative Reflections on the History Major Capstone Experience: A Roundtable

The Power of Cartography: Remapping the Black Death in the Age of Genomics and GIS

11:30am-1:30pm

Mapping the Past: Historical Geographic Information Science (GIS)

Beyond “Plan B” for Renaissance Studies: A Roundtable

11:30am-2pm – Poster Session 1

Hell Towns, Butternuts, and Spotted Cows: Bringing the History of a Small Town in the Hudson Valley into the Digital Age

2:30-4:30pm

Peer Review, History Journals, and the Future of Scholarly Research

Space, Place, and Time: GIS Technology in Ancient and Medieval European History

Factionalism and Violence across Time and Space: An Exploration of Digital Sources and Methodologies

Connecting Classroom and Community: H-Net Networks and Public History

The Deep History of Africa: New Narrative Approaches

First Steps: Getting Started as a History Professional

Renegotiating Identity: The Process of Democratization in Postauthoritarian Spain and Portugal

2:30-5pm – Poster Session 2

Digital History: Tools and Tricks to Learn the New Trade

Building the Dissertation Digitally

The Global Shipwreck

Picturing a Transnational Pulp Archive

Sunday, January 6

8:30-10:30am

Building a Swiss Army Knife: A Panel on DocTracker, a Multi-Tool for Digital Documentary Editions

11am-1pm

Teaching Digital Methods for History Graduate Students

Public History in the Federal Government: Continuing Trends and New Innovations

Using Oral History for Social Justice Activism

Generous Interfaces for Scholarly Sites

From time to time administrators ask me what I think the home page of a university website should look like. I tell them it should look like the music site The Sixty One, which simply puts a giant photograph of a musician or band in your face, stretched (or shrunk) to the size of your screen:


Menus are contextual, hidden, and modest; the focus is always on the experience of music. It’s very effective. I am not surprised, however, that university administrators have trouble with this design—what about all of those critical menus and submenus for students, faculty, staff, alumni, parents, visitors, news, views…? Of course, the design idea of a site like The Sixty One is to put engagement before information.

Universities have actually moved slightly in this direction in the past year; many of them now have a one-third slice of the screen devoted to rotating photographs: a scientist swirling blue liquid in a beaker, a string quartet bowing, a circle of students laughing on the grass. (Expect a greater rotational frequency for that classic last image, as it is the most effective anti-MOOC advertising imaginable.) But they still have all of those menus and submenus cluttering up the top and bottom, and news items running down the side, competing for attention. Information before engagement. The same is true for most cultural heritage institutions.

In a break from the normal this fall, the Rijksmuseum went all-in for The Sixty One’s philosophy in their site redesign, which fills the screen with a single image (albeit with a few key links tastefully striped across it):

As effective as it is, engagement-before-information can be an off-putting design philosophy for those of us in the scholarly realm. The visual smacks of popularization, as opposed to textually rich, informationally dense designs. Yet we know that engagement can entice us to explore and discover. Home page designs like the Rijksmuseum’s should stimulate further discussion about a more visual mode for scholarly sites.

Take the standard online library catalog. (Please.) Most catalogs show textual search results with plenty of metadata but poor scannability. Full-screen visual browsing—especially using the principle of small multiples, or grids of images—can be very effective as a scholarly research aid, facilitating comparison, discovery, and serendipity.

Oddly enough, one of the first examples I know of this design concept for a research collection comes from the Hard Rock Cafe, which launched a site years ago to display thousands of items from its memorabilia archive on a single screen. You can zoom in if something catches your eye—a guitar or handwritten lyrics.

Mitchell Whitelaw of the University of Canberra has been experimenting with similar ideas on his Visible Archive blog. This interface for the Manly Library uses the National Library of Australia’s Trove API to find and display archival documents in a visual-first way:

The images on the search page are categorized by topic (or date) and rotate gently over time without the researcher having to click through ten-items-to-a-page, text-heavy search results. It’s far easier to happen upon items of interest.
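For those curious about the plumbing, an interface like this sits atop a fairly simple search call. Here is a rough Python sketch of a Trove picture query, following the conventions of Trove’s v2 API as I understand them; you would need your own free API key from the National Library of Australia, and the response structure shown is an assumption to verify against the documentation.

```python
import requests  # pip install requests

TROVE_KEY = "YOUR_TROVE_API_KEY"  # free key from the National Library of Australia

def trove_pictures(query, n=20):
    """Fetch picture records from Trove to feed a visual-first display."""
    resp = requests.get(
        "https://api.trove.nla.gov.au/v2/result",
        params={"key": TROVE_KEY, "zone": "picture", "q": query,
                "n": n, "encoding": "json"},
    )
    resp.raise_for_status()
    zones = resp.json()["response"]["zone"]
    return [work for zone in zones for work in zone["records"].get("work", [])]

for work in trove_pictures("Manly beach"):
    print(work.get("title"))
```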

Whitelaw has given this model a great name—a “generous interface”:

Collection interfaces dominated by search are stingy, or ungenerous: they don’t provide adequate context, and they demand the user make the first move. By contrast, there seems to be a move towards more open, exploratory and generous ways of presenting collections, building on familiar web conventions and extending them.

I can imagine generous interfaces working extremely well for many other university, library, and museum sites.

Update: Mitchell Whitelaw let me know about another good generous interface he has worked on, Trove Mosaic:

And I should have remembered Tim Sherratt’s “Faces” interface for Invisible Australians:

Trevor Owens connects the generous interface to recent commercial services such as Pinterest. (I would add Flickr’s 2012 redesign.) Thinking about how scholarly generous interfaces are like and unlike these popular websites is important.

DPLA Audience & Participation Workshop and Hackfest at the Center for History and New Media

On December 6, 2012, the Digital Public Library of America will have two concurrent and interwoven events at the Roy Rosenzweig Center for History and New Media at George Mason University in Fairfax, VA. The Audience and Participation workstream will be holding a meeting that will be livestreamed, and next door those interested in fleshing out what might be done with the DPLA will hold a hackfest, which follows on a similar, successful event last month in Chattanooga, TN. (Here are some of the apps that were built.)

Anyone who is interested in experimenting with the DPLA—from creating apps that use the library’s metadata to thinking about novel designs to bringing the collection into classrooms—is welcome to attend or participate from afar. The hackfest is not limited to those with programming skills, and we welcome all those with ideas, notions, or the energy to collaborate in envisioning novel uses for the DPLA.

The Center for History and New Media will provide spaces for a group as large as 30 in the main hacking space, with couches, tables, whiteboards, and unlimited coffee. There will also be breakout areas for smaller groups of designers and developers to brainstorm and work. We ask that anyone who would like to attend the hackfest please register in advance via this registration form.

We anticipate that the Audience and Participation workstream and the hackfest will interact throughout the day, which will begin at 10am and conclude at 5pm EST. Breakfast will be provided at 9am, and lunch at midday.

The Center for History and New Media is on the fourth floor of Research Hall on the Fairfax campus of George Mason University. There is parking across the street in the Shenandoah Parking Garage. (Here are directions and a campus map.)

The Digital Public Library of America: Coming Together

I’m just back from the Digital Public Library of America meeting in Chicago, and like many others I found the experience inspirational. Just two years ago a small group convened at the Radcliffe Institute and came up with a one-sentence sketch for this new library:

An open, distributed network of comprehensive online resources that would draw on the nation’s living heritage from libraries, universities, archives and museums in order to educate, inform and empower everyone in the current and future generations.

In a word: ambitious. Just two short years later, out of the efforts of that steering committee, the workstream members (I’m a convening member of the Audience and Participation workstream), over a thousand people who participated in online discussions and at three national meetings, the tireless efforts of the secretariat, and the critical leadership of Maura Marx and John Palfrey, the DPLA has gone from the drawing board to an impending beta launch in April 2013.

As I was tweeting from the Chicago meeting, distant respondents asked what the DPLA is actually going to be. What follows is what I see as some of its key initial elements, though it will undoubtedly grow substantially. (One worry expressed by many in Chicago was that the website launch in April will be seen as the totality of the DPLA, rather than a promising starting point.)

The primary theme in Chicago was the double-entendre subtitle of this post: coming together. It was clear to everyone at the meeting that the project was reaching fruition, garnering essential support from public funders such as the National Endowment for the Humanities and the Institute of Museum and Library Services, and private foundations such as Sloan, Arcadia, and (most recently) Knight. Just as clear was the idea that what distinguishes the DPLA from—and means it will be complementary to—other libraries (online and off) is its potent combination of local and national efforts, and digital and physical footprints.

Ponds->Lakes->Ocean

The foundation of the DPLA will be a huge store of metadata (and potentially thumbnails), culled from hundreds of sources across America. A large part of the initial collection will come from recently freed metadata about books, videos, audio recordings, images, manuscripts, and maps from large institutions like Harvard, provided under the couldn’t-be-more-permissive CC0 license. Wisely, in my estimation (perhaps colored by the fact that I’m a historian), the DPLA has sought out local archival content that has been digitized but is languishing in places that cannot solicit a large audience, and that do not have the know-how to enable modern web services such as APIs.

As I put it on Twitter, one can think of this initial set of materials (beyond the millions of metadata records from universities) as content from local ponds—small libraries, archives, museums, and historic sites—sent through streams to lakes—state digital libraries, which already exist in 40 states (a surprise to many, I suspect)—and then through rivers to the ocean—the DPLA. The DPLA will run a sophisticated technical infrastructure that will support manifold uses of this aggregation of aggregations.
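Much of this kind of aggregation typically happens over OAI-PMH, the venerable harvesting protocol that many state digital libraries already speak. A hypothetical sketch in Python, using the third-party Sickle library and a placeholder endpoint (the DPLA’s actual ingestion pipeline is its own, more elaborate affair):

```python
from sickle import Sickle  # third-party OAI-PMH client: pip install Sickle

# Placeholder endpoint standing in for a state digital library (a "lake").
harvester = Sickle("https://statelibrary.example.org/oai")

# Harvest simple Dublin Core records, the lowest common denominator.
for record in harvester.ListRecords(metadataPrefix="oai_dc"):
    titles = record.metadata.get("title", [])
    print(record.header.identifier, "|", titles[0] if titles else "(untitled)")
```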

Plan Nationally, Scan Locally

Since the Roy Rosenzweig Center for History and New Media has worked with many local archives, museums, and historic sites, especially through our Omeka project (which has been selected as the software to run online exhibits for the DPLA), I was aware of the great cultural heritage materials that are out there in this country. The DPLA is right: much of this incredible content is effectively invisible, failing to reach national and international audiences. The DPLA will bring huge new traffic to local scanning efforts. Funding agencies such as the Institute of Museum and Library Services have already provided the resources to scan numerous items at the local level; as IMLS Director Susan Hildreth pointed out, their grant to the DPLA meant that they could bring that already-scanned content to the world—a multiplier effect.

In Chicago we discussed ways of gathering additional local content. My thought was that local libraries can brand a designated computer workstation with the blue DPLA banner, with a scanner and a nice screen showing the cultural riches of the community in slideshow mode. Directions and help will be available to scan in new documents from personal or community collections.

[My very quick mockup of a public library DPLA workstation; underlying Creative Commons photo by Flickr user JennieB]

Others envisioned “Antiques Roadshow”-type events, and Emily Gore, Director of Content at the DPLA, who coined the great term Scannebagos, spoke of mobile scanning units that could digitize content across the country.

The DPLA is not alone in sensing this great unmet need for public libraries and similar institutions to assist communities in the digital preservation of personal and local history. For instance, Bill LeFurgy, who works at the Library of Congress with the National Digital Information Infrastructure and Preservation Program (NDIIPP), recently wrote:

Cultural heritage organizations have a great opportunity to fulfill their mission through what I loosely refer to as personal digital archiving…Cultural heritage institutions, as preserving entities with a public service orientation, are well-positioned to help people deal with their growing–and fragile–personal digital archives. This is a way for institutions to connect with their communities in a new way, and to thrive.

I couldn’t agree more, and although Bill focused mostly on the born-digital materials that we all have in abundance today, this mission of digital preservation can easily extend back to analog artifacts from our past. As the University of Wisconsin’s Dorothea Salo has put it, let’s turn collection development inside out, from centralized organizations to a distributed model.

When Roy and I wrote Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web, we debated the merits of “preservation through digitization.” While it may be problematic for certain kinds of rare materials, there is no doubt that local and personal collections could use this pathway. Given recent (and likely forthcoming) cuts to local archives, this seems even more meritorious.

The Best of the Digital and the Physical

The core strength, and unique feature, of the DPLA is thus that it will bring together the power and reach of the digital realm with the local community and trust in the thousands of American public libraries, museums, and historical sites—an extremely compelling combination. We are going through a difficult transition from print to digital reading, in which people are buying ebooks they cannot share or pass down to their children. The ephemerality of the digital is likely to become increasingly worrisome in this transition. At the same time people are demanding of their local libraries a greater digital engagement.

Ideally the DPLA can help public libraries and vice versa. With a stable, open DPLA combined with on-the-ground libraries, we can begin to articulate a model that protects and makes accessible our cultural heritage through and beyond the digital transition. For the foreseeable future public libraries will continue to house physical materials—the continued wonders of the codex—as well as provide access to the internet for the still significant minority without such access. And the DPLA can serve as a digital attic and distribution center for those libraries.

The key point, made by DPLA board member Laura DeBonis, is that with this physical footprint in communities the DPLA can do things that Google and other dotcoms cannot. She did not mean this as a criticism of Google Books (a project she was involved with when she worked at Google), which has done impressive work in scanning over 20 million books. But the DPLA has an incredible potential local network it can take advantage of to reach out to millions of people and have them share their history—in general, to democratize access to knowledge.

It is critical to underline this point: the DPLA will be much more than its technical infrastructure. It will succeed or fail not on its web services but on its ability to connect with localities across the United States and have them use—and contribute to—the DPLA.

A Community-Oriented Platform

Having said that, the technical infrastructure is looking solid. But here, too, the Technical Aspects workstream is keeping foremost in their mind community uses. As workstream member David Weinberger has written, we can imagine a future library as a platform, one that serves communities:

In many instances, those communities will be defined geographically, whether it’s a town’s local library or a university community; in some instances, the community will be defined by interest, not by geography. In either case, serving a defined community has two advantages. First, it enables libraries to accomplish the mission they’ve been funded to accomplish. Second, user networks depend upon and assume local knowledge, interests, and norms. While a local library platform should interoperate with the rest of the world’s library platforms, it may do best if it is distinctively local…

Just as each project created by a developer makes it easier for the next developer to create the next app, each interaction by users ought to make the library platform a little smarter, a little wiser, a little more tuned to its users’ interests. Further, the visible presence of neighbors and the availability of their work will not only make the library an ever more essential piece of the locality’s infrastructure, it can make the local community itself more coherent and humane.

Conceiving of the library as a platform not only opens a range of new services and provides for a continuous increase in the library’s value, it also does something libraries urgently need to do: it changes the criteria of success. A library platform should be measured less on the circulation of its works than in the circulation of the ideas and passions these works spark — from how many works are checked out to the community’s engagement with its own grappling with those works. This is not only a metric that libraries-as-platforms can excel at, it is in fact a measure of what has always been the truest value of libraries.

In that sense, by becoming a platform the library can better fulfill the abiding mission it set itself: to be a civic institution essential to democracy.

Nicely put.

New Uses for Local History

It’s not hard to imagine many apps and sites incorporating the DPLA’s aggregation of local historical content. It struck me that an easy first step is incorporation of the DPLA into existing public library apps. Here in Fairfax, Virginia, our county has an app that is fairly rudimentary but quickly becoming popular because it replaces that library card you can never find. (The app also can alert you to available holds and new titles, and search the catalog.)

I fired up the Fairfax Library app on my phone at the Chicago meeting, and although the county doesn’t know it yet, there’s already a slot for the DPLA in the app. That “local” tab at the bottom can sense where you are and direct you to nearby physical collections; through the DPLA API it will be trivial to also show people digitized items from their community or current locale.
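To make that concrete, here is a hypothetical sketch of what such a “local” tab could request from the DPLA API. The spatial field name is drawn from the DPLA metadata profile, though the exact query syntax is my assumption; check it against the API documentation.

```python
import requests  # pip install requests

API_KEY = "YOUR_DPLA_API_KEY"  # free key from the DPLA

def items_near(place_name, page_size=10):
    """Ask the DPLA API for digitized items associated with a place name."""
    resp = requests.get(
        "https://api.dp.la/v2/items",
        params={"sourceResource.spatial.name": place_name,
                "page_size": page_size, "api_key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()["docs"]

# A library app could call this with the patron's geocoded location.
for doc in items_near("Fairfax County"):
    print(doc.get("sourceResource", {}).get("title"), "|", doc.get("isShownAt"))
```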

Granted, Fairfax County is affluent and has a well-capitalized public library system that can afford a smartphone app. But my guess is the app is fairly simple and was probably built from a framework other libraries use (indeed, it may be part of Fairfax County’s ILS vendor package), so DPLA integration could happen with many public libraries in this way. For libraries without such resources, I can imagine local hackfests lending a hand, perhaps working from a base app that can be customized for different public libraries easily.

Long-time readers of this blog can identify dozens of other apps that will be hungry for DPLA content. The idea of marrying geolocation with historical materials has flourished in the last two years, with apps like HistoryPin showing how people can find out about the history around them.

Even Google has gotten into the act of location + history with its recently launched Field Trip app. I suspect countless similar projects will be enhanced by, or based on, the DPLA API.

Moreover, geolocating historical documents is but one way to use the technical infrastructure of the DPLA. As the technical working group has wisely noted, the platform exists for unintended uses as well as obvious ones. To explore the many possibilities, there will next be an “Appfest” at the Chattanooga Public Library on November 8-9, 2012. And I’m planning a DPLA hacking session here at the Roy Rosenzweig Center for History and New Media for December 6, 2012, concurrent with an Audience and Participation workstream meeting. Stay tuned for details.

The Speculative

Only hinted at in Chicago, but worthy of greater thought, is what else we might do with the combination of thousands of public libraries and the DPLA. This area is more speculative, for reasons ranging from legal considerations to the changing nature of reading. The strong fair use arguments that won the day in the Authors Guild v. HathiTrust case (the ruling was handed down the day before DPLA Midwest) may—may—enable new kinds of sharing of digital materials within geofenced areas such as public libraries. (Chicago did not have a report from DPLA’s legal workstream, so we await their understanding of the shifting copyright and fair use landscape in the wake of landmark positive rulings in the HathiTrust and Georgia State cases.)

Perhaps the public library can achieve, in the medium term, some kind of hybrid physical-digital browsability as imagined in this video of a French bookstore from the near future, in which a simple scan of a book using a tablet transfers an e-text to the tablet. The video gets at the ongoing need for in-person reading advice and the superior browsability of physical bookshelves.

I’ve been tracking a number of these speculative exercises, such as the student projects in Harvard Graduate School of Design’s Library Test Kitchen, which experiments with media transformations of libraries. I suspect that bookfuturists will think of other potential physical/digital hybrids.

But we need not get fancy. More obvious benefits abound. The DPLA will be widely used by teachers and students, with scans being placed into syllabi and contextualized by scholars. Judging by the traffic RRCHNM’s educational sites and digital archives get, I expect a huge waiting audience for this. I can also anticipate local groups of readers and historical enthusiasts gathering in person to discuss works from the DPLA.

Momentum, but Much Left to Do

To be sure, many tough challenges still await the DPLA. Largely absent from the discussion in Chicago, with its focus on local history, is the need to see what the digital library can do with books. After all, the majority of circulations from public libraries are popular, in-copyright works, and despite great unique local content, the public may expect the P in DPLA to provide a bit more of what they are used to from their local library. Finding ways to have big publishers share at least some books through the system—or perhaps start with smaller publishers willing to experiment with new models of distribution—will be an important piece of the puzzle.

As I noted at the start, the DPLA now has funding from public and private sources, but it will have to raise much, much more, not easy in these austere times. It needs a staff with the energy to match the ambition of the project, and the chops to execute a large digital project that also has in-person connections in 50 states.

A big challenge, indeed. But who wouldn’t like a public, open, digital library that draws from across the United States “to educate, inform and empower everyone”?