Revisiting Mills Kelly’s “Lying About the Past” 10 Years Later

If timing is everything, history professor Mills Kelly didn’t have such great timing for his infamous course “Lying About the Past.” Taught at George Mason University for the first time in 2008, and then again in 2012—both, notably, election years, although now seemingly from a distant era of democracy—the course stirred enormous controversy and then was never taught again in the face of institutional and external objections. Some of those objections understandably remain, but “Lying About the Past” now seems incredibly prescient and relevant.

Unlike other history courses, “Lying About the Past” did not focus on truths about the past, but on historical hoaxes. As a historian of Eastern Europe, Kelly knew a thing or two about how governments and other organizations can shape public opinion through the careful crafting of false, but quite believable, information. Also a digital historian, Kelly understood how modern tools like Photoshop could give even a college student the ability to create historical fakes, and then to disseminate those fakes widely online.

In 2008, students in the course collaborated on a fabricated pirate, Edward Owens, who supposedly roamed the high (or low) seas of the Chesapeake Bay in the 1870s. (In a bit of genius marketing, they called him “The Last American Pirate.”) In 2012, the class made a previously unknown New York City serial killer materialize out of “recently found” newspaper articles and other documents.

It was less the intellectual focus of the course, which was really about the nature of historical truth and the importance of careful research, than the dissemination of the hoaxes themselves that got Kelly and his classes in trouble. In perhaps an impolitic move, the students ended up adding and modifying articles on Wikipedia, and as YouTube recently discovered, you don’t mess with Wikipedia. Although much of the course was dedicated to the ethics of historical fakes, for many who looked at “Lying About the Past,” the public activities of the students crossed an ethical line.

But as we have learned over the last two years, the mechanisms of dissemination are just as important as the fake information being disseminated. A decade ago, Kelly’s students were exploring what became the dark arts of Russian trolls, seeding their hoaxes on Twitter and Reddit and watching how credulous forums reacted. They learned a great deal about the circulation of information, especially when bits of fake history and forged documents align with political and cultural communities.

Yoni Appelbaum, a fellow historian, assessed the outcome of “Lying About the Past” more generously than the pundits who piled on once news of the course circulated on cable TV:

If there’s a simple lesson in all of this, it’s that hoaxes tend to thrive in communities which exhibit high levels of trust. But on the Internet, where identities are malleable and uncertain, we all might be well advised to err on the side of skepticism.

History unfortunately shows that erring on the side of skepticism has not exactly been a widespread human trait. Indeed, “Lying About the Past” showed the opposite: that those who know just enough history to make plausible, but false, variations in its record, and then know how to push those fakes to the right circles, have the chance to alter history itself.

Maybe it’s a good time to teach some version of “Lying About the Past” again.

Back to the Blog

One of the most-read pieces I’ve written here remains my entreaty “Professors Start Your Blogs,” which is now 12 years old but might as well have been written in the Victorian age. It’s quaint. In 2006, many academics viewed blogs through the lens of LiveJournal and other teen-oriented, oversharing diary sites, and it seemed silly to put more serious words into that space. Of course, as I wrote that blog post encouraging blogging for more grown-up reasons, Facebook and Twitter were ramping up, and all of that teen expression would quickly move to social media.

Then the grown-ups went there, too. It was fun for a while. I met many people through Twitter who became and remain important collaborators and friends. But the salad days of “blog to reflect, tweet to connect” are gone. Long gone. Over the last year, especially, it has seemed much more like “blog to write, tweet to fight.” Moreover, the way that our writing and personal data have been used by social media companies has become more obviously problematic—not that it wasn’t problematic to begin with.

Which is why it’s once again a good time to blog, especially on one’s own domain. I’ve had this little domain of mine for 20 years, and have been writing on it for nearly 15 years. But like so many others, the pace of my blogging has slowed down considerably, from one post a week or more in 2005 to one post a month or less in 2017.

The reasons for this slowdown are many. If I am to cut myself some slack, I’ve taken on increasingly busy professional roles that have given me less time to write at length. I’ve always tried to write substantively on my blog, with posts often going over a thousand words. When I started blogging, I committed to that model of writing here—creating pieces that were more like short essays than informal quick takes.

Unfortunately this high bar made it more attractive to put quick thoughts on Twitter, and amassing a large following there over the last decade (this month marks my ten-year anniversary on Twitter) only made social media more attractive. My story is not uncommon; indeed, it is common, as my RSS reader’s weekly article count will attest.

* * *

There has been a recent movement to “re-decentralize” the web, returning our activities to sites like this one. I am unsurprisingly sympathetic to this as an idealist, and this post is my commitment to renew that ideal. I plan to write more here from now on. However, I’m also a pragmatist, and I feel the re-decentralizers have underestimated what they are up against, which is partially about technology but mostly about human nature.

I’ve already mentioned the relative ease and short amount of time it takes to express oneself on centralized services. People are chronically stretched, and building and maintaining a site, and writing at greater length than one or two sentences, seem like real work. When I started this site, I didn’t have two kids and two dogs and a rather busy administrative job. Overestimating the time regular people have to futz with technology was the downfall of desktop Linux, and it is a key reason many people use Facebook as their main outlet for expression rather than a personal site.

The technology for self-hosting has undoubtedly gotten much better. When I added a blog to dancohen.org, I wrote my own blogging software, which sounds impressive, but was just some hacked-together PHP and a MySQL database. This site now runs smoothly on WordPress, and there are many great services for hosting a WordPress site, like Reclaim Hosting. It’s much easier to set up and maintain these sites, and there are even decent mobile apps from which to post, roughly equivalent to what Twitter and Facebook provide. Platforms like WordPress also come with RSS built in, which is one of the critical, open standards that are at the heart of any successful version of the open web in an age of social media. Alas, at this point most people have invested a great deal in their online presence on closed services, and inertia holds them in place.
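Part of what makes RSS so durable is that it is just structured XML served over the web, readable by any feed reader without anyone’s permission. As a small, hedged illustration (the feed URL below is a placeholder; WordPress sites typically publish their feed at /feed/), here is a sketch in Python, using only the standard library, of pulling a blog’s recent posts:

```python
# Minimal sketch: reading a WordPress RSS feed with only the Python
# standard library. The feed URL is a placeholder; WordPress sites
# typically expose their feed at /feed/, but check your own site.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed/"  # hypothetical blog feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 nests <item> elements under <channel>; each item carries
# a <title>, <link>, and <pubDate> among other fields.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    date = item.findtext("pubDate", default="")
    print(f"{date}  {title}\n    {link}")
```

That a dozen lines of standard-library code can subscribe to any blog on the open web is exactly the kind of openness that the closed platforms do not offer.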

It is psychological gravity, not technical inertia, however, that is the greater force against the open web. Human beings are social animals and centralized social media like Twitter and Facebook provide a powerful sense of ambient humanity—the feeling that “others are here”—that is often missing when one writes on one’s own site. Facebook has a whole team of Ph.D.s in social psychology finding ways to increase that feeling of ambient humanity and thus increase your usage of their service.

When I left Facebook eight years ago, it showed me five photos of my friends, some with their newborn babies, and asked if I was really sure. It is unclear to me if the re-decentralizers are willing to be, or even should be, as ruthless as this. It’s easier to work on interoperable technology than social psychology, and yet it is on the latter battlefield that the war for the open web will likely be won or lost.

* * *

Meanwhile, thinking globally but acting locally is the little bit that we can personally do. Teaching young people how to set up sites and maintain their own identities is one good way to increase and reinforce the open web. And for those of us who are no longer young, writing more under our own banner may model a better way for those who are to come.

The Significance of the Twitter Archive at the Library of Congress

It started with some techies casually joking around, and ended with the President of the United States being its most avid user. In between, it became the site of comedy and protest, several hundred million human users and countless bots, the occasional exchange of ideas and a constant stream of outrage.

All along, the Library of Congress was preserving it all. Billions of tweets, saved over 12 years, now rub shoulders with books, manuscripts, recordings, and film among the Library’s extensive holdings.

On December 31, however, this archiving will end. The day after Christmas, the Library announced that it would no longer save all tweets after that date, but would instead choose tweets to preserve “on a very selective basis,” for major events, elections, and other matters of political import. The rest of Twitter’s giant stream will flow by, untapped and ephemeral.

The Twitter archive may not be the record of our humanity that we wanted, but it’s the record we have. Due to Twitter’s original terms of service and the public availability of most tweets, which stand in contrast to many other social media platforms, such as Facebook and Snapchat, we are unlikely to preserve anything else like it from our digital age.

Undoubtedly many would consider that a good thing, and that the Twitter archive deserves the kind of mockery that flourishes on the platform itself. What can we possibly learn from the unchecked ramblings and ravings of so many, condensed to so few characters?

Yet it’s precisely this offhandedness and enforced brevity that makes the Twitter archive intriguing. Researchers have precious few sources for the plain-spoken language and everyday activities and thought of a large swath of society.

Most archival collecting is indeed done on a very selective basis, with materials assessed for historical significance at the time of preservation. Until the rise of digital documents and communications, the idea of “saving it all” seemed ridiculous, and even now it seems like a poor strategy given limited resources. Archives have always had to make tough choices about what to preserve and what to discard.

However, it is also true that we cannot always anticipate what future historians will want to see and read from our era. Much of what is now studied from the past are materials that somehow, fortunately, escaped the trash bin. Cookbooks give us a sense of what our ancestors ate and celebrated. Pamphlets and more recently zines document ideas and cultures outside the mainstream.

Historians have also used records in unanticipated ways. Researchers have come to realize that the Proceedings of the Old Bailey, transcriptions from London’s central criminal court, are the only record we have of the spoken words of many people who lived centuries ago but were not in the educated or elite classes. That we have them talking about the theft of a pig rather than the thought of Aristotle only gives us greater insight into the lived experience of their time.

The Twitter archive will have similar uses for researchers of the future, especially given its tremendous scale and the unique properties of the platform behind the short messages we see on it. Preserved with each tweet, but hidden from view, is additional information about tweeters and their followers. Using sophisticated computational methods, it is possible to visualize large-scale connections within the mass of users that will provide a good sense of our social interactions, communities, and divisions.
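To make that concrete, here is a hedged sketch of the kind of analysis a future researcher might run. It reads a hypothetical file of archived tweets (assuming the classic one-JSON-object-per-line layout of the old v1.1 Twitter API, with its user and entities fields), builds a directed mention network with the third-party networkx library, and surfaces the most central accounts. It illustrates the general approach, not the Library’s actual tooling.

```python
# Sketch: building a mention network from archived tweets, assuming
# one JSON object per line in the classic Twitter v1.1 format.
# The file path is hypothetical; networkx is a third-party library.
import json
import networkx as nx

graph = nx.DiGraph()

with open("tweet_archive.jsonl", encoding="utf-8") as f:
    for line in f:
        tweet = json.loads(line)
        author = tweet["user"]["screen_name"]
        # Each mention becomes a directed edge: author -> mentioned user.
        for mention in tweet.get("entities", {}).get("user_mentions", []):
            graph.add_edge(author, mention["screen_name"])

# A crude measure of who sits at the center of the conversation.
central = sorted(nx.degree_centrality(graph).items(),
                 key=lambda kv: kv[1], reverse=True)[:10]
for name, score in central:
    print(f"@{name}: {score:.3f}")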

Since Twitter launched a year before the release of the iPhone, and flourished along with the smartphone, the archive is also a record of what happened when computers evolved from desktop to laptop to the much more personal embrace of our hands.

Since so many of us now worry about the impact of these devices and social media on our lives and mental health, this story and its lessons may ultimately be depressing. As we are all aware, of course, history and human expression are not always sweetness and light.

We should feel satisfied rather than dismissive that we will have a dozen years of our collective human expression to look back on, the amusing and the ugly, the trivial and, perhaps buried deep within the archive, the profound.

Institutionalizing Digital Scholarship (or Anything Else New in a Large Organization)

I recently gave a talk at Brown University on “Institutionalizing Digital Scholarship,” and upon reflection it struck me that the lessons I tried to convey were more generally applicable. Everyone prefers to talk about innovation, rather than institutionalization, but the former can only have a long-term impact if the latter occurs. What at first seems like a dreary administrative matter is actually at the heart of real and lasting change.

New ideas and methods are notoriously difficult to integrate into large organizations. Institutions and the practitioners within them, outside of and within academia (perhaps especially within academia?), too frequently claim to be open-minded but often exhibit closed-mindedness when the new impinges upon their area of work or expertise. One need only look at the reaction to digital humanities and digital scholarship over the last two decades, and the antagonism and disciplinary policing they are still subject to, often from adjacent scholars.

In my talk I drew on the experience of directing the Roy Rosenzweig Center for History and New Media at George Mason University, the Digital Public Library of America, and now the Northeastern University library. The long history of RRCHNM is especially helpful as a case study, since it faced multiple headwinds, and yet thrived, in large part due to the compelling vision of its founder and the careful pursuit of opportunities related to that vision by scores of people over many years.

If you wish to digest the entire subject, please watch my full presentation. But for those short on time, here are the three critical elements of institutionalization I concluded with. If all three of these challenging processes occur, you will know that you have successfully and fully integrated something new into an organization.

Routinizing

At first, new fields and methods are pursued haphazardly, as practitioners try to understand what they are doing and how to do it. In digital scholarship, this meant a lot of experimentation. In the 1990s and early 2000s, digital projects that advanced scholarly theories eclectically tried out new technologies. Websites were often hand-coded and distinctive. But in the long run, such one-off, innovative projects were unsustainable. The new scholarly activity had to be routinized into a common, recognizable grammar and standardized formats and infrastructure, both for audiences to grasp genres and for projects to be technically sustainable over time.

At RRCHNM, this meant that after we realized we were making the same kind of digital historical project over and over, by hand, we created generalized software, Omeka, through which we could host an infinite number of similar projects. Although it reduced flexibility somewhat, Omeka made new digital projects much easier to launch and sustain. Now there are hundreds of institutions that use the software and countless history (and non-history) projects that rely on it.

Normalizing

To become institutionalized, new activities cannot remain on the fringes. They have to become normalized, part of the ordinary set of approaches within a domain. Practitioners shouldn’t even think twice before engaging in them. Even those outside of the discipline have to recognize the validity of the new idea or method; indeed, it should become unremarkable. (Fellow historians of science will catch a reference here to Thomas Kuhn’s “normal science.”) In academia, the path to normalization often—alas, too often—expresses itself primarily around concerns over tenure. But the anxiety is broader than that and relates to how new ideas and methods receive equal recognition (broadly construed) and especially the right support structures in places like the library and information technology unit.

Depersonalizing

The story of anything new often begins with one or a small number of people, like Roy Rosenzweig, who advanced a craft without caring about the routine and the normal. In the long run, however, for new ideas and methods to last, they have to find a way to exist beyond the founders, and beyond those who follow the founders. RRCHNM has now had three directors and hundreds of staffers, but similar centers have struggled or ceased to exist after the departure of their founders. This is perhaps the toughest, and final, aspect of institutionalization. It’s hard to lose someone like Roy. On the other hand, it’s another sign of his strong vision that the center he created was able to carry on and strengthen, now over a decade after he passed away.

Humility and Perspective-Taking: A Review of Alan Jacobs’s How to Think

In Alan Jacobs’s important new book How to Think: A Survival Guide for a World at Odds, he locates thought within our social context and all of the complexities that situation involves: our desire to fit into our current group or an aspirational in-group, our repulsion from other groups, our use of a communal (but often invisibly problematic) shorthand language, our necessarily limited interactions and sensory inputs. With reference to recent works in psychology, he also lays bare our strong inclination to bias and confusion.

However, Jacobs is not by trade a social scientist, and having obsessed about many of the same works as him (Daniel Kahneman’s Thinking, Fast and Slow looms large for both of us), it’s a relief to see a humanist address the infirmity of the mind, with many more examples from literature, philosophy, and religion, and with a plainspoken synthesis of academic research, popular culture, and politics.

How to Think is much more fun than a book with that title has the right to be. Having written myself about the Victorian crisis of faith, I am deeply envious of Jacobs’s ability to follow a story about John Stuart Mill’s depression with one about Wilt Chamberlain’s manic sex life. You will enjoy the read.

But the approachability of this book masks only slightly the serious burden it places on its readers. This is a book that seeks to put us into uncomfortable positions. In fact, it asks us to assume a position from which we might change our positions. Because individual thinking is inextricably related to social groups, this can lead to exceedingly unpleasant outcomes, including the loss of friends or being ostracized from a community. Taking on such risk is very difficult for human beings, the most social of animals. In our age of Twitter, the risk is compounded by our greater number of human interactions, interactions that are exposed online for others to gaze upon and judge.

So what Jacobs asks of us is not at all easy. (Some of the best passages in How to Think are of Jacobs struggling with his own predisposition to fire off hot takes.) It can also seem like an absurd and unwise approach when the other side shows no willingness to put themselves in your shoes. Our current levels of polarization push against much in this book, and the structure and incentives of social media are clearly not helping.

Like any challenge that is hard and risky, overcoming it requires a concerted effort over time. Simple mental tricks will not do. Jacobs thus advocates for, in two alliterative phrases that came to mind while reading his book, habits of humility and practices of perspective-taking. To be part of a healthy social fabric—and to add threads to that fabric rather than rend it—one must constantly remind oneself of the predisposition to error, and one must repeatedly try to pause and consider, if only briefly, the source of other views you are repulsed by. (An alternative title for this book could have been How to Listen.)

Jacobs anticipates some obvious objections. He understands that facile calls for “civility,” which some may incorrectly interpret as Jacobs’s project, are often just repression in disguise. Jacobs also notes that you can still hold strong views, or agree with your group much of the time, in his framing. It’s just that you need to have a modicum of flexibility and an ability to see past oneself and one’s group. Disagreements can then be worked out procedurally rather than through demonization.

Indeed, those who accept Jacobs’s call may not actually change their minds that often. What they will have achieved instead, in Jacobs’s most memorable phrase, is “a like-hearted, rather than like-minded,” state that allows them to be more neighborly with those around them and beyond their group. Enlarging the all-too-small circle of such like-hearted people is ultimately what How to Think seeks.

Roy’s World

In one of his characteristically humorous and self-effacing autobiographical stories, Roy Rosenzweig recounted the uneasy feeling he had when he was working on an interactive CD-ROM about American history in the 1990s. The medium was brand new, and to many in academia, superficial and cartoonish compared to a serious scholarly monograph.

Roy worried about how his colleagues and others in the profession would view the shiny disc on the social history of the U.S., and his role in creating it. After a hard day at work on this earliest of digital histories, he went to the gym, and above his treadmill was a television tuned to Entertainment Tonight. Mary Hart was interviewing Fabio, fresh off the great success of his “I Can’t Believe It’s Not Butter” ad campaign. “What’s next for Fabio?” Hart asked him. He replied: “Well, Mary, I’m working on an interactive CD-ROM.”

Roy Rosenzweig

Ten years ago today Roy Rosenzweig passed away. Somehow it has now been longer since he died than the period of time I was fortunate enough to know him. It feels like the opposite, given the way the mind sustains so powerfully the memory of those who have had a big impact on you.

The field that Roy founded, digital history, has also aged. So many more historians now use digital media and technology to advance their discipline that it no longer seems new or odd like an interactive CD-ROM.

But what hasn’t changed is Roy’s more profound vision for digital history. If anything, more than ever we live in Roy’s imagined world. Roy’s passion for open access to historical documents has come to fruition in countless online archives and the Digital Public Library of America. His drive to democratize not only access to history but also the historical record itself—especially its inclusion of marginalized voices—can be seen in the recent emphasis on community archive-building. His belief that history should be a broad-based shared enterprise, rather than the province of the ivory tower, can be found in crowdsourcing efforts and tools that allow for widespread community curation, digital preservation, and self-documentation.

It still hurts that Roy is no longer with us. Thankfully his mission and ideas and sensibilities are as vibrant as ever.

Introducing the What’s New Podcast

My new podcast, What’s New, has launched, and I’m truly excited about the opportunity to explore new ideas and discoveries on the show. What’s New will cover a wide range of topics across the humanities, social sciences, natural sciences, and technology, and it is intended for anyone who wants to learn new things. I hope that you’ll subscribe today on iTunes, Google Play, or SoundCloud.

I hugely enjoyed doing the Digital Campus podcast that ran from 2007-2015, and so I’m thrilled to return to this medium. Unlike Digital Campus, which took the format of a roundtable with several colleagues from George Mason University, on What’s New I’ll be speaking largely one-on-one with experts, at Northeastern University and well beyond, to understand how their research is changing our understanding of the world, and might improve the human condition. In a half-hour podcast you’ll come away with a better sense of cutting-edge scientific and medical discoveries, the latest in public policy and social movements, and the newest insights of literature and history.

I know that the world seems impossibly complex and troubling right now, but one of the themes of What’s New is that while we’re all paying closer attention to the loud drumbeat of social media, there are people in many disciplines making quieter advances, innovations, and creative works that may enlighten and help us in the near future. So if you’re looking for a podcast with a little bit of optimism to go along with the frank discussion of the difficulties we undoubtedly face, What’s New is for you.

Age of Asymmetries

Cory Doctorow’s 2008 novel Little Brother traces the fight between hacker teens and an overactive surveillance state emboldened by a terrorist attack in San Francisco. The novel details in great depth the digital tools of the hackers, especially the asymmetry of contemporary cryptography. Simply put, today’s encryption is based on mathematical functions that are really easy in one direction—multiplying two prime numbers to get a large number—and really hard in the opposite direction—figuring out the two prime numbers that were multiplied together to get that large number.
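You can feel that asymmetry with a toy example. The sketch below, in plain Python with deliberately small primes (real cryptographic keys use primes hundreds of digits long), multiplies two primes essentially instantly and then recovers them by naive trial division, the slow direction that only gets catastrophically slower as the numbers grow:

```python
# Toy illustration of the one-way asymmetry behind much of modern
# cryptography: multiplying is cheap, factoring is expensive.
# These primes are tiny compared to real keys (hundreds of digits).
import time

p, q = 999_983, 1_000_003  # the primes straddling one million

start = time.perf_counter()
n = p * q  # the "easy" direction
print(f"multiplied in {time.perf_counter() - start:.6f} s -> {n}")

start = time.perf_counter()
# The "hard" direction: naive trial division to recover a factor.
factor = next(d for d in range(2, int(n ** 0.5) + 1) if n % d == 0)
print(f"factored in  {time.perf_counter() - start:.6f} s -> "
      f"{factor} x {n // factor}")
```

Even at this scale the factoring step is measurably slower than the multiplication; add a few dozen digits to each prime and brute force becomes hopeless, which is the whole point.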

Doctorow’s speculative future also contains asymmetries that are more familiar to us. Terrorist attacks are, alas, all too easy to perpetrate and hard to prevent. On the internet, it is easy to be loud and to troll and to disseminate hate, and hard to counteract those forces and to more quietly forge bonds.

The mathematics of cryptography are immutable. There will always be an asymmetry between that which is easy and that which is hard. It is how we address the addressable asymmetries of our age, how we rebalance the unbalanced, that will determine what our future actually looks like.

Irrationality and Human-Computer Interaction

When the New York Times let it be known that their election-night meter—that dial displaying the real-time odds of a Democratic or Republican win—would return for Georgia’s 6th congressional district runoff after its notorious November 2016 debut, you could almost hear a million stiff drinks being poured. Enabled by the live streaming of precinct-by-precinct election data, the dial twitches left and right, pauses, and then spasms into another movement. It’s a jittery addition to our news landscape and the source of countless nightmares, at least for Democrats.

We want to look away, and yet we stare at the meter for hours, hoping, praying. So much so that, perhaps late at night, we might even believe that our intensity and our increasingly firm grip on our iPhones might affect the outcome, ever so slightly.

Which is silly, right?

*          *          *

Thirty years ago I opened a bluish-gray metal door and entered a strange laboratory that no longer exists. Inside was a tattered fabric couch, which faced what can only be described as the biggest pachinko machine you’ve ever seen, as large as a giant flat-screen TV. Behind a transparent Plexiglas front was an array of wooden pegs. At the top were hundreds of black rubber balls, held back by a central gate. At the bottom were vertical slots.

A young guy—like me, a college student—sat on the couch in a sweatshirt and jeans. He was staring intently at the machine. So intently that I just froze, not wanting to get in the way of his staring contest with the giant pinball machine.

He leaned in. Then the balls started releasing from the top at a measured pace and they chaotically bounced around and down the wall, hitting peg after peg until they dropped into one of the columns at the bottom. A few minutes later, those hundreds of rubber balls had formed a perfectly symmetrical bell curve in the columns.

The guy punched the couch and looked dispirited.

I unfroze and asked him the only phrase I could summon: “Uh, what’s going on?”

“I was trying to get the balls to shift to the left.”

“With what?”

“With my mind.”

*          *          *

This was my first encounter with the Princeton Engineering Anomalies Research program, or PEAR. PEAR’s stated mission was to pursue an “experimental agenda of studying the interaction of human consciousness with sensitive physical devices, systems, and processes,” but that prosaic academic verbiage cloaked a far cooler description: PEAR was on the hunt for the Force.

This was clearly bananas, and also totally enthralling for a nerdy kid who grew up on Star Wars. I needed to know more. Fortunately that opportunity presented itself through a new course at the university: “Human-Computer Interaction.” I’m not sure I fully understood what it was about before I signed up for it.

The course was team-taught by prominent faculty in computer science, psychology, and engineering. One of the professors was George Miller, a founder of cognitive psychology, who was the first to note that the human mind was only capable of storing seven-digit numbers (plus or minus two digits). And it included engineering professor Robert Jahn, who had founded PEAR and had rather different notions of our mental capacity.

*          *          *

One of the perks of being a student in Human-Computer Interaction was that you were not only welcome to stop by the PEAR lab, but you could also engage in the experiments yourself. You would just sign up for a slot and head to the basement of the engineering quad, where you would eventually find the bluish-gray door.

By the late 1980s, PEAR had naturally started to focus on whether our minds could alter the behavior of a specific, increasingly ubiquitous machine in our lives: the computer. Jahn and PEAR’s co-founder, Brenda Dunne, set up several rooms with computers and shoebox-sized machines with computer chips in them that generated random numbers on old-school red LED screens. Out of the box snaked a cord with a button at the end.

You would book your room, take a seat, turn on the random-number generator, and flip on the PC sitting next to it. Once the PC booted up, you would type in a code—as part of the study, no proper names were used—to log each experiment. Then the shoebox would start showing numbers ranging from 0.00 to 2.00 so quickly that the red LED became a blur. You would click on the button to stop the digits, and then that number was recorded by the computer.

The goal was to try to stop the rapidly rotating numbers on a number over 1.00, to push the average up as far as possible. Over dozens of turns the computer’s monitor showed how far that average diverged from 1.00.

That’s a clinical description of the experiment. In practice, it was a half-hour of tense facial expressions and sweating, a strange feeling of brow-beating a shoebox with an LED, and some cursing when you got several sub-1.00 numbers in a row. It was human-computer interaction at its most emotional.

Jahn and Dunne kept the master log of the codes and the graphs. There were rumors that some of the codes—some of the people those codes represented—had discernable, repeatable effects on the random numbers. Over many experiments, they were able to make the average rise, ever so slightly but enough to be statistically significant.

In other words, there were Jedis in our midst.

Unfortunately, over several experiments—and a sore thumb from clicking on the button with increasing pressure and frustration—I had no luck affecting the random numbers. I stared at the graph without blinking, hoping to shift the trend line upwards with each additional stop. But I ended up right in the middle, as if I had flipped a coin a thousand times and gotten 500 heads and 500 tails. Average.
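For the statistically minded, a rough simulation shows why “average” is precisely what chance predicts. The sketch below models a session the way I have described it (each button press as a uniform draw between 0.00 and 2.00; the number of presses is my assumption, and this is not PEAR’s actual protocol or analysis) and computes how far a session mean would have to drift from 1.00 before it looked like anything other than luck:

```python
# Hedged simulation of a PEAR-style session: N button presses, each
# stopping a stream of numbers uniformly distributed on [0.00, 2.00].
# This mirrors the description above, not the lab's actual protocol.
import math
import random
import statistics

random.seed(42)
N = 200  # presses in one session (an assumption)
stops = [random.uniform(0.0, 2.0) for _ in range(N)]

mean = statistics.mean(stops)
# Standard error of the mean: a uniform(0, 2) draw has sd = 2/sqrt(12).
stderr = (2 / math.sqrt(12)) / math.sqrt(N)
z = (mean - 1.0) / stderr

print(f"session mean = {mean:.4f}, z-score vs. chance = {z:+.2f}")
# |z| needs to be roughly 2 or more before the divergence from 1.00
# looks like anything other than ordinary luck.
```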

*          *          *

Jahn and Dunne unsurprisingly faced sustained criticism and even some heckling, on campus and beyond. When PEAR closed in 2007, all the post-mortems dutifully mentioned the editor of a journal who said he could accept a paper from the lab “if you can telepathically communicate it to me.” It’s a good line, and it’s tempting to make even more fun of PEAR these many years later.

The same year that PEAR closed its doors, the iPhone was released, and with it a new way of holding and touching and communicating with a computer. We now stare intently at these devices for hours a day, and much of that interaction is—let’s admit it—not entirely rational.

We see those three gray dots in a speech bubble and deeply yearn for a good response. We open the stocks app and, in the few seconds it takes to update, pray for green rather than red numbers. We go to the New York Times on election eve and see that meter showing live results, and more than anything we want to shift it to the left with our minds.

When asked by what mechanism the mind might be able to affect a computer, Jahn and Dunne hypothesized that perhaps there was something like an invisible Venn diagram, whereby the ghost in the machine and the ghost in ourselves overlapped ever so slightly. A permeability between silicon and carbon. An occult interface through which we could ever so slightly change the processes of the machine itself and what it displays to us seconds later.

A silly hypothesis, perhaps. But we often act like it is all too real.

What’s the Matter with Ebooks: An Update

In an earlier post I speculated about the plateau in ebook adoption. According to recent statistics from publishers we are now actually seeing a decline in ebook sales after a period of growth (and then the leveling off that I discussed before). Here’s my guess about what’s going on—an educated guess, supported by what I’m hearing from my sources and network.

First, re-read my original post. I believe it captured a significant part of the story. A reminder: when we hear about ebook sales, we mostly hear about sales from large publishers, and I have no doubt that ebooks are a troubled part of their sales portfolios. But there are many more ebooks than those reported by the publishers that release their stats, and many more ways to acquire them, so there’s a good chance that considerable “dark reading” (as I called it) accounts for the disconnect between surveys saying that e-reading is growing and sales (again, from the publishers that reveal these stats) that are declining.

The big story I now perceive is a bifurcation of the market between what used to be called high and low culture. For genre fiction (think sexy vampires) and other categories with a lot of self-publishing, readers seem to be moving to cheap (often 99-cent) ebooks from Amazon’s large and growing self-publishing program. Amazon doesn’t release its ebook sales stats, but we know that it already has 65% of the ebook market and, through its self-publishing program, may reach a disturbing 90% in a few years. Meanwhile, middle- and high-brow books for the most part remain at traditional publishers, where advances still grease the wheels of commerce (and writing).

Other changes I didn’t discuss in my last post are also affecting ebook adoption. Audiobook sales rose by an astonishing 40% over the last year, a notable story that likely cuts into ebook growth—for the vast majority of smartphone owners, audiobooks and ebooks are substitutes (see also the growth in podcasts). In addition, ebooks have gotten more expensive in the past few years, while print (especially paperback) prices have become more competitive; for many consumers, a simple Econ 101 assessment of pricing accounts for the ebook stall.

I also failed to account in my earlier post for the growing buy-local movement, which has affected many areas of consumption (see vinyl LPs and farm-to-table restaurants) and is, in part, responsible for the turnaround in bookstores—once dying, now revived—an encouraging trend pointed out to me by Oren Teicher, the head of the American Booksellers Association. These bookstores were clobbered by Amazon and large chains late last decade, but they have recovered as the buy-local movement has strengthened and (more behind the scenes, but just as important) as they have adopted technology, especially rapid shipping mechanisms, that makes them more competitive.

Personally, I continue to read both in print and digitally, from my great local public library and from bookstores, so I’ll end with an anecdotal observation: there’s still a lot of friction in getting an ebook versus a print book, even though one would think it would be the other way around. Libraries still get poor licensing terms from publishers, which treat digital books like physical ones that can be loaned to only one person at a time, despite the affordances of ebooks; ebooks are often not much cheaper, if at all, than physical books; and device dependency and software hassles cause other headaches. And as I noted in my earlier post, there’s still no killer e-reading device. The Kindle remains (to me, and I suspect many others) a clunky device with a poor screen, fonts, etc. In my earlier analysis, I probably also underestimated the inertial positive feeling of physical books for most readers—a form of consumption that I myself feel reinforces the benefits of the physical over the digital.

It seems like all of these factors—pricing, friction, audiobooks, localism, and traditional physical advantages—are combining to restrict the ebook market for “respectable” ebooks and to shift them to Amazon for “less respectable” genres. It remains to be seen if this will hold, and I continue to believe that it would be healthy for us to prepare for, and create, a better future with ebooks.
