
Humility and Perspective-Taking: A Review of Alan Jacobs’s How to Think

In his important new book How to Think: A Survival Guide for a World at Odds, Alan Jacobs locates thought within our social context and all of the complexities that situation involves: our desire to fit into our current group or an aspirational in-group, our repulsion from other groups, our use of a communal (but often invisibly problematic) shorthand language, our necessarily limited interactions and sensory inputs. With reference to recent work in psychology, he also lays bare our strong inclination toward bias and confusion.

However, Jacobs is not by trade a social scientist, and having obsessed over many of the same works he draws on (Daniel Kahneman’s Thinking, Fast and Slow looms large for both of us), I find it a relief to see a humanist address the infirmity of the mind, with many more examples from literature, philosophy, and religion, and with a plainspoken synthesis of academic research, popular culture, and politics.

How to Think is much more fun than a book with that title has any right to be. Having written myself about the Victorian crisis of faith, I am deeply envious of Jacobs’s ability to follow a story about John Stuart Mill’s depression with one about Wilt Chamberlain’s manic sex life. You will enjoy the read.

But the approachability of this book masks only slightly the serious burden it places on its readers. This is a book that seeks to put us into uncomfortable positions. In fact, it asks us to assume a position from which we might change our positions. Because individual thinking is inextricably related to social groups, this can lead to exceedingly unpleasant outcomes, including the loss of friends or being ostracized from a community. Taking on such risk is very difficult for human beings, the most social of animals. In our age of Twitter, the risk is compounded by our greater number of human interactions, interactions that are exposed online for others to gaze upon and judge.

So what Jacobs asks of us is not at all easy. (Some of the best passages in How to Think show Jacobs struggling with his own predisposition to fire off hot takes.) It can also seem like an absurd and unwise approach when the other side shows no willingness to put themselves in your shoes. Our current levels of polarization push against much in this book, and the structure and incentives of social media are clearly not helping.

Like any hard, risky challenge, this one can be overcome only through concerted effort over time. Simple mental tricks will not do. Jacobs thus advocates for, in two alliterative phrases that came to mind while reading his book, habits of humility and practices of perspective-taking. To be part of a healthy social fabric—and to add threads to that fabric rather than rend it—one must constantly remind oneself of the predisposition to error, and one must repeatedly pause to consider, if only briefly, the sources of the views one finds repellent. (An alternative title for this book could have been How to Listen.)

Jacobs anticipates some obvious objections. He understands that facile calls for “civility,” which some may incorrectly interpret as Jacobs’s project, are often just repression in disguise. He also notes that, in his framing, you can still hold strong views, or agree with your group much of the time; you just need a modicum of flexibility and the ability to see past yourself and your group. Disagreements can then be worked out procedurally rather than through demonization.

Indeed, those who accept Jacobs’s call may not actually change their minds that often. What they will have achieved instead is, in Jacobs’s most memorable phrase, a “like-hearted, rather than like-minded,” state that allows them to be more neighborly with those around them and beyond their group. Enlarging the all-too-small circle of such like-hearted people is ultimately what How to Think seeks.


What’s the Matter with Ebooks: An Update

In an earlier post I speculated about the plateau in ebook adoption. According to recent statistics from publishers, we are now actually seeing a decline in ebook sales after a period of growth (and then the leveling off that I discussed before). Here’s my guess about what’s going on—an educated guess, supported by what I’m hearing from my sources and network.

First, re-read my original post; I believe it captured a significant part of the story. A reminder: when we hear about ebook sales, we mostly hear about sales from large publishers, and I have no doubt that ebooks are a troubled part of their sales portfolios. But there are many more ebooks than those reported by the publishers that release their stats, and many more ways to acquire them, so there’s a good chance that considerable “dark reading” (as I called it) accounts for the disconnect between surveys saying that e-reading is growing and sales (again, from the publishers that reveal these stats) that are declining.

The big story I now perceive is a bifurcation of the market between what used to be called high and low culture. For genre fiction (think sexy vampires) and other categories with a lot of self-publishing, readers seem to be moving to cheap (often 99-cent) ebooks from Amazon’s large and growing self-publishing program. Amazon doesn’t release its ebook sales stats, but we know that it already has 65% of the ebook market and, through its self-publishing program, may reach a disturbing 90% in a few years. Meanwhile, middle- and high-brow books for the most part remain at traditional publishers, where advances still grease the wheels of commerce (and writing).

Other changes I didn’t discuss in my last post are also affecting ebook adoption. Audiobook sales rose by an astonishing 40% over the last year, a notable story that likely cuts into ebook growth—for the vast majority of smartphone owners, audiobooks and ebooks are substitutes (see also the growth in podcasts). In addition, ebooks have gotten more expensive in the past few years, while print (especially paperback) prices have become more competitive; for many consumers, a simple Econ 101 assessment of pricing accounts for the ebook stall.

I also failed to account in my earlier post for the growing buy-local movement, which has impacted many areas of consumption—see vinyl LPs and farm-to-table restaurants—and is, in part, responsible for the turnaround in bookstores, once dying, now revived—an encouraging trend pointed out to me by Oren Teicher, the head of the American Booksellers Association. These bookstores were clobbered by Amazon and the large chains late last decade, but they have recovered as the buy-local movement has strengthened and as (more behind the scenes, but just as important) they have adopted technology, especially rapid shipping, that has made them more competitive.

Personally, I continue to read both in print and digitally, from my great local public library and from bookstores, so I’ll end with an anecdotal observation: there’s still a lot of friction in getting an ebook versus a print book, even though one would think it would be the other way around. Libraries still have poor licensing terms from publishers, which treat digital books like physical books that can be loaned to only one person at a time, despite the affordances of ebooks; ebooks are often not much cheaper, if at all, than physical books; and device dependency and software hassles cause other headaches. And as I noted in my earlier post, there’s still no killer e-reading device. The Kindle remains (to me, and I suspect to many others) a clunky device with a poor screen, fonts, etc. In my earlier analysis, I probably also underestimated the inertial positive feeling most readers have for physical books—a feeling I share, a form of consumption that reinforces the benefits of the physical over the digital.

It seems like all of these factors—pricing, friction, audiobooks, localism, and traditional physical advantages—are combining to restrict the ebook market for “respectable” ebooks and to shift them to Amazon for “less respectable” genres. It remains to be seen if this will hold, and I continue to believe that it would be healthy for us to prepare for, and create, a better future with ebooks.


What’s the Matter with Ebooks?

[As you may have noticed, I haven’t posted to this blog for over a year. I’ve been extraordinarily busy with my new job. But I’m going to make a small effort to reinvigorate this space, adding my thoughts on evolving issues that I’d like to explore without those thoughts being improperly attributed to the Digital Public Library of America. This remains my personal blog, and you should consider these my personal views. I will also be continuing to post on DPLA’s blog, as I have done on this topic of ebooks.]

Over the past two years I’ve been tracking ebook adoption, and the statistics are, frankly, perplexing. After Amazon released the Kindle in 2007, there was a rapid growth in ebook sales and readership, and the iPad’s launch three years later only accelerated the trend.

Then something odd happened. By most media accounts, ebook adoption has plateaued at about a third of the overall book market, and this stall has lasted for over a year now. Some are therefore taking it as a Permanent Law of Reading: There will be electronic books, but there will always be more physical books. Long live print!

I read both e- and print books, and I appreciate the arguments about the native advantages of print. I am a digital subscriber to the New York Times, but every Sunday I also get the printed version. The paper feels expansive, luxuriant. And I do read more of it than the daily paper on my iPad, as many articles catch my eye and the flipping of pages requires me to confront pieces that I might not choose to read based on a square inch of blue-tinged screen. (Also, it’s Sunday. I have more time to read.) Even though I read more ebooks than printed ones at this point, it’s hard not to listen to the heart and join the Permanent Law chorus.

But my mind can’t help but disagree with my heart. Yours should too if you run through a simple mental exercise: jump forward 10 or 20 or 50 years, and you will have a hard time arguing that e-reading technology won’t be much better—perhaps even indistinguishable from print—or that adoption won’t be widespread. Even today, studies have shown that libraries that hold training sessions for patrons with iPads and Kindles see the use of ebooks skyrocket—highlighting that the problem is partly that today’s devices and ebook services are hard to use. Availability of titles, pricing (compared to paperback), DRM, and the balkanization of ebook platforms and devices all dampen adoption as well.

But even the editor of the New York Times understands the changes ahead, despite his love for print:

How long will print be around? At a Loyola University gathering in New Orleans last week, the executive editor [of the Times], Dean Baquet, noted that he “has as much of a romance with print as anyone.” But he also admitted, according to a Times-Picayune report, that “no one thinks there will be a lot of print around in 40 years.”

Forty years is a long time, of course—although it is a short time in the history of the book. The big question is when the changeover will occur—next year, in five years, in Baquet’s 2055?

The tea leaves, even now, are hard to read, but I’ve come to believe that part of this cloudiness is because there’s much more dark reading going on than the stats are showing. Like dark matter, dark reading is the consumption of (e)books that somehow isn’t captured by current forms of measurement.

For instance, usually when you hear about the plateauing of ebook sales, you are actually hearing about the sales of ebooks from major publishers in relation to the sales of print books from those same publishers. That’s a crucial qualification, because sales of ebooks from these publishers are just a fraction of overall e-reading. Other accounts, which try to shine light on ebook adoption by looking at markets like Amazon (which accounts for a scary two-thirds of ebook sales), show that a huge and growing percentage of ebooks are being sold by indie publishers or by authors themselves rather than by the big houses, and that a third of them don’t even have ISBNs, the universal ID used to track most books.

The commercial statistics also fail to account for free e-reading, such as from public libraries, which continues to grow apace. The Digital Public Library of America and other sites and apps have millions of open ebooks, which are never chalked up as a sale.

Similarly, while surveys of the young continue to show their devotion to paper, other studies have shown that about half of those under 30 read an ebook in 2013, up from a quarter of Millennials in 2011—and that study is already dated. Indeed, most of the studies that highlight our love of print over digital are several years old (or more) at this point, a period in which large-format, high-resolution smartphones (much better for reading) and new all-you-can-read ebook services, such as Oyster, Scribd, and Kindle Unlimited, have emerged. Nineteen percent of Millennials have already subscribed to one of these services, a number the American Press Institute considers low but that strikes me as remarkably high, and yet another contributing factor to the dark-reading mystery.

I’m a historian, not a futurist, but I suspect that we’re not going to have to wait anywhere near forty years for ebooks to become predominant, and that the “plateau” is in part a mirage. That may cause some hand-wringing among book traditionalists, an emotion that is understandable: books are treasured artifacts of human expression. But in our praise for print we forget the great virtues of digital formats, especially the ease of distribution and greater access for all—if done right.


A Conversation with Data: Prospecting Victorian Words and Ideas

[An open access, pre-print version of a paper by Fred Gibbs and myself for the Autumn 2011 volume of Victorian Studies. For the final version, please see Victorian Studies at Project MUSE.]


Introduction

“Literature is an artificial universe,” author Kathryn Schulz recently declared in the New York Times Book Review, “and the written word, unlike the natural world, can’t be counted on to obey a set of laws” (Schulz). Schulz was criticizing the value of Franco Moretti’s “distant reading,” although her critique seemed more like a broadside against “culturomics,” the aggressively quantitative approach to studying culture (Michel et al.). Culturomics was coined with a nod to the data-intensive field of genomics, which studies complex biological systems using computational models rather than the more analog, descriptive models of a prior era. Schulz is far from alone in worrying about the reductionism that digital methods entail, and her negative view of the attempt to find meaningful patterns in the combined, processed text of millions of books likely predominates in the humanities.

Historians largely share this skepticism toward what many of them view as superficial approaches that focus on word units in the same way that bioinformatics focuses on DNA sequences. Many of our colleagues question the validity of text mining because they have generally found meaning in a much wider variety of cultural artifacts than just text, and, like most literary scholars, consider words themselves to be context-dependent and frequently ambiguous. Although occasionally intrigued by it, most historians have taken issue with Google’s Ngram Viewer, the search company’s tool for scanning literature by n-grams, or word units. Michael O’Malley, for example, laments that “Google ignores morphology: it ignores the meanings of words themselves when it searches…[The] Ngram Viewer reflects this disinterest in meaning. It disambiguates words, takes them entirely out of context and completely ignores their meaning…something that’s offensive to the practice of history, which depends on the meaning of words in historical context.” (O’Malley)

Such heated rhetoric—probably inflamed in the humanities by the overwhelming and largely positive attention that culturomics has received in the scientific and popular press—unfortunately has forged in many scholars’ minds a cleft between our beloved, traditional close reading and untested, computer-enhanced distant reading. But what if we could move seamlessly between traditional and computational methods as demanded by our research interests and the evidence available to us?

In the course of several research projects exploring the use of text mining in history we have come to the conclusion that it is both possible and profitable to move between these supposed methodological poles. Indeed, we have found that the most productive and thorough way to do research, given the recent availability of large archival corpora, is to have a conversation with the data in the same way that we have traditionally conversed with literature—by asking it questions, questioning what the data reflects back, and combining digital results with other evidence acquired through less-technical means.

We provide here several brief examples of this combinatorial approach that uses both textual work and technical tools. Each example shows how the technology can help flesh out prior historiography as well as provide new perspectives that advance historical interpretation. In each experiment we have tried to move beyond the more simplistic methods made available by Google’s Ngram Viewer, which traces the frequency of words in print over time with little context, transparency, or opportunity for interaction.


The Victorian Crisis of Faith Publications

One of our projects, funded by Google, gave us a higher level of access to their millions of scanned books, which we used to revisit Walter E. Houghton’s classic The Victorian Frame of Mind, 1830-1870 (1957). We wanted to know if the themes Houghton identified as emblematic of Victorian thought and culture—based on his close reading of some of the most famous works of literature and thought—held up against Google’s nearly comprehensive collection of over a million Victorian books. We selected keywords from each chapter of Houghton’s study—loaded words like “hope,” “faith,” and “heroism” that he called central to the Victorian mindset and character—and queried them (and their Victorian synonyms, to avoid literalism) against a special data set of titles of nineteenth-century British printed works.

The distinction between the words within the covers of a book and those on the cover is an important and overlooked one. Focusing on titles is one way to pull back from a complete lack of context for words (as is common in the Google Ngram Viewer, which searches full texts and makes no distinction about where words occur), because word choice in a book’s title is far more meaningful than word choice in a common sentence. Books obviously contain thousands of words which, by themselves, are not indicative of a book’s overall theme—or even, as O’Malley rightly points out, indicative of what a researcher is looking for. A title, on the other hand, contains the author’s and publisher’s attempt to summarize and market a book, and is thus of much greater significance (even with the occasional flowery title that defies a literal description of a book’s contents). Our title data set covered the 1,681,161 books that were published in English in the UK in the long nineteenth century, 1789-1914, normalized so that multiple printings in a year did not distort the data. (The public Google Ngram Viewer uses only about half of the printed books Google has scanned, tossing—algorithmically and often improperly—many Victorian works that appear not to be books.)
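
The arithmetic behind the normalization is simple; here is a toy sketch in Python, with invented yearly counts standing in for the non-public Google title data:

```python
# A toy illustration of the normalization described above; the real
# counts come from Google's (non-public) data set of 1,681,161 titles.
def title_frequency(keyword_hits_by_year, titles_by_year):
    """Express yearly keyword-in-title counts as a percentage of all
    books published that year (deduplicated across printings)."""
    return {
        year: 100.0 * keyword_hits_by_year.get(year, 0) / total
        for year, total in titles_by_year.items()
        if total > 0
    }

# Made-up numbers: 120 titles containing "God" out of 10,000 books
# published in 1855 works out to 1.2% of that year's titles.
print(title_frequency({1855: 120}, {1855: 10000}))  # -> {1855: 1.2}
```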

Our queries produced a large set of graphs portraying the changing frequency of thematic words in titles, which were arranged in grids for an initial, human assessment (fig. 1). Rather than accept the graphs as the final word (so to speak), we used this first, prospecting phase to think through issues of validity and significance.


Fig. 1. A grid of search results showing the frequency of a hundred words in the titles of books and their change between 1789 and 1914. Each yearly total is normalized against the total number of books produced that year, and expressed as a percentage of all publications.

Upon closer inspection, many of the graphs represented too few titles to be statistically meaningful (just a handful of books had “skepticism” in the title, for instance), showed no discernible pattern (“doubt” fluctuates wildly and randomly), or, despite an apparently significant trend, were unhelpful because of the shifting meaning of words over time.

However, in this first pass at the data we were especially surprised by the sharp rise and fall of religious words in book titles, and our thoughts naturally turned to the Victorian crisis of faith, a topic Houghton also dwelled on. How did the religiosity and then secularization of nineteenth-century literature parallel that crisis, contribute to it, or reflect it? We looked more closely at book titles involving faith. For instance, books that have the words “God” or “Christian” in the title rise as a percentage of all works between the beginning of the nineteenth century and the middle of the century, and then fall precipitously thereafter. After appearing in a remarkable 1.2% of all book titles in the mid-1850s, “God” is present in just one-third of one percent of all British titles by the First World War (fig. 2). “Christian” titles peak at nearly one out of fifty books in 1841, before dropping to one out of 250 by 1913 (fig. 3). The drop is particularly steep between 1850 and 1880.

Fig. 2. The percentage of books published in each year in English in the UK from 1789-1914 that contain the word “God” in their title.

Fig. 3. The percentage of books published in each year in English in the UK from 1789-1914 that contain the word “Christian” in their title.

These charts are as striking as any portrayal of the crisis of faith that took place in the Victorian era, an important subject for literary scholars and historians alike. Moreover, they complicate the standard account of that crisis. Although there were celebrated cases of intellectuals experiencing religious doubt early in the Victorian age, most scholars believe that a more widespread challenge to religion did not occur until much later in the nineteenth century (Chadwick). Most scientists, for instance, held onto their faith even in the wake of Darwin’s Origin of Species (1859), and the supposed conflict of science and religion has proven largely illusory (Turner). However, our work shows that there was a clear collapse in religious publishing that began around the time of the 1851 Religious Census, a steep drop in divine works as a portion of the entire printed record in Britain that could use further explication. Here, publishing appears to be a leading, rather than a lagging, indicator of Victorian culture. At the very least, rather than looking at the usual canon of books, greater attention by scholars to the overall landscape of publishing is necessary to help guide further inquiries.

More in line with the common view of the crisis of faith is the comparative use of “Jesus” and “Christ.” Whereas the more secular “Jesus” appears at a relatively constant rate in book titles (fig. 4, albeit with some reduction between 1870 and 1890), the frequency of titles with the more religiously charged “Christ” drops by a remarkable three-quarters beginning at mid-century (fig. 5).

Fig. 4. The percentage of books published in each year in English in the UK from 1789-1914 that contain the word “Jesus” in their title.

Fig. 5. The percentage of books published in each year in English in the UK from 1789-1914 that contain the word “Christ” in their title.


Open-ended Investigations

Prospecting a large textual corpus in this way assumes that one already knows the context of one’s queries, at least in part. But text mining can also inform research on more open-ended questions, where the results of queries should be seen as signposts toward further exploration rather than conclusive evidence. As before, we must retain a skeptical eye while taking seriously what is reflected in a broader range of printed matter than we have normally examined, and how it might challenge conventional wisdom.

The power of text mining allows us to synthesize and compare sources that are typically studied in isolation, such as literature and court cases. For example, another text-mining project focused on the archive of Old Bailey trials brought to our attention a sharp increase in the rate of female bigamy in the late nineteenth century, and less harsh penalties for women who strayed. (For more on this project, see http://criminalintent.org.) We naturally became curious about possible parallels with how “marriage” was described in the Victorian age—that is, how, when, and why women felt at liberty to abandon troubled unions. Because one cannot ask Google’s Ngram Viewer for adjectives that describe “marriage” (scholars have to know what they are looking for in advance with this public interface), we directly queried the Google n-gram corpus for statistically significant descriptors in the Victorian age. Reading the result set of bigrams (two-word couplets) with “marriage” as the second word helped us derive a narrower list of telling phrases. For instance, bigrams that rise significantly over the nineteenth century include “clandestine marriage,” “forbidden marriage,” “foreign marriage,” “fruitless marriage,” “hasty marriage,” “irregular marriage,” “loveless marriage,” and “mixed marriage.” Each bigram represents a good opportunity for further research on the characterization of marriage through close reading, since from our narrowed list we can easily generate a list of books the terms appear in, and many of those works are not commonly cited by scholars because they are rare or were written by less famous authors. Comparing literature and court cases in this way, we have found that descriptions of failed marriages in literature rose in parallel with male bigamy trials, and approximately two decades in advance of the increase in female bigamy trials, a phenomenon that could use further analysis through close reading.
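
Our queries ran against a special research corpus, but the same prospecting can be approximated with Google's publicly downloadable n-gram exports. A minimal sketch, assuming the tab-separated layout of the public 2-gram files (ngram, year, match_count, volume_count); the file name is illustrative:

```python
from collections import Counter

# A sketch, not our actual pipeline: tally the words that immediately
# precede "marriage" in a public Google Books 2-gram file. Real files
# may include POS-tagged tokens (e.g. "marriage_NOUN"), which would
# need stripping before the comparison below.
def marriage_modifiers(path, start=1800, end=1900):
    """Sum match counts for '<word> marriage' bigrams per first word."""
    totals = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            ngram, year, match_count, _volumes = line.rstrip("\n").split("\t")
            first, _, second = ngram.partition(" ")
            if second == "marriage" and start <= int(year) <= end:
                totals[first.lower()] += int(match_count)
    return totals

# Usage (file name illustrative):
# marriage_modifiers("googlebooks-eng-all-2gram.tsv").most_common(25)
```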

To be sure, these open-ended investigations can sometimes fall flat because of the shifting meaning of words. For instance, although we are both historians of science and are interested in which disciplines are characterized as “sciences” in the Victorian era (and when), the word “science” retained its traditional sense of “organized knowledge” so late into the nineteenth century as to make our extraction of fields described as a “science”—ranging from political economy (368 occurrences) and human [mind and nature] (272) to medicine (105), astronomy (86), comparative mythology (66), and chemistry (65)—not particularly enlightening. Nevertheless, this prospecting arose naturally from the agnostic searching of a huge number of texts themselves, and thus, under more carefully constructed conditions, could yield some insight into how Victorians conceptualized, or at least expressed, what qualified as scientific.

Word collocation is not the only possibility, either. Another experiment looked at what Victorians thought was sinful, and how those views changed over time. With special data from Google, we were able to isolate and condense the specific contexts around the phrase “sinful to” (50 characters on either side of the phrase and including book titles in which it appears) from tens of thousands of books. This massive query of Victorian books led to a result set of nearly a hundred pages of detailed descriptions of acts and behavior Victorian writers classified as sinful. The process allowed us to scan through many more books than we could through traditional techniques, and without having to rely solely on opaque algorithms to indicate what the contexts are, since we could then look at entire sentences and even refer back to the full text when necessary.
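
Google supplied our snippets, but the windowing technique itself is easy to reproduce over any locally held full texts. A sketch, with the corpus dictionary standing in for real files:

```python
import re

# A sketch of the context-windowing described above, run over local
# full texts rather than Google's special snippet data. `texts` maps
# a book title to its full text.
def sinful_contexts(texts, phrase="sinful to", window=50):
    """Yield (title, snippet) pairs giving `window` characters of
    context on either side of each occurrence of `phrase`."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    for title, text in texts.items():
        for match in pattern.finditer(text):
            start = max(match.start() - window, 0)
            end = min(match.end() + window, len(text))
            yield title, " ".join(text[start:end].split())

# Example with an invented tract:
# list(sinful_contexts({"A Tract": "Is it sinful to dance on the Sabbath?"}))
```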

In other words, we can remain close to the primary sources and actively engage them following computational activity. In our initial read of these thousands of “snippets” of sin (as Google calls them), we were able to trace a shift from biblically freighted terms to more secular language. It seems that the expanding realm of fiction, especially, provided more space for new formulations of sin than did the dominant devotional tracts of the early Victorian age.


Conclusion

Experiments such as these, inchoate as they may be, suggest how basic text mining procedures can complement existing research processes in fields such as literature and history. Although detailed exegeses of single works undoubtedly produce breakthroughs in understanding, combining evidence from multiple sources and multiple methodologies has often yielded the most robust analyses. Far from replacing existing intellectual foundations and research tactics, we see text mining as yet another tool for understanding the history of culture—without pretending to measure it quantitatively—a means complementary to how we already sift historical evidence. The best humanities work will come from synthesizing “data” from different domains; creative scholars will find ways to use text mining in concert with other cultural analytics.

In this context, isolated textual elements such as n-grams aren’t universally unhelpful; examining them can be quite informative if they are used appropriately and with their limitations in mind, especially as preliminary explorations combined with other forms of historical knowledge. It is not the Ngram Viewer or Google searches that are offensive to history, but rather making overblown historical claims from them alone. The most insightful humanities research will likely come not from charting individual words, but from the creative use of longer spans of text, because of the obvious additional context those spans provide. For instance, if you want to look at the history of marriage, charting the word “marriage” itself is far less interesting than seeing whether it co-occurs with words like “loving” or “loveless,” or better yet extracting entire sentences around the term and consulting entire, heretofore unexplored works one finds with this method. This allows for a serendipity of discovery that might not happen otherwise.

Any robust digital research methodology must allow the scholar to move easily between distant and close reading, between the bird’s eye view and the ground level of the texts themselves. Historical trends—or anomalies—might be revealed by data, but they need to be investigated in detail in order to avoid conclusions that rest on superficial evidence. This is also true for more traditional research processes that rely too heavily on just a few anecdotal examples. The hybrid approach we have briefly described here can help scholars discover exactly which books, chapters, or pages to focus on, without relying solely on sophisticated algorithms that might filter out too much. Flexibility is crucial, as there is no monolithic digital methodology that can be applied to all research questions. Rather than disparage the “digital” in historical research as opposed to the spirit of humanistic inquiry, and continue to uphold a false dichotomy between close and distant reading, we prefer the best of both worlds for broader and richer inquiries than are possible using traditional methodologies alone.


Bibliography

Chadwick, Owen. The Victorian Church. New York: Oxford University Press, 1966.

Houghton, Walter Edwards. The Victorian Frame of Mind, 1830-1870. New Haven: Published for Wellesley College by Yale University Press, 1957.

Michel, Jean-Baptiste, et al. “Quantitative Analysis of Culture Using Millions of Digitized Books.” Science 331.6014 (2011): 176-182.

O’Malley, Michael. “Ngrammatic.” The Aporetic, December 21, 2010, http://theaporetic.com/?p=1369.

Schulz, Kathryn. “The Mechanic Muse – What Is Distant Reading?” The New York Times 24 Jun. 2011, BR14.

Turner, Frank M. Between Science and Religion: The Reaction to Scientific Naturalism in Late Victorian England. New Haven: Yale University Press, 1974.


Reading and Believing

Rather than focusing on a new technology or website in our year-end review on the Digital Campus podcast, I chose reading as the big story of 2011. Surely 2011 was the year that digital reading came of age, with iPad and Kindle sales skyrocketing, apps for reading flourishing, and sites for finding high-quality long-form writing proliferating. It was apropos that Alan Jacobs’s wonderful book The Pleasures of Reading in an Age of Distraction was published in 2011.

Indeed, the relationship between reading and distraction was one of the things that caught my eye reading Daniel Kahneman’s essential Thinking, Fast and Slow. Kahneman speaks of two systems in the mind—he eschews “intuition” and “reason” for the more neutral “System 1” and “System 2”—with the first making quick, unconscious assessments and the second engaging in much more studious, and laborious, calculations. Since our minds (like our bodies) are naturally lazy, we prefer to stick with System 1’s judgments as much as possible, unless we are jarred out of them and into the grumpier System 2.

In the fifth chapter of Thinking, Fast and Slow, Kahneman addresses the act of reading, and the impulse—even in what is normally thought of as the most cerebral of human acts—to fall back on System 1, to associate the ease of reading with the truth of what is read:

How do you know that a statement is true? If it is strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel a sense of cognitive ease. The trouble is that there may be other causes for your feeling of ease—including the quality of the font and the appealing rhythm of the prose—and you have no simple way of tracing your feelings to their source.

Thus the context in which writing exists, and other aspects unrelated to its actual content, are critical to the reception that writing receives. In addition to studies on the effects of different fonts on credibility, Kahneman cites experiments that show the importance of the quality of paper (for printed materials), of the contrast between a font and its background, and of the presence of distractions that reduce the cognitive ease of reading. In short, environments that make it easy to read also make it easy to believe what is being read. Perhaps the most unsettling aspect of this mixture of context and content is that it is extremely difficult to separate the two.

So legibility and the absence of distractions are not just design niceties; when a reader chooses to move an article into an app like Instapaper, they are strongly increasing the odds that they will like what they read and agree with it. And since readers often make that relocation at the recommendation of a trusted source, the written work is additionally “framed” as worthy even before the act of reading has begun.

Commercial publishers may not like the use of Instapaper or Readability, which strip the distractions otherwise known as ads from a cluttered website to focus solely on the text at hand, but they are an unalloyed good for writers.


Some Thoughts on the Hacking the Academy Process and Model

I’m delighted that the edited version of Hacking the Academy is now available on the University of Michigan’s DigitalCultureBooks site. Here are some of my quick thoughts on the process of putting the book together. (For more, please read the preface Tom Scheinfeldt and I wrote.)

1) Be careful what you wish for. Although we heavily promoted the submission process for HTA, Tom and I had no idea we would receive over 300 contributions from nearly 200 authors. This put an enormous, unexpected burden on us; it obviously takes a long time to read through that many submissions. Tom and I had to set up a collaborative spreadsheet for assessing the contributions, and it took several months to slog through the mass. We also had to make tough decisions about what kind of work to include, since we were not overly prescriptive about what we were looking for. A large number of well-written, compelling pieces (including many from friends of ours) had to be left out of the volume, unfortunately, because they didn’t quite match our evolving criteria, or didn’t fit with other pieces in the same chapter.

2) Set aside dedicated time and people. Other projects that have crowdsourced volumes, such as Longshot Magazine, have well-defined crunch times for putting everything together, using an expanded staff and a lot of coffee. I think it’s fair to say (and I hope not haughty to say) that Tom and I are incredibly busy people and we had to do the assembly and editing in bits and pieces. I wish we could have gotten it done much sooner to sustain the energy of the initial week. We probably could have included others in the editing process, although I think we have good editorial consistency and smooth transitions because of the more limited control.

3) Get the permissions set from the beginning. One of the delays on the edited volume was making sure we had the rights to all of the materials. HTA has made us appreciate even more the importance of pushing for Creative Commons licenses (especially the simple CC-BY) in academia; many of our contributors are dedicated to open access and had already licensed their materials under a permissive reproduction license, but we had to annoy everyone else (and by “we,” I mean the extraordinarily helpful and capable Shana Kimball at MPublishing). This made the HTA process a little more like a standard publication, where the press has to hound contributors for sign-offs, adding friction along the way.

4) Let the writing dictate the form, not vice versa. I think one of the real breakthroughs that Tom and I had in this process is realizing that we didn’t need to adhere to a standard edited-volume format of same-size chapters. After reading through odd-sized submissions and thinking about form, we came up with an array of “short, medium, long” genres that could fit together on a particular theme. Yes, some of the good longer pieces could stand as more-or-less standard essays, but others could be paired together or set into dialogues. It was liberating to borrow some conventions from, e.g., magazines and the way they handle shorter pieces. In some cases we also got rather aggressive about editing down articles so that they would fit into useful spaces.

5) This is a model that can be repeated. Sure, it’s not ideal for some academic cases, and speed is not necessarily of the essence. But for “state of the field” volumes, vibrant debates about new ideas, and books that would benefit from blended genres, it seems like an improvement upon the staid “you have two years to get me 8,000 words for a chapter” model of the edited book.


Using WordPress as a Book-Writing Platform

I’ve had a few people ask about the writing environment I’m using for The Ivory Tower and the Open Web (introduction posted a couple of days ago). I’m writing the book entirely in WordPress, which really has matured into a terrific authoring platform. Some notes:

1) The addition of the TinyMCE WYSIWYG text-editing tools made WordPress today’s version of the beloved Word 5.1, the lean, mean writing machine that Word used to be before Microsoft bloated it beyond recognition.

2) WordPress 3.2 joined the distraction-free trend mainstreamed by apps like Scrivener and Instapaper, where computer administrative debris (as Edward Tufte once called the layers of eye-catching controls that frame most application windows) fades away. If you go into full-screen mode in the editor everything disappears but your text. WordPress devs even thoughtfully added a zen “Just write” prompt to get you going. Go full-screen in your browser for extra zen.

3) For footnotes, I’m using the excellent WP-Footnotes plugin, which is not only easy to use but (perhaps critically for the future) degrades gracefully into parenthetical embedded citations outside of WordPress.

4) I’m of course using Zotero to insert and format those footnotes, using one of the features that makes Zotero better (IMHO) than other research managers: the ability to drag and drop formatted citations right from the Zotero interface into a textarea in the browser. (WP-Footnotes handles the automatic numbering.)

5) I’ve done a few tweaks to WordPress’s wp-admin CSS to customize the writing environment (there’s an “editorcontainer” that styles the textarea). In particular, I found the default width too wide for comfortable writing or reading. So I resized it to 500 pixels, which is roughly the line width of a standard book.
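
For the curious, the tweak amounts to a rule along these lines (a sketch from memory, assuming WordPress 3.x’s “editorcontainer” markup around the post textarea; adjust the selectors for your version):

```css
/* Narrow the post-editing textarea to roughly a book's line width.
   Assumes #editorcontainer wraps the #content textarea in wp-admin. */
#editorcontainer textarea#content {
    width: 500px;
    margin: 0 auto;   /* keep the narrowed column centered */
}
```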
