Humane Ingenuity 17: All THAT and More

A rather nice letterpress QR code from Northeastern University’s traditional print technology lab, Huskiana Press. (Via Ryan Cordell, who is the founder and proprietor of Huskiana. It’s great to have this on our campus.)


More than THAT

“Less talk, more grok.” That was one of our early mottos at THATCamp, The Humanities and Technology Camp, which started at the Roy Rosenzweig Center for History and New Media at George Mason University in 2008. It was a riff on “Less talk, more rock,” the motto of WAAF, the hard rock station in Worcester, Massachusetts.

And THATCamp did just that: it widely disseminated an understanding of digital media and technology, provided guidance on the ways to apply that tech toward humanistic ends like writing, reading, history, literature, religion, philosophy, libraries, archives, and museums, and provided space and time to dream of new technology that could serve humans and the humanities, to thousands of people in hundreds of camps as the movement spread. (I would semi-joke at the beginning of each THATCamp that it wasn’t an event but a “movement, like the Olympics.”) Not such a bad feat for a modestly funded, decentralized, peer-to-peer initiative.

THATCamp as an organization has decided to wind down this week after a dozen successful years, and they have asked for reflections. My reflection is that THATCamp was, critically, much more than THAT. Yes, there was a lot of technology, and a lot of humanities. But looking back on its genesis and flourishing, I think there were other ingredients that were just as important. In short, THATCamp was animated by a widespread desire to do academic things in a way that wasn’t very academic.

As the cheeky motto implied, THATCamp pushed back against the normal academic conference modes of panels and lectures, of “let me tell you how smart I am” pontificating, of questions that are actually overlong statements. Instead, it tried to create a warmer, helpful environment of humble, accessible peer-to-peer teaching and learning. There was no preaching allowed, no emphasis on your own research or projects.

THATCamp was non-hierarchical. Before the first THATCamp, I had never attended a conference—nor have I been to one since my last THATCamp, alas—that included tenured and non-tenured and non-tenure-track faculty, graduate and undergraduate students, librarians and archivists and museum professionals, software developers and technologists of all kinds, writers and journalists, and even curious people from well beyond academia and the cultural heritage sector—and that truly placed them at the same level when they entered the door. Breakout sessions always included a wide variety of participants, each with something to teach someone else, because, after all, no one knows everything.

Finally, as virtually everyone who has written a retrospective has emphasized, THATCamp was fun. By tossing off the seriousness, the self-seriousness, of standard academic behavior, it freed participants to experiment and even feel a bit dumb as they struggled to learn something new. That, in turn, led to a feeling of invigoration, not enervation. The carefree attitude was key.

Was THATCamp perfect, free of issues? Of course not. Were we naive about the potential of technology and blind to its problems? You bet, especially as social media and big tech expanded in the 2010s. Was it inevitable that digital humanities would revert to the academic mean, to criticism and debates and hierarchical structures? I suppose so.

Nevertheless, something was there, is there: THATCamp was unapologetically engaging and friendly. Perhaps unsurprisingly, I met and am still friends with many people who attended the early THATCamps. I look at photos from over a decade ago, and I see people that to this day I trust for advice and good humor. I see people collaborating to build things together without much ego.

Thankfully, more than a bit of the THATCamp spirit lingers. THATCampers (including many in the early THATCamp photo above) went on to collaboratively build great things in libraries and academic departments, to start small technology companies that helped others rather than cashing in, to write books about topics like generosity, to push museums to release their collections digitally to the public. All that and more.

By cosmic synchronicity, WAAF also went off the air this week. The final song they played was “Black Sabbath,” as the station switched at midnight to a contemporary Christian format. THATCamp was too nice to be that metal, but it can share in the final on-air words from WAAF’s DJ: “Well, we were all part of something special.”

(Cross-posted from my blog.)


While I’m reminiscing: The first day of my freshman year of college, I met Waleed Meleis, who lived across the hall. I came to know him as a brilliant engineer with a humanist heart. After graduation, I didn’t see him for 25 years, until I bumped into him on my first day at Northeastern. He created and runs the Enabling Engineering initiative, a student group that designs and builds devices to empower individuals with physical and cognitive disabilities. This is, of course, fully in the spirit of Humane Ingenuity, and I always look forward to new projects from EE.

I saw Waleed this week and he told me that his students have some great new projects in the pipeline. I’ll be sure to include them here as they develop.


One of the treasures in the special collections of my library is a mysterious late medieval volume of unknown origin called the Dragon Prayer Book. (I like to think of it as our Voynich Manuscript.) A literature professor, some of her students, and some scientists analyzed it over the last year, and here’s a fun short video about what they found:


This week on the What’s New podcast from the Northeastern University Library, I talk with Philip Thai, a historian of China, who has a new book out on the role that tariffs, smuggling, and the black market played in the rise of modern China, and how these economic and social elements continue to influence the views of the Chinese government and public. Tune in.


Humane Ingenuity 16: Imagining New Museums

David Fletcher is a video game artist in London who on the side creates hyper-realistic 3D photogrammetry models of cultural heritage sites and works of art, architecture, and archeology. I particularly like how he captures soon-to-be-obsolete aspects of the city he lives in, and our modern life, like the beautiful cab shelters for hackney carriage drivers:

(Cab Shelter, Russell Square)

Last week, instead of focusing on a building or piece of material culture, David focused on a person, and the results caught me off guard. He captured one of the few remaining mudlarks in London, Alan Murphy. Mudlarks dig through the shores of the Thames to find historical artifacts, an old and now mostly bygone hobby. David took over 200 high-definition photos of Alan, and processed them in Reality Capture (for Alan’s body and the surrounding landscape) and Metashape (for Alan’s head).

The model is so realistic and detailed that you can rotate it and even zoom in on what Alan has found in the mud:

Not sure about you, but I find this simultaneously unsettling—in an uncanny valley sort of way—and also moving—like a Dorothea Lange photograph. And it’s a strange flipside relative of the deep fake—a shallow real.

When they build a Museum of the Anthropocene, this very well may be one of the dioramas.


I have long admired the work and cleverness of George Oates, an interaction designer who cares deeply about libraries, archives, and museums, and has thought about how to further their mission through open web technologies. (She was behind Flickr Commons and Open Library, in addition to her stellar design work at Stamen and Good, Form & Spectacle.) So when George contacted me a few years ago about her latest project, to create a small digital museum device, I was instantly in as a supporter.

That device is now out in the wild: Museum in a Box. Powered by a Raspberry Pi, each MIAB contains a special collection and stories or evocative audio about that collection, which you activate and hear by “booping” RFIDed items on top of it. Here’s a brief video showing how the Box works:

The Smithsonian now has 30 MIABs, and globally you can see what people are booping over at the Museum in a Box Boop Log.

(Side note: really looking forward to OED’s future definition and etymology of “boop,” after the influence of 2010s internet culture, including We Rate Dogs.)
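
For the technically curious, the boop mechanic is a classic RFID-trigger pattern: read a tag ID, look it up, play the matching audio. Here is a minimal sketch of that pattern on a Raspberry Pi; the mfrc522 and pygame libraries and the tag-to-story mapping are my illustrative choices, not MIAB's actual implementation.

```python
# A toy version of the "boop" loop: read an RFID tag, look up its story,
# and play the matching audio clip. Library choices (mfrc522, pygame) and
# the tag-to-file mapping are illustrative assumptions, not MIAB's code.
import time

import pygame
from mfrc522 import SimpleMFRC522

# Hypothetical mapping from RFID tag IDs to audio files about each object.
STORIES = {
    123456789: "stories/roman_coin.mp3",
    987654321: "stories/whale_ear_bone.mp3",
}

def main():
    reader = SimpleMFRC522()
    pygame.mixer.init()
    try:
        while True:
            tag_id, _text = reader.read()  # blocks until an item is booped
            clip = STORIES.get(tag_id)
            if clip:
                pygame.mixer.music.load(clip)
                pygame.mixer.music.play()
                while pygame.mixer.music.get_busy():
                    time.sleep(0.1)
    finally:
        pygame.quit()

if __name__ == "__main__":
    main()
```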


A little follow-up from HI15 on the recovery of independent bookstores: Book historian Paul Hoftijzer spent decades researching the early book industry in Leiden, and Leiden University’s Centre for Digital Scholarship has now converted Hoftijzer’s paper records into data. Peter Verhaar recently gave a presentation based on this data, and two things struck me. First, look at how densely packed the booksellers were in the relatively small city of Leiden in 1700:

Second, Leiden also experienced a painfully familiar boom and bust in bookselling that is clear in the data:

We should enjoy those independent bookstores while we have them.


In HI1 I mentioned “hard OCR” problems as good examples of the potentially beneficial combination of advanced technology and human knowledge and expertise. Tarin Clanuwat and her colleagues at Japan’s ROIS-DS Center for Open Data in the Humanities have recently made significant advances in converting documents written in Kuzushiji, the Japanese handwritten script used for a millennium starting in the 8th century, into machine-readable text. As Tarin notes, this could potentially open up entire new research areas in history and literature, because even among Japanese humanities professors, fewer than 10 percent can read Kuzushiji. Currently most Kuzushiji documents have not been encoded and are not full-text searchable.

The technical paper from Clanuwat et al. is worth reading as well for the holistic approach they took to analyzing each Kuzushiji page.
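
Relatedly, Clanuwat and her collaborators released Kuzushiji-MNIST, a small public dataset of ten cursive hiragana classes that ships with torchvision. As a toy illustration of the character-classification half of the problem (the architecture and hyperparameters below are my own, and a far cry from the full-page model in their paper):

```python
# Minimal Kuzushiji-MNIST classifier: a small CNN over 28x28 grayscale
# glyphs. This is a toy baseline, not the full-page model from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.KMNIST("data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 10),  # ten hiragana classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```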


You may have read “An Algorithm That Grants Freedom, or Takes It Away,” about the software that increasingly guides judges in their criminal sentencing and parole decisions, in the New York Times this weekend. Northeastern University researcher Tina Eliassi-Rad has been working on how to reframe and redesign those kinds of algorithmically determined (and often life-changing) automated processes. (I interviewed her about this on What’s New, episode 18.)

Some of her main points:

  • Only allow the machine to train on highly vetted, conscientiously assembled data sets that are independently verified for bias reduction.
  • User interfaces for algorithmically driven decisions should always show the mathematical confidence or probability levels of each element. (These are often absent.)
  • Show as much context as possible. All numbers must be framed so as to reduce the overly simplistic power of numerical scores. For sentencing, for instance, the interface should show other stories/cases so the current case is situated in a larger, more complex environment, rather than starkly graded against an invisible background of data.
  • Use ranges rather than points (see the sketch after this list).
  • Add narrative wherever possible.
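
To make the "ranges rather than points" suggestion concrete, here is a toy sketch: resample an ensemble of model scores to produce an interval that an interface could display instead of a single number. Everything here, including the scores, is invented for illustration.

```python
# Illustrative only: turn a set of risk "scores" into a bootstrap interval,
# so an interface can show a range plus its uncertainty instead of one
# falsely precise number.
import random
import statistics

def bootstrap_interval(model_scores, n_resamples=2000, alpha=0.10):
    """Percentile interval over resampled mean scores."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(model_scores, k=len(model_scores))
        means.append(statistics.fmean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical scores from an ensemble of models for one case.
scores = [0.42, 0.55, 0.47, 0.61, 0.50, 0.44, 0.58, 0.49]
lo, hi = bootstrap_interval(scores)
print(f"Show the range {lo:.2f}-{hi:.2f}, not the point {statistics.fmean(scores):.2f}")
```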

No, none of this makes the process anywhere near perfect or free from bias. An argument can and should be made that there are domains where AI/ML simply shouldn’t be used. But nota bene: this may simply revert those domains to more traditional forms of human bias. I find this whole topic disquieting and worthy of considerably more thought.

(See also: Leigh Dodds of the Open Data Institute has a related, interesting blog post this week: “Can the regulation of hazardous substances help us think about regulation of AI?”)


This week on What’s New, Joseph Reagle talks about his new book, Hacking Life: Systematized Living and Its Discontents. Joseph crystallized for me the basis for the life hacking movement, from Inbox Zero to Soylent: “Life hackers are the systematized constituency of the creative class.” When work is no longer 9-5 in a large company, but a 24/7 hustle of coding or writing or designing or anything else with little scaffolding, an obsession with productivity is a natural cultural byproduct. Tune in.

Humane Ingenuity 15: Close but Not Quite

The Picture Description Bot, by Elad Alfassa, runs random Wikimedia Commons images through Microsoft’s Computer Vision API, and then posts the best-guess caption that API produces along with the image to the bot’s Tumblr and Twitter feeds. This process was featured in HI3 for archival photographs, although I also included the API’s confidence scores for the caption and associated tags, which is helpful in any overall assessment.
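
If you want to replicate the caption-plus-confidence setup from HI3, the call is a simple REST request. A hedged sketch follows: the v3.2 "describe" endpoint and the response shape are as I recall the Azure API and may have changed, and the endpoint and key are placeholders you would swap in.

```python
# A hedged sketch of the caption-with-confidence call (the v3.2 "describe"
# endpoint and response shape are my recollection of the Azure Computer
# Vision API and may differ in current versions).
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
KEY = "YOUR-KEY"  # placeholder

def describe(image_url):
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/describe",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    for caption in resp.json()["description"]["captions"]:
        # Surface the confidence score alongside the guess, as HI3 did.
        print(f'{caption["text"]}  (confidence: {caption["confidence"]:.2f})')

describe("https://upload.wikimedia.org/wikipedia/commons/example.jpg")
```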

The Picture Description Bot’s close misses are the most revealing and humorous:


Via Jen Serventi, Sheila Brennan, and Brett Bobley of the National Endowment for the Humanities, who posted from the Digging Into Data conference, two interesting search tools:

Dig That Lick lets you play some notes on a virtual keyboard, and then it finds similar melodic patterns in a large database of jazz performances.

I played the first six notes of “Hey Jude,” and it found dozens of instances of that lick embedded in the middle of jazz solos.
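
Systems like this typically normalize melodies into sequences of pitch intervals, so that a lick matches regardless of key or octave. A simplified sketch of that idea (my guess at the general approach, not Dig That Lick's actual algorithm; the "Hey Jude" pitches are approximate):

```python
# Simplified interval matching: two melodies "match" if the sequence of
# semitone steps between notes lines up, regardless of key or octave.
# This is a guess at the general approach, not Dig That Lick's algorithm.

def intervals(midi_notes):
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def find_lick(query_notes, solo_notes):
    """Return start positions in the solo where the query's contour occurs."""
    q, s = intervals(query_notes), intervals(solo_notes)
    return [i for i in range(len(s) - len(q) + 1) if s[i:i + len(q)] == q]

# Approximate opening of "Hey Jude," as MIDI note numbers.
hey_jude = [72, 69, 69, 72, 74, 67]
solo = [60, 64, 65, 60, 57, 57, 60, 62, 55, 59, 60]
print(find_lick(hey_jude, solo))  # [3] -> same lick, transposed down an octave
```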

(Yes, in the Upside Down of contemporary copyright litigation, technology such as this is being used, in the wake of the “Blurred Lines” case, to analyze every new hit song for potential litigation. No, this is not good for pop music.)

ISEBEL, the Intelligent Search Engine for Belief Legends, is a search engine for folktales from northern Europe:

The focus of ISEBEL is mainly on orally transmitted legends: traditional stories about ghosts, hauntings, devils, witches, wizards, spells, werewolves, nightmares, giants, trolls, goblins and the like, as well as stories about hidden treasures, famous robbers, underground passages and sunken castles. Queries can be made in English, or in the languages the stories are in.

The stories are geolocated and visualized on a map. This helped me see that Danes are really into trolls (the hairy mythical kind, not the current online annoyances), and have many local stories about trolls throwing giant rocks around for sport (which of course explains the boulders in the center of some villages).
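
Geolocated corpora like this are also easy to explore on a map of your own. A toy sketch with the folium library and invented story points:

```python
# Toy version of a geolocated-legend map, with invented data points.
import folium

stories = [
    ("Troll throws a boulder at the church", 55.68, 12.57),
    ("Buried treasure under the old oak", 53.22, 6.57),
]

m = folium.Map(location=[55.0, 10.0], zoom_start=5)
for title, lat, lon in stories:
    folium.Marker([lat, lon], popup=title).add_to(m)
m.save("legends_map.html")
```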


Fernando Domínguez Rubio and Glenn Wharton have published a very good article on the future of preserving digital art, “The Work of Art in the Age of Digital Fragility.” The article lays out better than I’ve seen elsewhere the entire preservation process for digital art and the serious problems that museums face. One case study is the Museum of Modern Art’s acquisition of an interactive video artwork called I Want You to Want Me, by Jonathan Harris and Sep Kamvar, which drew on live profile information from dating sites:

Initially, the acquisition of [I Want You to Want Me] followed the standard route museums use to acquire any other artwork. Once the legal paperwork was completed, the museum sent some “preparators”—the museum personnel specialized in moving artworks—to the artists’ studio to collect the custom-made monitor. After the monitor arrived at the museum, it underwent a routine condition assessment to determine whether there was any physical damage in it prior to being sent to its final destination in the museum’s storage facility in Queens.

This time, however, the routine inspection could not be completed. Despite their best efforts, the museum staff could not get the artwork to run on the monitor. No matter how hard they tried, the hard drive attached to the monitor did not produce any data from dating websites. It was only after several attempts that they finally realized that the problem was in fact not technical—since both the monitor and the hard drive were working perfectly—but, alas, ontological. In other words, the problem was not that the museum had acquired a malfunctioning object but that it had acquired a different kind of object. More specifically, the museum had acquired a “distributed object”…

Although Kamvar and Harris wrote the source code for the artwork, the music running in the background was produced by a Canadian band; the data filling the balloons was produced by anonymous users in online dating sites; the touch screen was made by a private company; while the operating system and software on which the artwork runs were produced by different companies.

In addition to these many issues, such preservation raises even larger philosophical questions about today’s art, because most digital art will have to be migrated to new platforms over time, which in a way isn’t preservation, but rather close-but-not-quite replication.

The Faustian bargain that digital offers to the museum: either we let these artworks die, or we keep them alive, but at the cost of embedding artworks in…an environment in which artworks can exist as both copies and originals, regenerated and authentic, past and present.


Speaking of authenticity, although I am not a regular (or even occasional) reader of Harvard Business School case studies, I suspect that HI subscribers might be interested in Ryan L. Raffaelli’s “Reinventing Retail: The Novel Resurgence of Independent Bookstores,” in which he outlines factors in the phoenix-like rise from near-death of local booksellers. I don’t think there’s much surprising in Raffaelli’s analysis, but it’s a good summary of the conventional wisdom, backed by some data and many interviews with successful bookstore owners.

I found this section on human vs. AI recommendations particularly cogent for this newsletter:

Online shopping platforms present consumers with seemingly unlimited inventory. However, research suggests that consumers can become overwhelmed when presented with too many options and seek guidance on how to narrow their choices.

Rather than stocking larger inventories, indie booksellers have mastered the art of “handselling” books that are uniquely tailored to specific tastes of the readers who most frequent their stores. The practice of handselling involves an expert bookseller asking the consumer a series of questions about their recent reading habits, then handing them the “perfect” book (often an unexpected hidden gem not found on popular bestseller lists). To accomplish this task, independent bookstores employ talent who are themselves voracious readers and possess deep knowledge and passion for books. Consequently, booksellers serve the role of matchmaker between a customer and each book on the shelves in the store.

They try to expose readers to up-and-coming authors before anyone else, or steer the reader into genres he or she might not venture into without expert guidance. Booksellers keep an ear to the ground for soon-to-be-bestselling books by monitoring the reading habits of visiting authors, publishers, and their most loyal customers. While artificial intelligence and algorithms are becoming the norm to help retailers anticipate consumer buying behaviors, indie bookstores have been able to counter this trend by offering a unique personal buying experience where the consumer enters into a relationship with a bookseller, often over a series of ongoing conversations about their evolving reading preferences. Artificial intelligence-based algorithms have yet to fully replicate the human experience associated with the art of handselling that successful independent booksellers have mastered.


Humane Ingenuity 14: Adding Dimensions

The Library of Necessary Books. An art installation in Singapore where visitors can leave their favorite books. (Via Seb Chan’s newsletter.)


In HI12 I mentioned Ben Shneiderman’s talk on automation and agency, and he kindly sent me the full draft of the article he is writing on this topic. New to me was the Sheridan-Verplank Scale of Autonomy, which, come on, sounds like something straight out of Blade Runner:

In all seriousness, as Ben notes, scales like these reinforce an unhelpful mindset in which there is a unidirectional spectrum between human and machine agency, and a sense that progress moves from human control to AI running everything.

If you look around you can now see these charts everywhere, and they are dominating many of our conversations about emerging technology. Here, for instance, is the SAE standardized levels for automated vehicles:

Note especially the dark blue line zigzagging on the right side—that’s the gradual transfer of agency from human to machine.

Our goal should be to add dimensions, context, and complexity to these unidimensional scales. The best outcomes will be ones that enable humans to do new and better things with the assistance of—not the replacement by—autonomous machines.


Henrik Spohler takes photographs of the midpoints of global commerce. (Container terminal, Rotterdam Harbor, The Netherlands, 2013. Henrik Spohler, Audiovisual Library of the European Commission, CC BY-NC-ND. Via Europeana’s new exhibition of “Social and Employment Realities” in the contemporary world.)


In HI13, I discussed Ian Milligan’s survey of historians’ research practices, and their near-universal reliance on the smartphone camera. Alexis Madrigal has a good follow-up piece in The Atlantic about this.

In this space I should have expanded on why the practice of mass archival photography might change what we historians write, not just how we do our work; Alexis helpfully captures some of this:

There’s some precedent for how history has been changed by increasing digital accessibility. Wellerstein groups photo-taking in the archives under a broader set of changes that he terms “high volume” research methods. “The practices will change what kind of questions you’ll ask,” he said. Take a highly regarded book, Charles Rosenberg’s The Cholera Years. In it, Rosenberg tracks how three newspapers in New York covered cholera. “He spent years working on that,” Wellerstein said. “You can call up every source he used in that book in one afternoon using ProQuest,” one of several databases of newspapers.

That does not invalidate the book, which Wellerstein described as “great,” but someone working on the same topic now would have the option to expand the field of inquiry. “You might look nationally, internationally, look over a vast amount of time, correlate cholera with something else,” he said. “Would you get better history? I don’t know. You’d get different history though.”

As is often the case, a good starting point for thinking along these lines is Roy Rosenzweig’s now-classic essay, “Scarcity or Abundance? Preserving the Past in a Digital Era,” which forecast a future of either very few primary sources, or so many that we would have difficulty managing it all. (Twenty years later, we’ve ended up with the latter.)

Humane Ingenuity subscriber John Howard, the University Librarian at University College Dublin, responded to HI13 with their setup for better archival smartphone photos (for both staff and visiting researchers), including the ScanTent:

Portable tent + LED lighting + platform for your smartphone. I need one of these.


On the latest What’s New podcast, I talk with Iris Berent about her forthcoming book The Blind Storyteller: How We Reason about Human Nature. Those who liked Daniel Kahneman’s Thinking, Fast and Slow will really enjoy Iris’s book (and our conversation), since it exposes other core elements of thinking, shows how innate many concepts are, and reveals why we have such trouble thinking about our own minds. There are now many fascinating studies of infants that imply that babies know much more than previously believed, and this possibility of considerable innate knowledge can be difficult to accept, since we think of ideas as ethereal and acquired over time rather than physical and DNA-like. Can month-old babies figure out physics or ethics, and if so, how? Tune in.


Laura Ben Hayoun’s photography highlights the presence of gig workers in public spaces. (December 2016. Paris. Bike delivery person. Laura Ben Hayoun, Audiovisual Library of the European Commission, CC BY-NC-ND. Also from Europeana’s “Social and Employment Realities” exhibition.)

Humane Ingenuity 13: The Best of Both Worlds

Happy New Year, and welcome to 2020! My constant reminder of the passage of time is a small lake near where we live, which transforms itself delightfully month by month, season by season.

Several months ago, it was the canonical image of autumn; now, it is a crisp winter scene.

Like several bodies of water in the Boston area, the lake was given a new name in the late nineteenth century to more attractively brand the ice that was commercially harvested from its frozen top in the winter. (For the curious, it went from faith to mammon: Baptism Pond to Crystal Lake.) That ice was put on railroad cars and boats and sent to remote, hotter locations, covered and preserved in the natural insulation of sawdust. Massachusetts ice thus ended up in refreshing tropical drinks. (You can listen to this story on an episode of 99% Invisible.)

So I send a chilled and tasty beverage to all of the global subscribers to Humane Ingenuity. May you have a good 2020, and may the 2020s bring us happier days.


The Best of Both Worlds

Last week on social media I linked to an important survey by Ian Milligan that turned out to be an interesting bit of professional anthropology, and that for the purposes of this newsletter reveals how new technology can enter our lives, change them fairly rapidly without reflection, and then polarize us into antagonistic camps.

Ian surveyed historians in Canada about their research practices in archives and special collections, and discovered that what historians mostly now do is stand over documents taking photographs with their phone. Many, many photographs. The 250 historians he surveyed snapped a quarter-million photographs in their recent archival trips. Almost everyone in his survey has adopted this new rapid-shoot practice, and 40% took over 2,000 photos while doing their research.

What has happened over the last decade is a massive and under-discussed shift: historians now spend less time in the reading rooms of archives and special collections reading; the time they spend there is mostly dedicated to the quick personal digitization of materials that they will examine when they return home.

I noted this without any spin, but of course on social media I was accused of being nostalgic or worse. I also received many messages starkly in favor of the new practice and some starkly against it — with little sentiment in between. Similarly, I heard about many archives where this practice is not allowed (and the hate directed at those institutions), and some others where it is encouraged, and many others where it is tolerated.

For what it’s worth, I actually think that the new practice is neither better nor worse than the old practice, but it is vastly different. My main concern is that we haven’t fully thought through what the change means, or the effect it has on the actors involved — it simply, and perhaps unsurprisingly, just happened with the proliferation of phone cameras, in the same way that we have experienced other rapid technological changes without much consideration. (Thus the common lament of so many end-of-the-2010s pieces about smartphones and digital media and technology upending social conventions in unexpected ways.)

So historians-as-amateur-digitizers is a case study of new technology changing our practices without much forethought about what it might mean — in this case for historical research — or what externalities it might entail. And more importantly, we haven’t thought much about how to mitigate the negative aspects of this practice, or accelerate the benefits it provides.

We should pause to consider:

  • Intellectually, how does the new practice change the history that is written (or no longer written), the topics selected and pursued (or no longer selected and pursued), and our relationship to the documentary evidence and its use to support our theories? What happens when instead of reading a small set of documents, taking notes, thinking about what you’ve found, and then interactively requesting other, related documents over a longer period of time, you first gather all of the documents you think you need and then process them en masse later? 
  • Labor-wise and financially, for the researcher, it means less time away from home, and a lower total cost for travel. That can be a net positive; it might also lead to decreased funding for travel, a downward spiral, as funding agencies get wind of what’s really done at cultural heritage sites. It might very well democratize the practice of history, a net good. For archivists, the practice means more retrievals of boxes and folders in a much shorter period of time, and probably some concerns about the careful handling of primary source materials. Despite some protests I heard online, I do think it is reshaping the interactions between researchers and archival staff, and how each views the other, and probably not in a net-positive way.

I could go on; these points and many others were identified and described well by those looking at Ian’s survey. In short, the work that needs to be done is not just to fully recognize and account for a major shift in historical research practice; it is to figure out how to optimize what’s going on so that history is both democratic and thoughtful, and so that it maintains a healthy and productive relationship between researchers and archivists. In general, we need to do a better job getting ahead of these technology-based shifts, rather than criticizing them or lauding them after the shifts have occurred.

Without being nostalgic, I think the social and personal aspects of longer interactions in archives, between archival staff and researchers and between fellow researchers, can be helpful. And without being futuristic, I think the new photo-and-leave practice has some helpful effects on researcher work-life balance and the ability of those without big research grants to do full-fledged analyses.

But back to the driving theme of this newsletter: What can we do to promote both the advantageous social aspects of the old methods and the advantageous digital aspects of the new methods? Asking that question leads to other useful questions: How can we encourage other types of researcher communities that inevitably surround certain special collections? How can we foster better communications between historians and archivists? How can we improve amateur photos without disrupting the environment of the reading room? How can we share scans more widely, rather than having them reside in personal photo collections? And as someone who oversees a library and archive, are there new services we should provide?


Tools to Make Research Better

The Roy Rosenzweig Center for History and New Media has tried to address these issues since the 1990s, and RRCHNM and the RRCHNM diaspora continue to explore what can be done to create our own tools and methods that keep in mind traditional strengths while using novel techniques. Tropy, the open source tool for storing and curating research photographs, spearheaded by Sean Takats, is one critical piece of this potential future infrastructure. Omeka, led by Sharon Leon, for displaying those collections online, is another.

And adding another piece to the puzzle, last week Tom Scheinfeldt and his colleagues at the University of Connecticut’s Greenhouse Studios launched Sourcery, an app to enable any researcher to request the remote photographing of archival materials. (Sourcery is a great name.) Maybe the pieces are starting to come together.

(Full disclosure: Tom, Sharon, Sean, and I all worked together at RRCHNM, and manage a not-for-profit entity that coordinates these projects. But I link to these projects because you should know about them and they are good, not because I’m biased. Ok, I might be slightly biased, but it is my newsletter.)


Fairer Use

On the first What’s New podcast of 2020, I talk to Jessica Silbey about her forthcoming book Against Progress: Intellectual Property and Fundamental Values in the Internet Age. Jessica powerfully challenges the idea that copyright is still working “to promote the progress of science and useful arts,” as the famous phrase from the U.S. Constitution puts it. Instead, she thinks that the coming decade requires a reassessment of IP law that looks at the broader social impact of copyright. Her notion of “fairer use” — not “fair use,” but a wider concept that takes into account multiple stakeholders, and that will allow for new kinds of artistic and scientific advancement — is worth listening to. Please do tune in.

Humane Ingenuity 12: Automation and Agency

In this issue of HI: dispatches from the frontiers I traversed at the fall meeting of the Coalition for Networked Information.


Automation and Agency

Ben Shneiderman, one of the pioneers in human-computer interaction and user interfaces, gave a fascinating, thought-provoking, and very HI-ish talk on human-centered artificial intelligence. I will likely write something much longer on his presentation, but for now I want to highlight a point that harmonizes with the note on which I started this newsletter: seeking ways to turn the volume up to 11 on both the human and tech amps. 

Ben asked the audience to reconsider the common notion that there’s a one-dimensional tug of war between human control and computer automation. For instance, we see the evolution of cars as being about the gradual transfer of control from humans to computers along a linear spectrum, which will end in fully autonomous vehicles.

This is wrong, Ben explained, and it puts us in the unhelpful mindset of complete opposition between human and artificial intelligence. Instead, we should create tools in the coming decades that involve high levels of automation and high levels of human control. Upon reflection, we can actually imagine a two-dimensional space for technology, where one axis is the level of human control vs. the computer, and another axis is the level of automation:

Ben’s thrust here pushes away from technologies such as self-driving cars without steering wheels, humanoid robots, or algorithms that replace humans and our vocations. Instead, by looking at the upper right corner, he seeks systems that greatly expand our creative potential while maintaining our full agency: as Ben put it, let’s “amplify, augment, enhance, and empower people” through artificial intelligence and automation.

This newsletter has been cataloging examples that fit into that theory, and Ben had many others from a variety of domains and disciplines. An obvious example in widespread use today is the new software-assisted digital camera apps that use machine learning to improve nighttime photos—but allow you to do the composition and choose the moment to click the button.

Again, more on this in future HIs. For now, if you would like to see additional good examples of high automation + high control, from a machine-assisted interface for professional translators to Mercedes-Benz’s new parallel parking system, Ben referenced Jeffrey Heer’s recent article in the Proceedings of the National Academy of Sciences, “Agency plus automation: Designing artificial intelligence into interactive systems.”


Welcome to the Dystopia

From the Black Mirror universe of Inhumane Ingenuity, some seeds for great dystopian science fiction (if they weren’t already true and here):

  • Jason Griffey highlighted that there are already three web apps that use AI-based text generators to create essays for students from their thesis statements, and other AI-based services that suggest relevant articles for references and footnotes. As several people simultaneously chimed in, throw in an AI-based grading tool and we can remove students and teachers completely from the educational system.
  • Cliff Lynch revealed that there are agencies and institutions that are archiving encrypted internet streams and files right now, so that when quantum computing unlocks today’s encryption, they can go back and decrypt all of the older traffic and files. So what you’re doing right now, using encryption, may only be temporarily safe from prying eyes.
  • Cliff also lamented that we are at risk of being unable to preserve an entire generation of cultural production because of the shift to streaming services without physical versions—libraries can’t save Netflix films and shows, for instance, as they are not available on media like DVDs.
  • And the final item in Cliff’s trio of worries: the digitization of archives and special collections, once seen as an unmitigated good, may lead to facial recognition advances and uses (such as surveillance) that we may regret.
  • Kate Eichhorn, author of the book The End of Forgetting: Growing Up with Social Media, noted that her daughter, at age 13, signed up for a LinkedIn account (!), because she heard that LinkedIn was search-engine optimized so that it would appear first in the search results for her name. She didn’t want her other social media accounts, or tags of her from her friends’ social media accounts, coloring the Google results for future admissions officers or employers. In her CNI keynote, Kate said that as a media scholar she didn’t think it was helpful to have a moral panic over how social media is shaping the experience of today’s youth, but in listening to how kids feel anxious and constrained by an omnipresent digital superego, I wondered if, for once, it is justified to have a moral panic over new media and kids: social media does seem qualitatively and quantitatively different than prior subjects of panics over teen consumption, such as comic books, heavy metal, or video games.

Some Happier Case Studies

Starting in 2010, Ákos Szepessy and Miklós Tamási began collecting old, discarded photographs they found on the streets of Budapest. Then they invited the public to submit personal and family photos. Fortepan (named after a twentieth-century Hungarian brand of film) now hosts over 100,000 of these photos from the last century. I always loved seeing this kind of longitudinal social documentation when I was at the Digital Public Library of America.

The spirit spread: The University of Northern Iowa took inspiration from Fortepan and created a similar site for Iowans, with thousands of personal photos stretching back to the American Civil War. Then they did something wonderful to return the digitized photographs to the physical world: UNI took some of the photos and plastered them on the very buildings in which the shots were taken many decades earlier.

A century ago, astronomy photographs used to be taken on large glass plates, and rare events in the night sky, such as novae, might have been captured in ways that would help astronomers today. University of Chicago librarians are now digitizing and extracting astronomical data from hundreds of thousands of these glass plates, and then using computational methods to align them precisely with contemporary scans of the sky. (Yes, there’s an API that will take your photo of the stars and provide the exact celestial coordinates.)
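
Once a digitized plate has been "solved" astrometrically, the mapping from plate pixels to sky positions is a standard WCS transform. A minimal sketch with astropy, using invented header values in place of a real plate solution:

```python
# Minimal sketch: given a WCS solution for a digitized plate (e.g., from a
# plate-solving service), map pixel positions to sky coordinates. The header
# values below are invented for illustration.
from astropy.io import fits
from astropy.wcs import WCS

header = fits.Header()
header["CTYPE1"], header["CTYPE2"] = "RA---TAN", "DEC--TAN"
header["CRVAL1"], header["CRVAL2"] = 180.0, 45.0   # sky coords at ref pixel
header["CRPIX1"], header["CRPIX2"] = 512.0, 512.0  # reference pixel
header["CDELT1"], header["CDELT2"] = -0.0003, 0.0003  # degrees per pixel

wcs = WCS(header)
coord = wcs.pixel_to_world(100.0, 200.0)
print(coord.to_string("hmsdms"))  # RA/Dec of that spot on the plate
```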

This is all terrific, but there is a cautionary (and somewhat amusing) tale lurking on the side. We academics always like to think that people will remember us for our enthralling teaching, breakthrough discoveries, or creative ideas. Maybe we will have the good fortune to have a famous theory, or a celestial body, named after us. But our fate can just as easily be that of Donald Menzel, who was director of the Harvard Observatory in the 1950s and 60s. To save money, he stopped the production and preservation of glass plates for some years, and so now there is a missing section in the historical astronomical record. It is called, with a librarian’s tsk-tsk, the “Menzel Gap.” Ouch.


Briefly Noted

Thomas Padilla has a white paper out on artificial intelligence/machine learning in libraries, from OCLC: “Responsible Operations: Data Science, Machine Learning, and AI in Libraries.” It has many helpful suggestions.

The Council of Library and Information Resources launched its first podcast, Material Memory:

Material Memory explores the effects of our changing environment—from digital technologies to the climate crisis—on our ability to access the record of our shared humanity, and the critical role that libraries, archives, museums, and other public institutions play in keeping cultural memory alive.

Also highly recommended.

Humane Ingenuity 11: Middle-Aged Software

The National Gallery of Denmark has a nicely designed new website that makes all of their digitized artworks openly available, and about two-thirds downloadable under a public domain declaration. The rest is under copyright but can still be downloaded at a generously high resolution and can be used for non-commercial purposes, like this newsletter. Hence: Henning Damgård-Sørensen’s “Maleri VI, 2004,” above. They also have an API and multiple ways to search the collection, including by color. So go on and add a rotating series of paintings to your website that match its palette exactly.
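
As a sketch of how simple that API is to query (the endpoint and field names reflect my recollection of api.smk.dk and should be checked against the current documentation):

```python
# Hedged sketch of querying the National Gallery of Denmark's open API.
# The endpoint and parameter names are my best recollection of api.smk.dk
# and should be verified against the current docs.
import requests

resp = requests.get(
    "https://api.smk.dk/api/v1/art/search/",
    params={"keys": "landscape", "rows": 5},
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item.get("titles", [{}])[0].get("title"), "-", item.get("rights"))
```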


Middle-Aged Software

The novelist John Green recently reviewed the iOS Notes app on his podcast The Anthropocene Reviewed, and what I loved about it was how it focused less on the app itself and more on what he has done with it over the decade he has been using it. He has grown older and so has the app—as his sideburns grayed, Notes lost its leathery skeuomorphism—and Green has built up a stable, useful relationship with the software, mostly opening it to scribble down interesting lines that occur to him, or that are spoken to him, to use later on in his writing.

The review got me thinking about technology over time. We always think of technology as new, but inevitably some of the technology we use ages along with us, becomes old, and we rarely reflect on what that means, and especially what it might entail for how we imagine and develop the next generation of technology.

These newsletters you have been reading have been written for the most part in Markdown in BBEdit, my preferred text editor since the 1990s. We’ve known each other for a while now. In software-years, BBEdit is, like me, middle-aged. I have a half-joking mental model about the age of software, which is roughly human-years divided by two:

  • 0-10 years old: newborn and youthful software—still finding its way in the world and trying out new features, constantly seeking coolness, a bit clueless and sometimes wild
  • 10-20 years old: early adult software—hitting full stride with a surer sense of what it is, but still with occasional bouts of obnoxiousness and anxiety
  • 20-35 years old: middle-aged software—still active if perhaps a little tired, stable and productive and no longer so interested in big changes, generally uncool but doesn’t give a damn what you think anymore
  • 35-50 years old: “golden years” software—etched with the lessons of time and decades of use, contains much encoded wisdom, can project a “these kids today!” vibe even without intending to

Despite the tongue in cheek, this is, I hope, a not unuseful rubric, especially when you think of software that falls into these categories:

  • 0-10 years old: TikTok, Snapchat
  • 10-20 years old: Facebook, Twitter, iOS, WordPress
  • 20-35 years old: Microsoft Office, Photoshop, the web browser
  • 35-50 years old: Emacs, vi, email

Software that makes it to middle age and beyond has a certain hard-won utility, and an audience that has found a way to profitably and consistently make use of it, despite the constant beckoning of younger software. We’ve worked out accommodations with any frailties within the code or interface, and have invested time in the software-human relationship that is rewarded in some way.

It is worth reflecting on what makes software survive and age gracefully, as I tried to do last year in a reassessment of that good ol’ geriatric, email. This should not be an exercise in nostalgia; it should be a careful analysis about what makes older software tick, and makes us in turn stick with it. I suspect that some of the elements we will discover in this review are human-centered principles that it would be good to revive and strengthen.


In 2010, the Chronicle of Higher Education asked two dozen scholars “What will be the defining idea of the coming decade, and why?” I wrote that Facebook would end up having more users than the population of China, and that giant social networks, with their madding crowds, would provoke a reaction:

Just as the global expansion of fast food begat the slow-food movement, the next decade will see a “slow information” counterrevolution focused on restoring individual thought and creativity. 

And here we are a decade later, and we’re still hoping for the same thing. Maybe next decade?


On this week’s What’s New podcast, the topic is a difficult but incredibly important one: how growing inequality is having a troubling effect on the mental health of the disadvantaged and marginalized. Alisa Lincoln lays out the many issues that contribute to poor mental health outcomes, and she suggests some potential interventions that aren’t app-based, but that instead focus on social context and (especially) education. I hope you’ll tune in.

Humane Ingenuity 10: The Nature and Locus of Research

It’s getting to be that time of the semester when extracurricular activities, like writing this newsletter, become rather difficult. My day job as a university administrator has many to-dos that crescendo in November; I will not trouble HIers with most of these, although I’ve also been on a special detail this fall co-chairing an initiative to highlight and expand our efforts to combine technical/data skills with human skills, about which I will write in this space in due time. It’s very much in the spirit of Humane Ingenuity.


“Desakyha,” Artist unknown, Cornell Ragamala Paintings Collection.

From Cornell:

Ragamala is a unique form of Indian painting that flourished in the regional courts of the Indic world from the 16th through the 19th centuries. The term translates as a garland, mala, of ragas, meaning melodic types or tonal frameworks. Ragamala painting combines iconography, musical codes, and poetry to indicate the time of day or season appropriate to the raga and its mood.


Follow-up on GPT-2

Point: HIer Hillary Corbett noted one potentially problematic use for GPT-2 in the academy: In the constant push for more publications (encouraged, I should note, by increasingly quantified assessment of faculty research activity in many countries), researchers could use GPT-2 to generate plausible articles from fairly modest seed text. Hillary took a few lines from a chapter she wrote and got generally acceptable completion text. (Associated thought: the Sokal Hoax as an artisanal pre-GPT-2 scholarly communication deep fake.)
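
Hillary's experiment is easy to reproduce with the public GPT-2 weights. A minimal sketch using Hugging Face's transformers pipeline, with arbitrary sampling settings:

```python
# Minimal seed-text completion with the public GPT-2 weights, via the
# Hugging Face transformers pipeline; sampling settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
seed = "The archival practices of nineteenth-century historians suggest that"
result = generator(seed, max_length=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```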

Counterpoint: There is now a Chrome extension that identifies GPT-2-generated text.

Again, my interest in GPT-2 has less to do with the technology than with the powerful human propensity to respond to, and often uncritically accept, expressions that fit into genres. We are genre-seeking creatures, and GPT-2 highlights a cultural version of our basic urge to fit things into categories (and also, alas, to stereotype).

I could have just as easily focused on music. For instance, earlier this year, Endel became the first AI-based generative music system to sign a deal with a major record label. Like GPT-2, Endel takes music seeds and grows new music that conforms to genre norms. Since music, perhaps more than any other form of human expression, relies on repetition and slight modifications from prior art, musical genres can have an even more powerful attraction to the listener than textual genres to the reader. (Just think about music today: the reggaeton beat has powered a dozen huge hits in the last few years.)

I’ll leave the last word on GPT-2 and its ilk to Janelle Shane (with appreciation from this Victorianist for the conclusion):

One of the disadvantages of having a neural net that can string together a grammatical sentence is that its sentences now can begin to be terrible in a more-human sense, rather than merely incomprehensible. It ventures into the realm of the awful simile, or the mindnumbingly repetitive, and it makes a decent stab at the 19th century style of bombastic wordiness.


The Nature and Locus of Research

One of the big issues in academia right now is the shift of much of the research in areas this newsletter has covered, such as machine learning, to the private sector. There are many reasons for this, but the main ones are that the biggest data sets and the most advanced technology are now at companies like Facebook and Google, and also these companies pay researchers far more than we can in regular faculty or postdoc positions.

This has made it increasingly hard to find and retain faculty to teach the next generation of students in many topics that are in high demand. What I want to focus on here, however, is its troubling effect on the nature of research. Corporations have always had research centers, of course, from which incredible innovations have arisen; just think about Bell Labs or Xerox PARC. Since the Second World War, there has always been a place for someone like Claude Shannon to ride through corporate hallways on a unicycle thinking about information theory, and to lay the groundwork for our modern world.

But these corporate research spaces have become much more mercenary and application-oriented in the last decade. Google’s Director of Research, Peter Norvig, perhaps the archetype of the academic who left academia because (as he once put it) he had to go where the data was, is always sure to highlight that he doesn’t want to replicate Bell Labs’ or Xerox PARC’s slightly clueless abstraction, even if great things eventually emerged from those institutions. He wants Google research to lead to new businesses and more uses of Google’s search engine (even if indirectly).

Which is totally fine. But by drawing researchers fully out of the academy, we lose not only teachers and mentors, but a style of thinking and research that is different in important ways. An example: Last week on the What’s New podcast I interviewed Ennio Mingolla, a scholar of human and computer vision. Ennio is brilliant, and undoubtedly would be a highly valued researcher in, say, an autonomous vehicle startup. Yet he retains academia’s more expansive approach to thinking and research, in a way that is likely to be much more helpful, over time, to understanding vision.

On the podcast, Ennio and I discussed philosophy and art—knowledge from the distant past and from non-digital realms—just as much as the latest computational approaches to “seeing.” We touched on empiricism, Leonardo da Vinci’s discoveries in sketching and painting, and William James—not because we’re fancy academics but because those topics present essential and varied theoretical approaches to the subject of vision. Freed from the right now and the near future, we can explore the ideas of those from the past who had also thought deeply about seeing, and how those concepts very well may present a helpful framing for contemporary work in the field.

Ennio is an expert in figure-ground separation, the human ability to make out an object from the scene behind it. This is a critical survival and social skill (noticing a lion in the tall grass, paying attention to faces in a crowd), and extraordinarily complicated. It’s also directly related to what self-driving cars need (noticing a pedestrian in the crosswalk, paying attention to other objects in the terrain ahead). By considering vision not as a GPU-intensive task involving pixels and frames from a digital camera or LIDAR, but as a complex set of systems and skills networked in the brain, Ennio and his Computational Vision Lab are developing a much richer (and I believe more accurate) understanding of how we see. This may take decades; it has taken decades to understand even some basic visual skills such as how we sense that something is approaching us quickly (which, as Ennio notes, is a process that is nothing like what you think it is, and is both faster and slower than a computer).

Universities also have scaffolding for research that most companies don’t. Institutional review boards, for instance, try to ensure that research doesn’t hurt people or have unintended consequences. IRBs can be annoying friction—ask any academic researcher—but given what has happened in our world with the use of personal data over the last few years, maybe we need those brakes more than ever.

There used to be an imperfect but useful pathway for research to move from the academy to the corporate world through tech transfer. That pathway has been disrupted by the widening gap in technology, data, and salaries, and by the difficulty of finding workable ways to share those resources between corporations and the academy. On the data front, initiatives like Social Science One, which was established to share large data sets between entities like Facebook and academic researchers, are floundering as Facebook and other giant companies hunker down in the face of criticism about privacy and their social effects. Sharing faculty between academia and corporations (in roles like affiliated, non-tenure-track faculty) can be tricky to get right. Facebook, for example, only allows employees to spend 20% of their time at a university in such a role, and you can imagine which side has priority when any important matter arises.

We need to find new models that allow for the permeability of academia, for new kinds of partnerships, while retaining what makes thoughtful, deep academic research so critical over time. From a dean’s perspective this is urgent; from a social and scholarly perspective, I think it hasn’t been addressed nearly enough. It will greatly affect the kinds and styles of research done in the future, and, in the long run, limit the knowledge we produce and value.


“Todi,” artist unknown, Cornell Ragamala Paintings Collection.


The Enchantment of Archaeology Through Computers

HIer Shawn Graham, mentioned in HI8, kindly sent me a full draft of his forthcoming book, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence. I haven’t had a chance to read the whole thing yet, but plan to do so over the winter break. A taste of what Shawn explores in the book:

What is more rational than a computer, reducing all phenomena down to tractable ones and zeros? What is more magical than a computer, that we tie our identities to the particular hardware or software machines we use?…Archaeology, as conventionally practiced, uses computation to effect a distancing from the world; perhaps not intentionally but practically. Its rituals (the plotting of points on a map; the carefully controlled vocabularies to encode the messiness of the world into a database and thence a report, and so on) relieves us of the task of feeling the past, of telling the tales that enable us to envision actual lives lived. The power of the computer relieves us of the burden of having to be human.

An enchanted digital archaeology remembers that when we are using computers, the computer is not a passive tool. It is an active agent in its own right (in the same way that an environment can be seen to be active)…In that emergent dynamic, in that co-creation with a non-human but active agent, we might find the enchantment, the magic of archaeology that is currently lacking in archaeology. 


Citizen DJ

Finally, it was neat to see that the Library of Congress has given Brian Foo a residency for 2020. Brian was behind the very creative Data-Driven DJ project, and he will be building something called “Citizen DJ” at the LC—“an application enabling anyone with a web browser to create hip hop music with public domain audio and video materials from the Library’s collections.”

Humane Ingenuity 9: GPT-2 and You

Carlotta Corpron (1901-1987) was an artist and photographer who used light to create abstract works from which figures would sometimes emerge. (“Strange Creature of Light,” 1948, via the International Center for Photography.)


This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been. For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.

If you are new to GPT-2, the best way to see it in action is on Adam King’s “Talk to Transformer” website, which just implemented the full-sized GPT-2 model, and allows you to prompt it with any text you wish. I put in the first two sentences of Jane Austen’s Sense and Sensibility (in bold) and the computer spat out the rest:

The family of Dashwood had long been settled in Sussex. Their estate was large, and their residence was at Norland Park, in the centre of their property, where, for many generations, they had lived in so respectable a manner as to engage the general good opinion of their surrounding acquaintance. They were also well known for their hospitality and philanthropy, especially for their charitable donations of meat and milk to the poor. The Dashwoods were not, in fact, known to have any money, but had an excellent credit with all classes, and were well looked upon by their neighbors. The Dashwoods were a most benevolent family. Their land was large, and their possessions were in good order, and the house well kept. They had four children, namely, John, Sarah, John and Mary. These four children were all educated, and all well known for their benevolence and philanthropy, so that their name is familiar in…

You get the picture. If a bit dull, it’s still rather impressive, and certainly a major advance over prior text generators of this sort. (Austen would have had a field day picking up this thread and writing about two Johns fighting over their common name and potential spouses.) The web is now filled with examples using GPT-2, including the New Yorker‘s computational autocomplete of paragraphs in its article on predictive text (how meta).
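
If you would like to reproduce this experiment locally rather than through a website, here is a minimal sketch using the Hugging Face transformers library. This is my choice of tooling, one common way to run the released model, and not the code behind Talk to Transformer.

```python
# A minimal sketch of seed-text continuation with GPT-2, via the
# Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

seed = ("The family of Dashwood had long been settled in Sussex. "
        "Their estate was large, and their residence was at Norland Park.")
input_ids = tokenizer.encode(seed, return_tensors="pt")

# Sample a continuation: at each step the model ranks every word in its
# vocabulary by likelihood and draws from the top candidates.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each run produces a different continuation, which is part of the fun.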

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.
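
If you are curious what that process involves, here is a minimal fine-tuning sketch using Max Woolf’s gpt-2-simple package, a popular tool for exactly this kind of play. To be clear, this is an assumption on my part, not necessarily Shawn’s or Robin’s actual setup, and petrie_works.txt is a hypothetical file of Petrie’s collected writings.

```python
# A minimal sketch of fine-tuning GPT-2 on a narrow corpus with the
# gpt-2-simple package. "petrie_works.txt" is a hypothetical plain-text
# file of Flinders Petrie's collected writings.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # fetch the small released model

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="petrie_works.txt",
              model_name="124M",
              steps=1000)              # more steps, stronger Petrie voice

# "Resurrect" the Egyptologist: prompt the tuned model conversationally.
gpt2.generate(sess, prefix="Tell me about your excavations at Naqada.")
```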

OpenAI worried earlier this year that GPT-2 might become a troubling anarchy loosed upon the world, and while we surely would like to avoid that, it’s not what I want to focus on in this issue of the newsletter. (If you are concerned about GPT-2 and devious trickery, please note that other AI researchers are working on countervailing tools to identify “fake” text, by using GPT-2’s strength—its statistical reliance on the common words that follow other words—against it in a nifty jiu-jitsu move.)
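
The core of that jiu-jitsu move is easy to sketch: ask GPT-2 itself how predictable each word of a suspect text is. Machine-generated prose leans heavily on high-probability words; human prose keeps surprising the model. Here is a toy version of that rank test, my illustration of the general idea (assuming the Hugging Face transformers and torch libraries), not the actual detection tools’ code.

```python
# Toy "fake text" detector: rank each token of a text by how strongly
# GPT-2 predicted it. Generated text tends to cluster at low ranks
# (very predictable words); human text mixes in high-rank surprises.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits[0]  # one score per vocab word, per position
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual = ids[0, pos + 1]
        # How many words did the model consider likelier than the real one?
        rank = int((logits[pos] > logits[pos, actual]).sum()) + 1
        ranks.append(rank)
    return ranks

print(token_ranks("The family of Dashwood had long been settled in Sussex."))
```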

I’m actually less interested in whether GPT-2 has achieved some kind of evil genius, and more interested in what is really happening in the interaction between its generated texts and the reader. Before we worry about GPT-2’s level of intelligence, we should remember what occurs during the act of reading, and why we appreciate fiction in the first place. And that has much more to do with the activity in our own minds than the mind of the author, however real or fake.

We bring to bear on any text all of our prior experience and emotions, as well as everything we have read and thought. We complete a text, no matter how coherent it is; we fill in any blanks with what we believe should be there, or through our imagination. We ourselves are a preprocessed, mammoth, unique corpus, a special composite lens that colors what our senses encounter. 

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.


[Carlotta Corpron, “Light Creates Bird Symbols”]


My thanks to the HI subscribers who responded to my question about whether I should include some discussion of our library renovation in this space. The unanimous sentiment was yes, so I’ll drop some bits in here from time to time. (I have also discovered that two members of this list are directors of large university libraries that are beginning renovations; it is helpful to compare notes!)

One complicated topic for us, and undoubtedly others, has been flexible space for focused study, collaboration, and creative production. Because the square footage of libraries is finite (with the obvious exception of Borgesian libraries), you often need to design spaces for multiple uses, especially in a library that (like ours) is open 24/7 and thus goes through different cycles of use as day turns into night and back again.

I have looked at a number of flex spaces in libraries, and I’m not sure that we have solved this problem. Many approaches seem a bit immature, with an overreliance on movable furniture. Some of the more interesting conversations I’ve had with architects recently have focused not on physical elements like furniture but on more ethereal ones like shifting acoustic design and phased lighting. If you know of a space—in a library or anywhere else—that has worked well for different uses, I’d love to see it.


[Carlotta Corpron, “Patterns in a Glass Cube”]