Humane Ingenuity 12: Automation and Agency

In this issue of HI: dispatches from the frontiers I traversed at the fall meeting of the Coalition for Networked Information.


Automation and Agency

Ben Shneiderman, one of the pioneers in human-computer interaction and user interfaces, gave a fascinating, thought-provoking, and very HI-ish talk on human-centered artificial intelligence. I will likely write something much longer on his presentation, but for now I want to highlight a point that harmonizes with the note on which I started this newsletter: seeking ways to turn the volume up to 11 on both the human and tech amps. 

Ben asked the audience to reconsider the common notion that there’s a one-dimensional tug of war between human control and computer automation. For instance, we see the evolution of cars as being about the gradual transfer of control from humans to computers along a linear spectrum, which will end in fully autonomous vehicles.

This is wrong, Ben explained, and it puts us in the unhelpful mindset of complete opposition between human and artificial intelligence. Instead, we should create tools in the coming decades that involve both high levels of automation and high levels of human control. We can, in fact, imagine a two-dimensional space for technology, where one axis is the level of human control and the other is the level of computer automation:

Ben’s thrust here pushes away from technologies such as self-driving cars without steering wheels, humanoid robots, or algorithms that replace humans and our vocations. Instead, by looking at the upper right corner, he seeks systems that greatly expand our creative potential while maintaining our full agency: as Ben put it, let’s “amplify, augment, enhance, and empower people” through artificial intelligence and automation.
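(A toy sketch of my own, not anything Ben showed: if you score each system separately on the two axes rather than on a single human-versus-computer spectrum, "full autonomy" and the upper-right goal stop being the same thing. The system names and numbers below are invented for illustration.)

```python
# Toy illustration (mine, not Ben Shneiderman's): score systems on two
# independent axes instead of a single human-vs-computer spectrum.
# Scores are invented, 0-10, given as (automation, human_control).
systems = {
    "manual film camera":              (1, 10),
    "fully autonomous car (no wheel)": (10, 1),
    "night-mode camera app":           (9, 9),   # ML assists; you compose and click
    "machine-assisted translation":    (8, 9),   # model suggests; translator decides
}

def quadrant(automation, control, threshold=5):
    """Name the region of the two-dimensional design space."""
    if automation >= threshold and control >= threshold:
        return "high automation + high control (the upper-right goal)"
    if automation >= threshold:
        return "high automation, low control (full autonomy)"
    if control >= threshold:
        return "low automation, high control (purely manual tools)"
    return "low automation, low control"

for name, (a, c) in systems.items():
    print(f"{name:34s} -> {quadrant(a, c)}")
```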

This newsletter has been cataloging examples that fit into that theory, and Ben had many others from a variety of domains and disciplines. An obvious example in widespread use today is the new software-assisted digital camera apps that use machine learning to improve nighttime photos—but allow you to do the composition and choose the moment to click the button.

Again, more on this in future HIs. For now, if you would like to see additional good examples of high automation + high control, from a machine-assisted interface for professional translators to Mercedes-Benz’s new parallel parking system, Ben referenced Jeffrey Heer’s recent article in the Proceedings of the National Academy of Sciences, “Agency plus automation: Designing artificial intelligence into interactive systems.”


Welcome to the Dystopia

From the Black Mirror universe of Inhumane Ingenuity, some seeds for great dystopian science fiction (if they weren’t already true and here):

  • Jason Griffey highlighted that there are already three web apps that use AI-based text generators to create essays for students from their thesis statements, and other AI-based services that suggest relevant articles for references and footnotes. As several people simultaneously chimed in, throw in an AI-based grading tool and we can remove students and teachers completely from the educational system.
  • Cliff Lynch revealed that there are agencies and institutions archiving encrypted internet streams and files right now, so that when quantum computing unlocks today’s encryption, they can go back and decrypt all of the older traffic and files. So what you encrypt today may be safe from prying eyes only temporarily.
  • Cliff also lamented that we are at risk of being unable to preserve an entire generation of cultural production because of the shift to streaming services without physical versions—libraries can’t save Netflix films and shows, for instance, as they are not available on media like DVDs.
  • And the final item in Cliff’s trio of worries: the digitization of archives and special collections, once seen as an unmitigated good, may lead to facial recognition advances and uses (such as surveillance) that we may regret.
  • Kate Eichhorn, author of the book The End of Forgetting: Growing Up with Social Media, noted that her daughter signed up for a LinkedIn account at age 13 (!), because she had heard that LinkedIn was so well search-engine optimized that it would appear first in the search results for her name. She didn’t want her other social media accounts, or tags of her from her friends’ accounts, coloring the Google results seen by future admissions officers or employers. In her CNI keynote, Kate said that as a media scholar she didn’t think it was helpful to have a moral panic over how social media is shaping the experience of today’s youth. But in listening to how kids feel anxious and constrained by an omnipresent digital superego, I wondered if, for once, a moral panic over new media and kids is justified: social media does seem qualitatively and quantitatively different from prior subjects of panics over teen consumption, such as comic books, heavy metal, or video games.

Some Happier Case Studies

Starting in 2010, Ákos Szepessy and Miklós Tamási began collecting old, discarded photographs they found on the streets of Budapest. Then they invited the public to submit personal and family photos. Fortepan (named after a twentieth-century Hungarian brand of film) now hosts over 100,000 of these photos from the last century. I always loved seeing this kind of longitudinal social documentation when I was at the Digital Public Library of America.

The spirit spread: The University of Northern Iowa took inspiration from Fortepan and created a similar site for Iowans, with thousands of personal photos stretching back to the American Civil War. Then they did something wonderful to return the digitized photographs to the physical world: UNI took some of the photos and plastered them on the very buildings in which the shots were taken many decades earlier.

A century ago, astronomical photographs were taken on large glass plates, and rare events in the night sky, such as novae, may have been captured in ways that would help astronomers today. University of Chicago librarians are now digitizing and extracting astronomical data from hundreds of thousands of these glass plates, and then using computational methods to align them precisely with contemporary scans of the sky. (Yes, there’s an API that will take your photo of the stars and provide the exact celestial coordinates.)
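(The item above doesn’t name the service, and whether the Chicago project relies on it is my assumption, but the best-known public plate-solving API of exactly this kind is Astrometry.net. Here is a minimal sketch using its astroquery client; the API key and file name are placeholders, and you would need a free key from nova.astrometry.net.)

```python
# Hedged sketch: upload a photo of the night sky to the public Astrometry.net
# plate-solving service via astroquery (pip install astroquery astropy).
from astroquery.astrometry_net import AstrometryNet
from astropy.wcs import WCS

ast = AstrometryNet()
ast.api_key = "YOUR_NOVA_ASTROMETRY_NET_KEY"   # placeholder; free key from nova.astrometry.net

# The service matches the star field in the image against its index and
# returns a FITS header containing a World Coordinate System (WCS) solution.
wcs_header = ast.solve_from_image("star_photo.jpg")   # placeholder file name

if wcs_header:
    wcs = WCS(wcs_header)
    # Convert a pixel position in the photo into celestial coordinates (RA/Dec).
    coord = wcs.pixel_to_world(512, 512)
    print(f"Pixel (512, 512) -> RA {coord.ra.deg:.4f} deg, Dec {coord.dec.deg:.4f} deg")
else:
    print("Plate solving failed; try an image with more visible stars.")
```

Roughly the same idea, solving for a coordinate system and registering images against known star positions, is what allows digitized historical plates to be compared precisely with modern scans of the sky.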

This is all terrific, but there is a cautionary (and somewhat amusing) tale lurking on the side. We academics always like to think that people will remember us for our enthralling teaching, breakthrough discoveries, or creative ideas. Maybe we will have the good fortune to have a famous theory, or a celestial body, named after us. But our fate can just as easily be that of Donald Menzel, who was director of the Harvard Observatory in the 1950s and 60s. To save money, he stopped the production and preservation of glass plates for some years, and so now there is a missing section in the historical astronomical record. It is called, with a librarian’s tsk-tsk, the “Menzel Gap.” Ouch.


Briefly Noted

Thomas Padilla has a white paper out on artificial intelligence/machine learning in libraries, from OCLC: “Responsible Operations: Data Science, Machine Learning, and AI in Libraries.” It has many helpful suggestions.

The Council on Library and Information Resources launched its first podcast, Material Memory:

Material Memory explores the effects of our changing environment—from digital technologies to the climate crisis—on our ability to access the record of our shared humanity, and the critical role that libraries, archives, museums, and other public institutions play in keeping cultural memory alive.

Also highly recommended.