Humane Ingenuity 36: 15% Faster

In a wonderful new article, film and television scholar Jason Mittell provides an extremely creative, occasionally bizarre, frequently hilarious, and ultimately rather helpful “inventory of deformative practices” to uncover hidden layers of meaning in media. These practices use the malleability of digital formats to convert traditional media, like films, into new forms that provide insight into their art.

Or put less academically: What can we learn about staid video culture from TikTok and GIFs, or the stranger, more elastic memes enabled by contemporary video editing software?

Mittell chose a perfect film to run transformative digital experiments on: the canonical musical Singin’ in the Rain. In one experiment, for instance, he used software to isolate Gene Kelly’s hands and feet; by masking the rest of his dancing body and the set in black, it shows Kelly’s talent and energy literally in a new light:

(Jason Mittell, “Singin’ in the Rain” with only Gene Kelly’s hands and feet)

In three minutes, you can see how Kelly explores seemingly every permutation of hands and feet in the four quadrants of the frame, often in furious succession. (Beyond film criticism, I could imagine this isolation technique being used in dance instruction.)

Borrowing from a popular meme, Mittell also created a version in which the film speeds up by 15% every time someone sings the word “dance,” which gets enjoyably wild around the two-minute mark of this clip. In its absurdity, it also reveals the deeply manic nature of the film.

(Jason Mittell, SINGIN’ IN THE RAIN’s “Broadway Melody,” but faster every time someone sings “Dance” (constant audio pitch))
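The compounding arithmetic behind this deformation is simple enough to sketch. The function names and segment lengths below are my own illustrative inventions, not Mittell’s code:

```python
def speedup_after(n_occurrences, factor=1.15):
    """Cumulative playback speed after the trigger word ("dance") has
    been sung n_occurrences times; each occurrence multiplies the
    current speed by `factor`."""
    return factor ** n_occurrences

def deformed_duration(segment_seconds, factor=1.15):
    """Running time of the deformed clip. segment_seconds[i] is the
    original length of the stretch of footage following the i-th
    occurrence of the word, which therefore plays at factor**i speed."""
    return sum(s / factor ** i for i, s in enumerate(segment_seconds))
```

After only five repetitions the footage already runs at roughly double speed (1.15 ** 5 ≈ 2.01), which is why the clip turns wild so quickly.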

That same accentuated mania, mixed with a dose of creepiness — what Mittell identifies as being trapped in some kind of dance purgatory with forced smiles — is highlighted in GIF loops extracted from the movie:

Similar to what Cath Sleeman did with a large, chronological photo gallery of household items (see HI33), a “bar code” version of the film distills pixel frequency from the beginning of the movie to the end (left to right), showing the scenes of Singin’ as vertical bands of each segment’s dominant color:

Note how this reveals the waves, or crescendos, of activity and color that happen roughly every 15-20 minutes in the movie, each followed by calmer, more muted rest moments (in both color and music). A nice summary of the film’s pacing.
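A minimal sketch of this kind of distillation, assuming each band is simply the mean color of its segment (one common approximation; Mittell’s exact pixel-frequency method may differ, and the function name here is my own):

```python
import numpy as np

def movie_barcode(frames, n_bands):
    """Split a chronological frame sequence into n_bands segments and
    reduce each segment to its average RGB color: one vertical band
    per segment, left to right."""
    frames = np.asarray(frames, dtype=float)    # shape (T, H, W, 3)
    segments = np.array_split(frames, n_bands)  # chronological chunks
    return np.stack([seg.mean(axis=(0, 1, 2)) for seg in segments])
```

Rendering each returned color as a tall, thin rectangle, in order, produces the barcode image.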

Other experiments are more dreamlike and resist easy interpretation, such as this condensation of the film to two minutes using frame sampling, which works as standalone art not so far from an Italian Futurist painting:

(Jason Mittell, SINGIN’ IN THE RAIN summed in two minutes)
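Frame sampling itself can be as simple as a uniform stride through the footage. The article does not specify the exact scheme, so this sketch assumes uniform sampling, and the function name is my own:

```python
import numpy as np

def sample_frames(frames, target_count):
    """Uniformly sample target_count frames from the full sequence,
    preserving chronological order (a simple frame-sampling
    condensation of a film)."""
    idx = np.linspace(0, len(frames) - 1, num=target_count).astype(int)
    return [frames[i] for i in idx]
```

Condensing a ~103-minute film to two minutes this way keeps roughly one frame in fifty, which is what gives the result its flickering, Futurist quality.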

Mittell wonders aloud about his experiments:

Are they acts of scholarship? Do they contain or provoke arguments? Or are they creative works, more akin to experimental films?

Yes, yes, and yes. Or more simply, these techniques provide new ways to notice elements of film through different foci and new perspectives. A GIF, for example, is a film loop that lets us concentrate on framing and motion better than other formats do, because it is pure, unadulterated repetition and circular connection. You can see the amazing camerawork (probably on a moving crane) in the GIF at the top of this newsletter. Similarly, novel mixtures of a film’s pixels through software allow us to see overall patterns more accurately, or hidden, meaningful details of the director’s staging. The use of advanced technology, in short, can open up broad interpretive avenues.

Here’s the full article, “Deformin’ in the Rain: How (and Why) to Break a Classic Film,” which is currently in “preview” mode in Digital Humanities Quarterly.


Also from this latest issue of DHQ: “Comparative K-Pop Choreography Analysis through Deep-Learning Pose Estimation across a Large Video Corpus,” which is self-recommending.

Some time ago there were efforts to create markup languages for domains of human expression such as dance. It seems that these machine learning and computer vision techniques have made that earlier work somewhat obsolete.


Next week at the Coalition for Networked Information’s spring meeting, Barbara Rockenbach (Yale’s University Librarian) and I will be commenting on the promise and challenges of a new platform called Sourcery, which aims to provide an efficient, decentralized way to digitize archives and special collections. Tom Scheinfeldt and Greg Colati of the University of Connecticut are leading and representing the project. (Full disclosure: Tom is an old friend and collaborator, and we frequently share ideas, so I will admit up front to bias in favor of Sourcery.)

The context: In HI13 and HI14, I discussed a recent survey of historians that showed how quickly historical research has changed because of the smartphone and its camera. Researchers who normally would have slowly paged through an archive have become high-speed human scanning machines, taking as many photos of documents in the archive as possible, and then analyzing them when they get home. For those who cannot travel to an archive, there is also a burgeoning, informal market for graduate students and others to do this phone snapping for them. At the same time, archives and special collections are engaged in a more formal, slower, and higher-quality process of digitization.

Into this new world of archival practice comes Sourcery, which was originally intended to match historians with those who could make scans for them, since not every researcher knows someone across the country or the world who could do this work, and few researchers have extensive travel budgets. But merely acknowledging this existing practice and proposing a platform to facilitate it created some tense (but helpful) discussions last fall, in a series of workshops that our library (Northeastern) and UConn jointly held. This tension is understandable: if a (too simplistic) elevator pitch for Sourcery was “Uber for archives,” well, there is not a lot of love for Uber among those who might use or support Sourcery.

But that’s the short version. I think the longer version has to account for the viewpoints of all of the actors in this drama: the researchers (those with resources and those without), the archivists, and those who might be paid to make reference scans of the materials (which could very well be the archive or archivists themselves!). It also has to account for future researchers who might want to access a scan, as well as the curious general public, who might never go to an archive but have some interest in its contents. That is a much more complicated story, with many tradeoffs and tough choices about whom we choose to listen to or privilege. There are also tough choices about labor and resource allocation.

Tom, Greg, and the Sourcery team are, of course, extremely sensitive to all of this, and have been flexible and thoughtful about implementation and uses. There has been some good collaborative work in our libraries and archives about the direction that Sourcery should take, and how it should balance the needs and concerns of all of those actors.

To be continued in a subsequent edition of Humane Ingenuity.


On the latest What’s New podcast, I talk to Jim McGrath, one of the curators of A Journal of the Plague Year, which has been collecting stories and digital artifacts over the past twelve months. It’s a wide-ranging conversation that delves into the creation of prior online archives, including the September 11 Digital Archive (which I was involved in) and Our Marathon, which documented the Boston Marathon bombing.

Mostly, it’s about what we choose to save, and from whom, and what we’ve learned so far from those images and stories of the pandemic. Tune in.