A couple of weeks ago at the Digital Dilemmas Symposium in New York I tried something new: using Twitter to replicate digitally the traditional “author’s query,” where a scholar asks readers of a journal for assistance with a research project. I believe the results of this experiment are instructive about the significant advantages—and some disadvantages—for academia of what has come to be known as crowdsourcing.
For those who didn’t follow this experiment live via Twitter, you should first read the two initial posts in this series. The experiment was fairly simple: I prepared followers of my blog and my Twitter feed (as of this writing I have roughly the same number of blog subscribers and Twitter followers, about 1,600 on each service) by noting that I would reveal a historical puzzle at a particular time. At the beginning of my talk in New York, my blog auto-posted the scan of an object found in a Victorian archaeological dig, which I simultaneously tweeted.
I asked those following me online to work together to figure out what the object was. Participants in the experiment could post live comments on Twitter, and others could follow along by searching for the #digdil09 hashtag. (A hashtag is a hopefully unique string of characters that enables a search of Twitter to reveal all comments at a specific conference or on a particular subject.) I encouraged everyone to talk to each other and leverage each other’s knowledge. In addition, I set up what in the age of the print journal would have been a ridiculous deadline: only one hour for the crowd to solve the mystery. For a bit of theater (“stunt lecturing”?) I flashed the Twitter stream behind me from time to time during my talk.
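For readers unfamiliar with how a hashtag search works in practice, the idea can be sketched in a few lines: a hashtag is just a marker token, and "searching" means filtering a stream of messages for it. The tweets and function below are invented for illustration, not real participants' messages.

```python
def filter_by_hashtag(tweets, tag):
    """Return the tweets containing the given hashtag (case-insensitive)."""
    tag = tag.lower()
    return [t for t in tweets if tag in t.lower().split()]

# Hypothetical example stream mixing tagged and untagged messages.
tweets = [
    "Looks like carved shell to me #digdil09",
    "Lunch in NYC",
    "#digdil09 the spider motif suggests shell-gorget iconography",
]

print(filter_by_hashtag(tweets, "#digdil09"))
# Only the first and third messages carry the tag.
```

The real Twitter search is of course far more sophisticated, but the uniqueness requirement follows directly from this model: if another conference had used #digdil09, its messages would land in the same filtered stream.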
The crowd needed far less than an hour: nine minutes, to be exact, for a preliminary answer, and 29 minutes for a fairly rich description of the object to emerge from the collective responses of roughly a hundred participants. Solution: the object was an ornamental gorget from the Cahokia tribe.
What happened along the way was as interesting as the result (which I have to admit was rather satisfying given the possibility of a live crowd in NYC laughing at me for using Twitter). First, Twitter was remarkably effective in multiplying my voice. Indeed, in the first five minutes about a dozen others on Twitter retweeted (rebroadcast) my mystery to their followers. This “Twitter multiplier effect” meant that within minutes many thousands of people got word of my experiment; over 1,900 actually viewed the object on my blog. And I’m lucky enough to have a particularly knowledgeable crowd following me on Twitter, as you can see from the word cloud of my followers’ bios.
Once the race was on, solvers took two distinct paths toward a solution. The first path was the one I was trying to encourage: some quick thoughts about facets of the object, followed by scholarly debate. I mentioned that the object was made out of shell but was found far away from water in the Midwest (of the U.S.), which led to some interesting speculation about origins and movement of Native Americans, Europeans, and Africans. Others focused on the iconography of the spider; what could it symbolize and which cultures used it? These were decent lines of inquiry that one could imagine in the back pages of a Victorian journal.
Twitter is mocked for its almost comical terseness, but even the most hardened Twitter skeptic must admit that tweets such as these are far from useless as research assistance. And the power of this crowdsourcing becomes even more evident when you look at the full discussion trail, as researchers pick up information from each other to take their speculations a step further.
The experiment was not, however, an unalloyed success, partly due to a mistake I made in setting it up. In hindsight, I gave away too much in my original post, mentioning St. Clair and the fact that the piece was made out of shell. Alas, Googling keywords such as these (as well as the obvious "spider") immediately gets one hot on the trail of the solution. It's clear from the stream of tweets that a good portion of the solving audience took the "Google knows all" approach rather than the "scholarly discussion" approach.
I suppose even this aspect of the experiment is not uninteresting; I’ll leave it to others in the comments below to discuss the merits of the “Google” approach, as well as the merits (and demerits) of this experiment in general.
[Afterword: As many have pointed out on Twitter, the experiment would have been better had I not posted an object that could be found online. To be honest, I thought I had found an unusual object with no scanned version; it shows how much has been digitized, and how good search is even on a small amount of metadata.]