Author: Dan Cohen

A Companion to Digital Humanities

The entirety of this major work (640 pages, 37 chapters), edited by Susan Schreibman, Ray Siemens, and John Unsworth, is now available online. Kudos to the editors and to Blackwell Publishing for putting it on the web for free.

Creating a Blog from Scratch, Part 8: Full Feeds vs. Partial Feeds

One seemingly minor aspect of blogs I failed to consider carefully when I programmed this site was the composition of its feed. (Frankly, I was more concerned with the merely technical question of how to write code that spits out a valid RSS or Atom feed.) Looking at a lot of blogs and their feeds, I just assumed that the standard way of doing it was to put a small part of the full post in the feed—e.g., the first 50 words or the first paragraph—and then let the reader click through to the full post on your site. I noticed that some bloggers put their entire blog in their feed, but as a new blogger—one who had just spent a lot of time redesigning his old website to accommodate a blog—I couldn’t figure out why one would want to do that since it rendered your site irrelevant. It may seem minor, but a year later I’ve realized that there is, in part, a philosophical difference between a full and partial feed. Choosing which type of feed you are going to use means making a choice about the nature of your blog—and, surprisingly, the nature of your ego too. Subscribers to this blog’s feed have probably noticed that as of my last post I’ve switched from a partial feed to a full feed, so you already know the outcome of the debate I’ve had in my head about this distinction. But let me explain my reasoning and the advantages and disadvantages of each type of feed.
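As a concrete sketch of the partial-feed approach described above, here is a minimal Python function that truncates a post to its first 50 words for the feed summary (illustrative only, not the actual code behind this blog):

```python
import re

def partial_feed_summary(post_html, word_limit=50):
    """Strip markup and keep the first word_limit words for a partial feed item."""
    text = re.sub(r"<[^>]+>", " ", post_html)  # crude tag stripping
    words = text.split()
    if len(words) <= word_limit:
        return " ".join(words)
    return " ".join(words[:word_limit]) + " [...]"
```

A full feed would instead put the entire post body into each feed item (in RSS 2.0, typically via the `content:encoded` element rather than a truncated `description`).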

Putting the entire content of your blog into your feed has many practical advantages. Most obviously, it saves your readers the extra step of clicking on a link in their feed reader to view your full post. They can read your blog offline as well as online, and more easily access it on a non-computer device like a cell phone. Machine audiences can also take advantage of the full feed, searching it for keywords desired by other machines or people. For instance, most blog search engines allow you to set up feeds for posts from any blogger that contain certain words or phrases.

More important, providing a full feed conforms better with a philosophy I’ve tried to promote in this space, one of open access and the sharing of knowledge. A full feed allows for the easy redistribution of your writing and the combination of your posts with others on similar topics from other bloggers. A full feed is closer to “open source” than a feed that is tied to a particular site. For this reason, until the advent of in-feed advertising, most professional bloggers had partial feeds so readers would have to view advertising next to the full text of a post.

Even from the perspective of a non-commercial blogger—or more precisely the perspective of that blogger’s ego—full feeds can be slightly problematic. A liberated, full feed is less identifiably from you. As literary theorists know well, reading environments have a significant impact on the reception of a text. A full feed means that most of your blog’s audience will be reading it without the visual context of your site (its branding, in ad-speak), instead looking at the text in the homogenized reading environment of a feed reader. I’ve just switched from NetNewsWire to Google Reader to browse other blogs, and I especially like the way that Google’s feed reader provides a seamless stream of blog posts, one after the other, on a scrolling web page. I’m able to scan the many blogs I read quickly and easily. That reading style and context, however, makes me much less aware of specific authors. It makes the academic blogosphere seem like a stream of posts by a collective consciousness. Perhaps that’s fine from an information consumption standpoint, but it’s not so wonderful if you believe that individual voices and perspectives matter a great deal. Of course, some writers cut through the clutter and make me aware of their distinctive style and thoughts, but most don’t.

At the Center for History and New Media, we’ve been thinking a lot about the blog as a medium for academic conversation and publication—and even promotion and tenure—and the homogenized feed reader environment is a bit unsettling. Yes, it can be called academic narcissism, but maintaining authorial voice and also being able to measure the influence of individual voices is important to the future of academic blogging.

I’ve already mentioned in this space that I would like to submit this blog as part of my tenure package, for my own good, of course, but also to make a statement that blogs can and should be a part of the tenure review process and academic publication in general. But tenure committees, which generally focus on peer-reviewed writing, will need to see some proof of a blog’s use and impact. Right now the best I can do is to provide some basic stats about the readership of this blog, such as subscriptions to the feed.

But with a full feed, you can slowly lose track of your audience. Providing your entire posts in the feed allows anyone to resyndicate it, aggregate it, mash it up, or simply copy it. I must admit, I am a little leery of this possibility. To be sure, there are great uses for aggregation and resyndication. This blog is resyndicated on a site dedicated to the future of the academic cyberinfrastructure, and I’m honored that someone thought to include this modest blog among so many terrific blogs charting the frontiers of libraries, technology, and research. On the other hand, even before I started this blog I had experiences where content from my site appeared somewhere else for less virtuous reasons. I don’t have time to tell the full story here, but in 2005 an unscrupulous web developer used text from my website and a small trick called a “302 redirect” to boost the Google rankings of one of his clients. It was more amusing than infuriating—for a while a dentist in Arkansas had my bio instead of his. More seriously, millions of spam blogs scrape content from legitimate blogs, a process made much easier if you provide a full feed. And there are dozens of feed aggregators that will create a website from other people’s content without their permission. Regardless of the purpose, above board or below, I have no way of knowing about readers or subscribers to my blog when it appears in these other contexts.
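For readers who haven’t met the mechanism: a “302 Found” response simply tells a client (or a search crawler) that the content temporarily lives at another URL, and the trick abused how Google then attributed the destination page’s content. A minimal sketch of such a response (illustrative only, not the developer’s actual setup):

```python
def redirect_response(target_url):
    """Build a bare-bones HTTP/1.1 302 response pointing at target_url."""
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {target_url}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )
```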

But these concerns do not outweigh the spirit and practical advantages of a full feed. So enjoy the new feed—unless you’re that Arkansas dentist.

Part 9: The Conclusion

Creating a Blog from Scratch, Part 7: Tags, What Are They Good For?

Evidently quite a few things. In the past few years, tags have been attached to virtually everything, from web links to photos to bars. The University of Pennsylvania has recently introduced a way for those on campus to tag items in their online catalog, Franklin. With the arrival of the Zotero server this year, it will be possible for the community of Zotero users to collaboratively tag almost any object of research, from books to sculptures to letters. For their promoters, tags are a low-cost, democratic advance over traditional systems of cataloging. Detractors disparage tags as lacking the rigor of those tried-and-true methods. As I started to think about the composition of this blog, all I wanted to know was, why do so many blogs have tags all over them and what function or functions do they serve? Do I need them? What are they good for?

I have to admit that when I started this blog I had a visceral dislike of tags, probably because I was approaching them from the perspective of an academic who liked the precision and professionalism of the card catalog and encyclopedia. Tags seemed fatally flawed as putative successors to Library of Congress subject headings or the indexes in the back of books. I still believe the much-ballyhooed “tag clouds,” or sets of tags of various sizes arranged in a pattern to show the contents of a blog or book or site, are poor substitutes for a good index of a work—not only because indexes are usually done by professionals who know what to highlight and how to summarize those topics, but also because indexes tell little stories through their levels, modifiers, and page numbers. For instance, here’s a section of the index the talented Jim O’Brien did for my book Equations from God:

Euclid, 165; in mathematics education, 147, 148, 214n185; Elements by, 21, 106, 138, 179, 180, 214n185; long-lasting influence of, 21, 58, 79, 147, 164, 174; waning influence of, in late Victorian era, 138, 148, 164, 178-179, 180 (see also non-Euclidean geometry)

At a glance you can tell the story line about Euclid—the ancient Greek mathematician’s incredibly long relevance (well into the modern era), and his eventual fall from grace in the nineteenth century in the face of a new kind of geometry. Some have proposed adding the hierarchical levels and other index-like features to tags to approach this level of usefulness, but that misses the point of tagging: it works because it’s done in a simple, generally offhand way. Add a lot of thought and hurdles to the process, and you’ll kill tagging. Tagging is a classic case of the “good enough” besting the “perfect” in new media.

Despite my hesitancy, I figured that there must be some reason to use tags on this blog. So I included them in the database but chose, due to my initial aversion, not to show them all over my site like many blogs do. They would just sit in the background and in the RSS feed. It turned out that was a very good compromise as I began to appreciate that tags are good at some functions that traditional taxonomies don’t address.

Much of the antagonism between the promoters and detractors of tags seems to arise from the sense—I believe, the incorrect sense—that they are competitors for the same market. But when you look at tags in actual use, it’s clear that they serve a number of functions that are distinct from the traditional cataloging functions and that make them poor replacements for high-quality categorization.

For example, look at the variety of tags on a highly used folksonomic site like del.icio.us, the granddaddy of social bookmarking. To be sure, there are some fine categorizations of websites. But del.icio.us also harbors a large number of tags with other aims. Coexisting with tags that might be at home in a Library of Congress subject heading (e.g., “history”) are tags like “readlater” (busy people marking a site as worth going back to when they get the chance), “hist301” (a tag used by students in a particular class for a particular semester), “natn” (used by listeners of the podcast “Net at Nite” to submit websites to the hosts for consideration), and of course every possible variation of “cool” (to signify a site’s…coolness).

Awareness of these other kinds of tags made me realize that what distinguishes tags from traditional forms of categorization, aside from the obvious amateur/democratic vs. professional distinction, is that while both are forms of description, tags often have specific audiences and time frames in mind, while traditional categorizations (such as Library of Congress subject headings) have only a vague general audience in mind and try to be as timeless as possible.

This distinction is particularly true when you realize that tags are strongly interwoven with feeds (RSS). Since people can subscribe to the feed of a tag, tagging a blog post in effect places it into a live, running stream of alerts to an awaiting audience. Want to alert John Musser, who maintains the list of APIs I have frequently referred to in this space, about a new API? Just tag a blog post “API” or “APIs” and I suspect John will hear about it very soon, as will a very large audience of those interested in knitting together information on the web.
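The mechanics behind such tag feeds are simple: the feed for a tag is just the stream of posts carrying that tag. A minimal sketch in Python (the post data and function names are invented for illustration):

```python
posts = [
    {"title": "A new mapping API", "tags": ["API", "maps"]},
    {"title": "One Year Later", "tags": ["meta", "blogging"]},
]

def posts_for_tag(posts, tag):
    """Return the posts carrying tag (case-insensitive), preserving order."""
    wanted = tag.lower()
    return [p for p in posts if wanted in (t.lower() for t in p["tags"])]
```

A feed generator would then render the matching posts as RSS items, so anyone subscribed to the “API” tag’s feed sees the new post almost immediately.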

Thus tags have a great utility on the “live” web, as the blog search engine Technorati calls it, as well as for personal uses of an individual or microaudiences like a college class or even for inane commentary (“awesome”). Yet I still feel that as an entrée into a blog, as the equivalent of scanning a table of contents or the index of a book, they are fairly poor. I had planned to expose my internal tags of posts to the audience of this blog in some “traditional” blog way—at the bottom of each post, down the left sidebar, in a tag cloud—but it didn’t seem helpful. If someone wants to find all of my posts on copyright, they can search for them in the upper right search box. And the tag clouds I’ve tried all seem to misrepresent the overall thrust of this blog since (like everyone else using tags) I haven’t put a lot of thought into the tags.

My hunch early on was that tags are best heard from but not seen, and I think I was mostly right about that.

Next up in the series: I make my first change to the blog, from a partial feed to a full feed, and explain the advantages and disadvantages of both—and why I’ve decided to switch.

Part 8: Full Feeds vs. Partial Feeds

Creating a Blog from Scratch, Part 6: One Year Later

Well, it’s been over a year since I started this blog with a mix of trepidation, ambivalence, and faint praise for the genre—not exactly promising stuff—and so it’s with a mixture of relief and a smidgen of smug self-satisfaction that I’m writing this post. I’m extremely glad that I started this blog last fall and have kept it going. (Evidently the half-life of blogs is about three months, so an active year-old blog is, I suppose, some kind of accomplishment in our attention-deficit age.) I thought it would be a good idea (and several correspondents have prodded me in this direction) to return to my series of posts about starting this blog, “Creating a Blog from Scratch.” (For latecomers, this blog is not powered by Blogger, TypePad, or WordPress, but rather by my own feeble concoction of programming and design.) Over the next few posts I’ll be revisiting some of the decisions I made, highlighting some good things that have happened and some regrets. And at the end of the series I’ll be introducing some adjustments to my blog that I hope will make it better. But first, in something of a sequel to my call to my colleagues to join me in this endeavor, “Professors, Start Your Blogs,” some of the triumphs and tribulations I’ve encountered over the last year.

As the five-part series on creating this blog detailed, I took the masochistic step of writing my own blog software (that’s probably a little too generous; it’s really just a set of simple PHP scripts with a MySQL database) because I wanted to learn about how blogs were put together and see if I agreed with all of the assumptions that went into the genre. That learning experience was helpful (and judging by the email I still get about the series, others have found it helpful), but I think I have paid a price in some ways. I will readily admit I’m jealous of other bloggers with their fancy professional blogging software with all of the bells and whistles. Worse, much of the blogosphere is driven by the big mainstream software packages like Blogger, TypePad, and WordPress; having your own blog software means you can’t take advantage of cutting-edge features, like new forms of searching or linking between blogs. But I’m also able to tweak the blog format more readily because I wrote every line of the code that powers this blog.

As I wrote in “Welcome to My Blog,” and as regular readers of this blog know well, I’m not a frequent poster. Sometimes I lament this fact when I see blogs I respect maintain a frantic pace. I’ve written a little over 60 posts (barely better than one per week, although with the Zotero crunch this fall the delays between posts have grown). Many times I’ve felt I had something to post to the blog but just didn’t get around to writing it up. I’m sure other bloggers know that feeling of missed opportunity, which is of course a little silly considering that we’re doing this for free, in our spare time, in most cases without a gun to our heads. But you do begin to feel a responsibility to your audience, and there’s no one to pawn that responsibility off on—you’re simultaneously the head writer, editor, and publisher.

On the other hand, I just did a quick database query and was astonished to discover I’ve written almost 40,000 words in this space (about 160 pages, double-spaced) in the last twelve months. Most posts were around 500-1000 words, with the longest post (Professors, Start Your Blogs) at close to 2000 words. Had you told me that I would write the equivalent of half a book in this space last fall, a) I wouldn’t have believed it, and b) I probably wouldn’t have started this blog.

One of the reasons bloggers feel pressure to post, as I’ve discovered over the last year, is that it’s fairly simple to quantify your audience, often in excruciating detail. As of this writing, this blog is ranked 34,181 out of 55 million blogs tracked by Technorati. (This sounds pretty good—the top 1/100th of a percent of all blogs!—until you realize that there are millions of abandoned and spam blogs, and that like most Internet statistics, the rankings are effectively logarithmic rather than linear. That is, the blog that is ranked 34th is probably a thousand times more prominent than mine; on the other hand, this blog is approximately a thousand times more prominent than the poor blogger at 34,000,000.) Because of that kind of quantification, temptations abound for courting popularity in a way that goes against your (or at least my) blog’s mission. I’ve undoubtedly done some posts that were a little unnecessary and gratuitously attention-seeking. For instance, the most-read post over the last year covered the fingers that have crept into Google’s book scanning project, which of course in its silliness got a lot of play on the popular social news site Digg.com and led to thousands of visitors on the day I posted it and an instant tripling of subscribers to this blog’s feed. But I’m proud to say that my subsequent more serious posts immediately alienated the segment of Digg who are overly fond of exclamation points, and my numbers quickly returned to a more modest—but I hope better targeted—audience.

Surely the happiest and most unexpected outcome of creating this blog has been the way that it has gotten me in touch with dozens of people whom I probably would not have met otherwise. I meet other professional historians all the time, but the blog has introduced me to brilliant and energetic people in libraries, museums, and archives, in literary studies and computer science, both within and outside of academia. Given the balkanization of the academy and its distance from “the real world” I have no idea how I would have met these fascinating people otherwise, or profited from their comments and suggestions. I have never been to a conference where someone has come up to me out of the blue and said, “Hi Dan, I’m so-and-so and I wanted to introduce myself because I loved the article you wrote for such-and-such journal.” Yet I regularly have readers of this blog approach me out of the blue, and in turn I seek out others at meetings merely because of their blogs. These experiences have made me feel that blogging has the potential to revitalize academia by creating more frequent interactions between those in a field and, perhaps more important, between those in different fields. So: thanks for reading the blog and for getting in touch!

Next up in the anniversary edition of “Creating a Blog from Scratch”: it’s taken me a year, but I finally weigh in on tagging.

Part 7: Tags, What Are They Good For?

Understanding the 2006 DMCA Exemptions

If Emerson was correct that genius is the ability to hold two contradictory ideas in the mind simultaneously, the American legal system just gained enough IQ points to join Mensa. Already, our collective legal mind was showing its vast intelligence trying to square the liberties of the people with the demands of government and industry. For instance, in Alaska you can possess up to an ounce of marijuana legally, but can be charged with a felony for possessing more than four ounces or for selling the “illegal” drug. (Lesson: don’t buy in bulk.) If you’re gay you can legally join the United States military, but you can’t talk about being gay, because that’s illegal and you will be discharged. And now, more pretzel logic: as of last week, it is illegal to break the copy protection on a DVD or distribute “circumvention” technologies, but if you’re a film or media studies professor you can break the copy protection for pedagogical uses. But how, you might ask, would a film or media studies professor with no background in encryption, programming, and hacking crack the copy protection on a DVD?

Good question. It was the first question I posed last weekend to Peter Decherney as my addled brain tried to grasp the significance of the new exemptions to the DMCA granted by the Librarian of Congress, James Billington. Peter is a professor at the University of Pennsylvania and deserves all of our thanks for spearheading the effort to put some cracks into the DMCA. (Full disclosure: Peter is a very good friend. But I still think—objectively—that he deserves an enormous amount of praise for persevering in the face of the MPAA’s lawyers to get the exemption for film professors. He told me the MPAA doggedly fights every proposed exemption, reasonable or not, so this was a long way from a trivial exercise.) It’s unfortunate to see many initial reactions to the new exemptions lamenting that they are only for three years or that they merely enshrine the DMCA’s destruction of fair use principles.

Well, sure. These new exemptions are indeed limited in scope and in an ideal world Peter and his colleagues should not have had to ask for these rights or fight for months to get them. (And then do the process all over again in 2009.) But there are a few bright spots here for those of us who believe that the balance between the rights of copyright owners and users of their content has swung much too far in the direction of the former.

First, as Peter pointed out to me, the exemption for film and media studies professors is the first time an exemption has been carved out for a class of people. It’s not hard to imagine how this opens the door for other groups of people to evade the strict rules of the DMCA. Most obviously, many of my colleagues in the History and Art History department at George Mason University use film clips in their courses. Shouldn’t they be exempt too? Shouldn’t a psychology professor who wants to store clips from films on her hard drive to show in class as illustrations of mental phenomena be allowed to do so? The MPAA will undoubtedly say no every step of the way, but you can see how a well-reasoned and reasonable march of exemptions will begin to restore some sanity to the copyright regime. Academia could merely be the beachhead.

Second, and related to the first point, getting a DMCA exemption is a daunting task, especially for those of us without legal training. Peter and his colleagues have provided a blueprint for academics seeking other exemptions in the future. It would be good if they could pass along their wisdom. Thankfully, they have already set up a website that will serve as a clearinghouse of information for the “educational use of media” exemption. A plainspoken description of how they got the exemption in the first place would be helpful as well.

Finally, the new exemptions have raised the odd contradiction I mentioned in the introduction to this piece, a contradiction that helpfully highlights the absurdity of current law. Film professors can now legally proceed in their work (saving clips from DVDs for their classes), except that they have to break the law to do this legal work (by encouraging and participating in an illegal market for cracking software). Similar absurdities abound in the digital realm; recently the MPAA went after a company that fills iPods with video from DVDs the iPod owners have bought.

So now the question becomes: Does our legal system follow the dictates of Emerson’s genius, or of common sense? And how do those moderate pot smokers in Alaska get their marijuana, anyway?

Intelligence Analysts and Humanities Scholars

About halfway through the Chicago Colloquium on Digital Humanities and Computer Science last week, the always witty and insightful Martin Mueller humorously interjected: “I will go away from this conference with the knowledge that intelligence analysts and literary scholars are exactly the same.” As the chuckles from the audience died down, the core truth of the joke settled in—for those interested in advancing the still-nascent field of the digital humanities, are academic researchers indeed becoming clones of intelligence analysts by picking up the latter’s digital tools? What exactly is the difference between an intelligence analyst and a scholar who is scanning, sorting, and aggregating information from massive electronic corpora?

Mueller’s remark prods those of us exploring the frontiers of the digital humanities to do a better job describing how our pursuit differs from other fields making use of similar computational means. A good start would be to highlight that while the intelligence analyst sifts through mountains of data looking for patterns, anomalies, and connections that might be (in the euphemistic argot of the military) “actionable” (when policy makers piece together bits of intelligence and decide to take action), the digital humanities scholar should be looking for patterns, anomalies, and connections that strengthen or weaken existing theories in their field, or produce new theories. In other words, we not only uncover evidence, but come to overarching conclusions and make value judgments; we are at once the FBI, the district attorney, the judge, and the jury. (Perhaps the “National Intelligence Estimates” that are the highest form of synthesis in the intelligence community come closest to what academics do.)

The gentle criticism I gave to the Chicago audience at the end of the colloquium was that too many presentations seemed one (important) piece away from completing this interpretive whole. Through extraordinary ingenuity, a series of panelists showed how digital methods can determine the gender of Shakespeare’s interlocutors, show more clearly the repetition of key phrases in Gertrude Stein’s prose, or more clearly map the ideology and interactions of FDR’s advisors during and after Pearl Harbor. But the real questions that need to be answered—answers that will make other humanities scholars stand up and take notice of digital methods—are, of course, how the identification of gender reshapes (or reinforces) our views of Shakespeare’s plays, how the use of repetition changes our perspectives on Gertrude Stein’s writings, or how a better understanding of presidential advisors alters our historical narrative of America’s entry into the Second World War.

In Chicago, I tried to give this critical, final moment of insight reached through digital means a name—the “John Snow moment”—in honor of the Victorian physician who discovered the cause of cholera by using a novel research tool unfamiliar to traditional medical science. Rather than looking at symptoms or other patient information on a case-by-case basis as a cholera outbreak killed and sickened hundreds of people in London in 1854, Snow instead mapped all incidences of the disease by the street addresses of the patients, thus quickly discovering that the cases clustered around a Soho water pump. The city council removed the water pump’s handle, quickly curtailing the disease and inaugurating a new era of epidemiology. Snow proved that cholera was a waterborne disease. Now that’s actionable intelligence.
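The logic of Snow’s map can even be sketched computationally: assign each case to its nearest pump and count the clusters. (The coordinates below are invented for illustration, not Snow’s actual data.)

```python
from collections import Counter
from math import hypot

pumps = {"Broad Street": (0.0, 0.0), "Rupert Street": (5.0, 5.0)}  # invented coordinates
cases = [(0.2, 0.1), (0.1, -0.3), (4.8, 5.1), (0.0, 0.4)]          # invented case locations

def nearest_pump(case):
    """Name of the pump closest to a case's (x, y) location."""
    return min(pumps, key=lambda name: hypot(case[0] - pumps[name][0],
                                             case[1] - pumps[name][1]))

clusters = Counter(nearest_pump(c) for c in cases)
suspect = clusters.most_common(1)[0][0]  # the pump with the most nearby cases
```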

What can digital scholars do to reach this level of insight? A key first step, reinforced by my experience in Chicago, is that academics interested in the power of computational methods must work to forge tools that satisfy their interpretive needs rather than simply accepting the tools that are currently available from other domains of knowledge, like intelligence. Ostensibly the Chicago Colloquium was about bringing together computer scientists and humanities scholars to see how we might learn from each other and enable new forms of research in an age of millions of digitized books. But as I noted in my remarks on the closing panel, too often this interaction seemed like a one-way street, with humanities scholars applying existing computer science tools rather than engaging the computer scientists (or programming themselves) to create new tools that would be better suited to their own needs. Hopefully such new tools will lead to more John Snow moments in the humanities in the near future.

Zotero Needs Your Help, Part II

In my prior post on this topic, I mentioned the (paid) positions now available at the Center for History and New Media to work on and promote Zotero. (By the way, there’s still time to contact us if you’re interested; we just started reviewing applications, but hurry.) But Zotero is moving ahead on so many fronts that its success depends not only on those working on it full time, but also those who appreciate the software and want to help out in other ways. Here are some (unpaid, but feel-good) ways you can get involved.

If you are a librarian, instructional technologist, or anyone else on a campus or at an institution that uses citation software like EndNote or RefWorks, please consider becoming an informal campus representative for Zotero. As part of our effort to provide a free competitor to these other software packages, we need to spread the word, have people give short introductions to Zotero, and generally serve as local “evangelists.” Already, two dozen librarians who have tried Zotero and think it could be a great solution for students, staff, and faculty on their campuses have volunteered to help out in this role. If you’re interested in joining them, please contact campus-reps@zotero.org.

We are currently in the process of writing up instructions (and possibly creating some additional software) to make creating Zotero translators and citation style formatters easier. Translators are small bits of code that enable Zotero to recognize citation information on a web page; we have translators for specific sites (like Amazon.com) as well as broader ones that recognize certain common standards (like MARC records or embedded microformats). Style formatters take items in your Zotero library and reformat them into specific disciplinary or journal standards (e.g., APA, MLA, etc.). Right now creating translators takes a fair amount of technical knowledge (using things like XPath and JavaScript), so if you’re feeling plucky and have some software skills, email translators@zotero.org to get started on a translator for a specific collection or resource (or you can wait until we have better tools for creating translators). If you have some familiarity with XML and citation formatting, please contact styles@zotero.org if you’re interested in contributing a style formatter. We figure that if EndNote can get their users to contribute hundreds of style formatters for free, we should be able to do the same for translators and styles in the coming year.
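To give a flavor of what a translator does, here is a toy sketch in Python rather than the XPath/JavaScript that Zotero translators actually use; the page markup and field names below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A toy catalog page; a real translator matches the markup of a specific site.
page = """<html><body>
  <span class="title">Equations from God</span>
  <span class="author">Daniel J. Cohen</span>
</body></html>"""

def scrape_citation(xhtml):
    """Pull citation fields out of class-tagged spans on a known page layout."""
    root = ET.fromstring(xhtml)
    item = {}
    for span in root.iter("span"):
        field = span.get("class")
        if field in ("title", "author"):
            item[field] = span.text
    return item
```

A real translator does the same kind of pattern-matching against a live library or bookstore page, then hands the recognized fields to Zotero for storage.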

One of our slogans for Zotero is “Citation management is only the beginning.” That will become increasingly obvious over the coming months as third-party developers (and the Zotero team) begin writing what we’re calling utilities, or little widgets that use Zotero’s location in the web browser to send and receive information across the web. Want to pull out all of the place names in a document and map them on Google Maps? Want to send del.icio.us a notice every time you tag something in Zotero? Want to send text from a Zotero item to an online translation service? All of this functionality will be relatively trivial in the near future. If you’re familiar with some of the browser technologies we use and that are common with Web 2.0 mashups and APIs and would like to write a Zotero utility, please contact utilities@zotero.org.

More generally, if you are a software developer and either would like to help with development or would like to receive news about the technical side of the Zotero project, please contact dev@zotero.org.

With Firefox 2.0 apparently going out of beta into full release next Thursday (October 26, 2006), it’s a great time to start talking up the powerful combination of Firefox 2.0 and Zotero (thanks, Lifehacker and the Examiner!).

Zotero Is Here

For those who haven’t heard yet (it’s amazing how quickly the word spreads through the blogosphere and beyond): On October 5, 2006, at 10:47 p.m. ET, the public beta of Zotero went live on our spiffy new site. In addition to releasing the software to all comers, we’ve also expanded the documentation and set up areas of the site for Zotero users and those who want to build upon the software. If you have a question or want to discuss Zotero, we have some forums too. A few other release notes:

Remember that you’ll need Firefox 2.0 to run Zotero. Fortunately, Mozilla has just posted release candidate 2 of Firefox 2.0, which means that the final version is imminent and there’s virtually no reason not to upgrade. (If you have other Firefox extensions that don’t work with Firefox 2.0, the creators of those extensions had better get to work.)

Already, coverage of the launch has been fairly extensive, with some early reviews going up on blogs. Check out our home page for a live (and unfiltered) feed of what people are saying.

If you want some behind-the-scenes discussion about Zotero, check out Dan Chudnov’s podcast interview with me, Josh Greenberg, and Dan Stillman. The podcast has several exclusives, including the other names Zotero could have had (and why we went with an Albanian word).

As a beta release, Zotero still has a few rough edges, and undoubtedly it won’t please everyone on every matter. But we think it’s pretty darn good for a 1.0 beta and the basis for even better releases and features in the near future. And more important, as our unofficial motto from Voltaire at the Center for History and New Media asserts, “The perfect is the enemy of the good.” Had we gone for perfection, no one would be using the software today (or even next year). Zotero is actually shipping, and it’s free. So give it a try and tell your friends.

More here soon.

Zotero Needs Your Help, Part I

We’re ramping up here at Zotero headquarters for the big release of the public beta (it should be out next week). But we’re already thinking ahead to great new features—including nifty ways to share and collaborate, as I mentioned in my last post on Zotero—and to building not only a large and active user community, but also a community to help disseminate, support, and further develop this free and open software. In short, we need your help! In this post I’ll let you know about the official George Mason University announcements for full-time positions at CHNM (sorry for the officialese and also for the repetitiveness; it’s necessary to post these as they are recorded with GMU Human Resources). In the next post, I’ll let you know about other opportunities to help out.

Senior Programmer: The Center for History & New Media (http://chnm.gmu.edu) at George Mason University is seeking a programmer to work primarily on Zotero (http://www.zotero.org), an open source bibliographic management and note-taking tool for the Firefox web browser. Applicants should have an advanced knowledge of JavaScript, XUL, XML, CSS, and other technologies critical for Firefox development, such as XPCOM. Applicants should also have a working knowledge of PHP, Java, and MySQL, and have solid command-line Linux skills. Ability to work in a team is very important. This is a grant-funded, two-year position at the Center for History and New Media (http://chnm.gmu.edu), which is known for innovative work in digital media. Located in Fairfax, Virginia, CHNM is 15 miles from Washington, DC, and accessible by public transportation. Please send a cover letter, resume, and three references to chnm@gmu.edu with subject line “senior programmer.” Applications without a cover letter and resume will not be considered. The cover letter should include salary requirements and a description of relevant programming projects and experience. We will begin considering applications on 10/15/2006 and continue until the position is filled.

Technology Outreach Coordinator: The Center for History & New Media at George Mason University is seeking a technology outreach coordinator for Zotero (http://www.zotero.org), an open source bibliographic management and note-taking tool for the Firefox web browser. The technology outreach coordinator will be responsible for building alliances with scholarly organizations and libraries, encouraging scholars to try Zotero, developing and maintaining user documentation, and building awareness of this next-generation research tool. We are looking for an energetic, well-organized individual with excellent written and oral communication skills. Applicants should have at least some graduate training in library science or one of the humanities or social science disciplines, as well as familiarity with relevant technologies (e.g., XML, RDF, metadata standards, and Firefox extensions) and scholarly research practices. This is a grant-funded, two-year position at the Center for History and New Media (http://chnm.gmu.edu) at George Mason University, which is known for innovative work in digital media. Located in Fairfax, Virginia, CHNM is 15 miles from Washington, DC, and accessible by public transportation. Please send a letter of application, a CV or resume, and three references to chnm@gmu.edu with the subject line “Technology Outreach Coordinator.” We will begin considering applications October 15, 2006, and continue until the position is filled.

Web Designer: The Center for History & New Media at George Mason University is seeking a web designer and developer. We require an energetic and well-organized individual to work on a variety of innovative, web-based history projects. This position is particularly appropriate for someone with a combined interest in technology and history. The successful applicant will be able to create mockups and wireframes for historical, cultural, and educational websites and bring those ideas to fruition using current web development standards. Fluency with current web design technologies (including the ability to hand code HTML, CSS, and JavaScript) and familiarity with web accessibility and web usability standards are essential. Some familiarity with web-database technologies (MySQL, PHP), contemporary trends in web development (e.g., AJAX, DHTML and DOM Scripting, Rails), and multimedia and graphic design applications (Flash, including ActionScript, Final Cut Pro, Photoshop, Illustrator) is a plus, as is prior work in history or the humanities. This is a grant-funded, two-year position at the Center for History and New Media (http://chnm.gmu.edu), which is known for innovative work in digital media. Located in Fairfax, Virginia, CHNM is 15 miles from Washington, DC, and accessible by public transportation. Please send a resume, three references, links to prior web/multimedia work, and a cover letter describing your technology background and any interest in history to chnm@gmu.edu with subject line “Web Designer.” Salary: $32–40K plus excellent benefits. We will begin considering applications on 10/15/2006 and continue until the position is filled.

About CHNM: Since 1994, the Center for History and New Media at George Mason University has used digital media and computer technology to change the ways that people—scholars, students, and the general public—learn about and use the past. We do that by bringing together the most exciting and innovative digital media with the latest and best historical scholarship. We believe that serious scholarship and cutting-edge multimedia can be combined to promote an inclusive and democratic understanding of the past, as well as a broad historical literacy that fosters deep understanding of the most complex issues about the past and present. CHNM has been internationally recognized for its cutting-edge work in history and new media. Located in Fairfax, Virginia, CHNM is 15 miles from Washington, DC, and is accessible by public transportation.

Please also see the second post in this series for other exciting opportunities to help out and extend Zotero.

NEH Digital Humanities Start-Up Grants

Brett Bobley, the CIO at the National Endowment for the Humanities and the chair of the new (and very exciting) Digital Humanities Initiative, wrote to me to ask for some publicity for their programs, especially for the Digital Humanities Start-Up Grants. Happy to do so. (Undoubtedly I’ll apply for this at some point in the future and could use less competition, so I probably should keep quiet…but duty and dedication to this blog’s audience call.) The Start-Up Grants seem like a great way to initiate a project like Zotero. From Brett:

Digital Humanities Start-Up Grants

Deadline: November 15, 2006 & April 3, 2007

Digital Humanities Start-Up Grants is the first program under the NEH’s new Digital Humanities Initiative. The name “Start-Up Grant” is deliberately evocative of the technology start-up—a company like an Apple Computer or a Google that took a brilliant idea and, with a small amount of seed money, was able to grow it into a new way of doing business. NEH’s Digital Humanities Start-Up Grants will encourage scholars with bright new ideas and provide the funds to get their projects off the ground. Some projects will be practical, others completely blue sky. Some will fail while others will succeed wildly and develop into important projects. But all will incorporate new ways of studying the humanities.

The cross-divisional nature of the Start-Up Grants is key. Applicants don’t need to be concerned with determining exactly which NEH division or program is best suited for their projects. Their job is to be innovative, and the NEH’s job is to provide the funding they need to be successful. NEH staff will work with potential applicants in the pre-application stages to help them craft their submissions.

NEH Digital Humanities Start-Up Grants are offered for the planning or initial stages of digital humanities initiatives in all areas of NEH concern: research, publication, preservation, access, teacher training, and dissemination in informal or formal educational settings. Applications should describe the concept or problem that is being addressed, the plan of work, the experience of the project team as it relates to the plan, and the intended outcomes of both the grant and the larger project that the grant will initiate.

Application guidelines for this program are available at:

http://www.neh.gov/grants/guidelines/digitalhumanitiesstartup.html

General information about the NEH’s Digital Humanities Initiative is available at:

http://www.neh.gov/grants/digitalhumanities.html

Questions? Please contact: dhi@neh.gov