Category: Software

Creating a Blog from Scratch, Part 5: What is XHTML, and Why Should I Care?

In prior posts in this series (1, 2, 3, and 4), I described with some glee my rash abandonment of common blogging software in favor of writing my own. For my purposes there seemed to be some key disadvantages to these popular packages, including an overemphasis on the calendar (I just saw the definition of a blog at the South by Southwest Interactive Festival—“a page with dated entries”—which, to paraphrase Woody Allen, is like calling War and Peace “a book about Russia”), a sameness to their designs, and comments that are rarely helpful and often filled with spam. But one of the greatest advantages of recent blog software packages is that they generally write standards-compliant code. More specifically, blog software like WordPress automatically produces XHTML. Some of you might be asking, what is XHTML, and who cares? And why would I want to spend a great deal of effort ensuring that this blog complied strictly with this language?

The large digital library contingent that reads this blog could probably enumerate many reasons why XHTML compliance is important, but I had two reasons in mind when I started this blog. (Actually, I had a third, more secretive reason that I’ll mention first: Roy Rosenzweig and I argue in our book Digital History that adhering to XHTML will likely be critical for digital humanists in the future—and I don’t want to be accused of being a hypocrite.) For those for whom web acronyms are Greek, XHTML is a sibling of XML, a more rigorously structured and flexible language than the HTML that underlies most of the web. XHTML is better prepared than HTML to be platform-independent; because it separates formatting from content, XHTML (like XML) can be reconfigured easily for very different environments (using, e.g., different style sheets). HTML, with formatting and content inextricably combined, for the most part assumes that you are using a computer screen and a web browser. Theoretically XHTML can be dynamically and instantaneously recast to work on many different devices (including a personal computer). This flexibility is becoming an increasingly important feature as people view websites on a variety of platforms (not just a normal computer screen, e.g., but cell phones or audio browsers for the blind). Indeed, according to the server logs for this blog, 1.6% of visitors are using a smart phone, PDA, or other means to read this blog, a number that will surely grow. In short, XHTML seems better prepared than regular HTML to withstand the technological changes of the coming years, and theoretically should be more easily preserved than older methods of displaying information on the web. For these and other reasons, a 2001 report commissioned by the Smithsonian recommended that the institution move from HTML to XHTML.

Of course, with standards compliance comes extra work. (And extra cost. Just ask webmasters at government agencies trying to make their websites comply with Section 508, the mandatory accessibility rules for federal information resources.) Aside from a brief flirtation with the what-you-see-is-what-you-get, write-the-HTML-for-you program Dreamweaver in the late 1990s, I’ve been composing web pages using a text editor (the superb BBEdit) for over ten years, so my hands are used to typing certain codes in HTML, in the same way you get used to a QWERTY keyboard. XHTML is not that dissimilar from HTML, but it still has enough differences to make life difficult for those used to HTML. You have to remember to close every tag; some attributes related to formatting are in strange new locations. One small example of the minor infractions I frequently trip up on when writing XHTML: the oft-used break tag to add a line break to a web page must “close itself” by adding a slash before the end bracket (not <br>, but <br />). But I figured doing this blog would give me a good incentive to start writing everything in strict XHTML.

Yeah, right. I clearly haven’t been paying enough attention to detail. The page you’re reading likely still has dozens of little coding errors that make it fail strict compliance with the World Wide Web Consortium’s XHTML standard. (If you would like a humbling experience that brings to mind receiving a pop quiz back from your third-grade teacher with lots of red ink on it, try the W3C’s XHTML Validator.) I haven’t had enough time to go back and correct all of those little missing slashes and quotation marks. WordPress users out there can now begin their snickering; their blog software does such mundane things for them, and many proudly (and annoyingly) display little “XHTML 1.0 compliant” badges on their sites. Go ahead, rub it in.

After I realized that it would take serious effort to bring my code up to code, so to speak, I sat back and did the only thing I could do: rationalize. I didn’t really need strict XHTML compliance because through some design sleight of hand I had already been able to make this blog load well on a wide range of devices. I learned from other blog software that if you put the navigation on the right rather than on the left, as most websites do, the body of each post shows up first on a PDA or smart phone. It also means that blind visitors don’t have to suffer through a long list of your other posts before getting to the article they want to read.
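
To make the sleight of hand concrete, here is a minimal sketch of the idea (not this site’s actual markup): the post body precedes the navigation in the XHTML source, and a pair of CSS floats moves the navigation to the right only for graphical browsers.

```html
<!-- A minimal sketch, not this site's actual markup: the post body comes
     first in the XHTML source, so audio browsers, PDAs, and phones reach
     the article before the navigation. -->
<div id="post">
  <h1>What is XHTML, and Why Should I Care?</h1>
  <p>In prior posts in this series...</p>
</div>
<div id="nav">
  <ul>
    <li><a href="/blog/">Recent posts</a></li>
  </ul>
</div>

<!-- In the style sheet, two floats put the navigation on the right for
     graphical browsers without changing the source order:

       #post { float: left;  width: 70%; }
       #nav  { float: right; width: 25%; }
-->
```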

As far as XHTML is concerned, I’ll be brushing up on that this summer. Unless I move this blog to WordPress by then.

Part 6: One Year Later

Creating a Blog from Scratch, Part 4: Searching for a Good Search

It often surprises those who have never looked at server logs (the detailed statistics about a website) that a tremendous percentage of site visitors come from searches. In the case of the Center for History and New Media, this is a staggering 400,000 unique visitors a month out of about one million. Furthermore, many of these visitors ignore a website’s navigation and go right to the site search box to complete their quest for information. While I’m not a big fan of consultants that tell webmasters to sacrifice virtually everything for usability, I do feel that searching has been undervalued by digital humanities projects, in part because so much effort goes into digitization, markup, interpretation, and other time-consuming tasks. But there’s another, technical reason too: it’s actually very hard to create an effective search—one, for instance, that finds phrases as well as single words, that is able to rank matches well, and that is easy to maintain through software and server upgrades. In this installment of “Creating a Blog from Scratch” (for those who missed them, here are parts 1, 2, and 3) I’ll take you behind the scenes to explain the pluses and minuses of the various options for adding a search feature to a blog, or any database-driven website for that matter.

There are basically four options for searching a website that is generated out of a database: 1) have the database do it for you, since it already has indexing and searching built in; 2) install another software package on your server that spiders your site, indexes it, and powers your search; 3) use an application programming interface (API) from Google, Yahoo, or MSN to power the search, taking search results from this external source and shoehorning them into your website’s design; 4) outsource the search entirely by passing search queries to Google, Yahoo, or MSN’s website, with a modifier that says “only search my site for these words.”

Option #1 seems like the simplest. Just create an SQL statement (a line of code in database lingo) that sends the visitor’s query to the database software—in the case of this blog, the popular MySQL—and have it return a list of entries that match the query. Unfortunately, I’ve been using MySQL extensively for five years now and have found its ability to match such queries less than adequate. First of all, until the most recent version of MySQL, it would not handle phrase searching at all, so you would have to strip quotation marks out of queries and fool the user into believing your site could do something that it couldn’t (that is, do a search like Google could). Secondly, I have found its indexing and ranking schemes to be far behind what you expect from a major search engine. Maybe this has changed in version 5, but for many years it seemed as if MySQL was using search principles from the early 1990s, where the number of times a word appeared on the page signified how well the page matched the query (rather than the importance of the place of each instance of the word on the page, or even better, how important the document was in the constellation of pages that contained that word). MySQL will return a fraction from 0 to 1 for the relevance of a match, but it’s a crude measure. I’m still not convinced, even with the major upgrades in version 5, that MySQL’s searching is acceptable for demanding users.
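
For the record, here is roughly what Option #1 looks like: a minimal PHP sketch using MySQL’s built-in full-text indexing, with table and column names invented for illustration.

```php
<?php
// A rough sketch of Option #1 (table and column names are invented for
// illustration): let MySQL's built-in full-text indexing do the work.
// It requires a MyISAM table with a FULLTEXT index, e.g.:
//   ALTER TABLE posts ADD FULLTEXT (title, body);

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

$query = mysql_real_escape_string($_GET['q']);

// MATCH ... AGAINST returns a relevance score (0 means no match at all),
// which is the "crude measure" mentioned above.
$sql = "SELECT id, title, MATCH (title, body) AGAINST ('$query') AS relevance
        FROM posts
        WHERE MATCH (title, body) AGAINST ('$query')
        ORDER BY relevance DESC";

$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    printf("<p><a href=\"/blog/post.php?id=%d\">%s</a></p>\n",
           $row['id'], htmlspecialchars($row['title']));
}
?>
```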

Option #2 is to install specialized search packages such as the open source ht://Dig on your server, point it to your blog (or website) and let it spider the whole thing, just as Google or Yahoo does from the outside. These software packages can do a decent job indexing and swiftly finding documents that seem more relevant than the rankings in MySQL. But using them obviously requires installing and maintaining another complicated piece of software, and I’ve found that spiders have a way of wandering beyond the parameters you’ve set for them, or flaking out during server upgrades. (Over the last few days, for instance, I’ve had two spiders request hundreds of posts from this blog that don’t exist. Maybe they can see into the future.) Anecdotally, I also think that the search results are better from commercial services such as Google or Yahoo.

I’ve become increasingly enamored of Option #3, which is to use APIs, or direct server-to-server communications, with the indices maintained by Google, Yahoo, or Microsoft. The advantage of these APIs is that they provide you with very high-quality search results and query handling (at least for Google and Yahoo; MSN is far behind). Ranking is done properly, with the most important documents (e.g., blog posts that many other bloggers link to or that you have referenced many times on your own site) coming up first if there are multiple hits in the search results. And these search giants have far more sophisticated ways of handling phrase searches (even long ones) and Boolean searches than MySQL. The disadvantage of APIs is that for some reason the indices made available to software developers are only a fraction of the size of the main indices for these search engines, and are only updated about once a month. So visitors may not find recent material, or some material that is ranked fairly low, through API searches. Another possibility for Option #3 is to use the API for a blog search engine, rather than a broad search engine. For instance, Technorati has a blog-specific search API. Since Technorati automatically receives a ping from my Atom feed every time I post (via FeedBurner), it’s possible that this (or another blog search engine) will ultimately provide a solid API-based search.
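
To give a sense of the mechanics of Option #3, here is a bare-bones PHP sketch. The endpoint and parameter names are placeholders rather than any particular vendor’s actual interface (each service has its own REST or SOAP API and developer key), but the pattern is the same: send the query plus a site restriction to the search engine’s servers, get back structured results, and reformat them to fit your own design.

```php
<?php
// Sketch of Option #3. The endpoint and parameter names below are
// placeholders, not any real vendor's API; substitute the actual REST or
// SOAP interface (and your developer key) for whichever service you use.

$endpoint = 'http://api.example-search-engine.com/websearch';   // hypothetical
$params   = array(
    'appid'   => 'YOUR-DEVELOPER-KEY',    // hypothetical parameter names
    'query'   => $_GET['q'],
    'site'    => 'example.org',           // restrict results to your own site
    'results' => 10,
);

// Ask the search engine's servers to run the query...
$xml = simplexml_load_file($endpoint . '?' . http_build_query($params));

// ...then reformat the structured results to match your own page design.
// (The element names here are invented; they depend on the service's
// response format.)
foreach ($xml->Result as $result) {
    printf("<p><a href=\"%s\">%s</a><br />%s</p>\n",
           htmlspecialchars((string) $result->Url),
           htmlspecialchars((string) $result->Title),
           htmlspecialchars((string) $result->Summary));
}
?>
```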

I’ve been experimenting with ways of getting new material into the main Google index swiftly (i.e., within a day or two rather than a month or two), and have come up with a good enough solution that I have chosen Option #4: outsourcing the search entirely to Google, by using their free (though unfortunately ad-supported) site-specific search. With little fanfare, this year Google released Google Sitemaps, which provides an easy way for those who maintain websites, especially database-driven ones, to specify where all of their web pages are using an XML schema. (Spiders often miss web pages generated out of a database because there are often so many of them, and some of these pages may not be linked to.) While not guaranteeing that everything in your sitemap will be crawled and indexed, Google does say that it makes it easier for them to crawl your site more effectively. (By the way, Google’s recent acquisition of 5 percent of AOL seems to have been, at least ostensibly, very much about providing AOL with better crawls, thus upping the visibility of their millions of pages without messing with Google’s ranking schemes.) And—here’s the big news if you’ve made it this far—I’ve found that having a sitemap gets new blog posts into the main Google index extremely fast. Indeed, usually within 24 hours of submitting a new post Google downloads my updated sitemap (created automatically by a PHP script I’ve written), sees the new URL for the post, and adds it to its index. This means that I can very effectively use Google’s main search engine for this blog, although because I’m not using the API I can’t format the results page to match the design of my site exactly.
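
The sitemap itself is just a short XML file listing every URL on the site, and the script that builds it can be very small. My generator is a bit more involved, but a stripped-down sketch of the idea looks something like this (the table, column, and domain names are invented, and you should check Google’s documentation for the current schema URL, since it has changed over time):

```php
<?php
// Stripped-down sketch of a sitemap generator. Table, column, and domain
// names are invented for illustration, and the schema URL should be checked
// against Google's current documentation.

header('Content-type: text/xml');

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

// One <url> element per post, so the crawler does not have to discover
// database-generated pages by following links.
$result = mysql_query("SELECT url_slug, posted FROM posts ORDER BY posted DESC");
while ($row = mysql_fetch_assoc($result)) {
    echo "  <url>\n";
    echo '    <loc>http://example.org/blog/posts/'
         . htmlspecialchars($row['url_slug']) . "</loc>\n";
    echo '    <lastmod>' . date('Y-m-d', strtotime($row['posted'])) . "</lastmod>\n";
    echo "  </url>\n";
}

echo "</urlset>\n";
?>
```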

One final note, and I think an important one for those looking to increase the visibility of their blog posts (or any web page created from a database) in Google’s search results: have good URLs, i.e., ones with important keywords rather than meaningless numbers or letters. Database-driven sites often have poor URLs featuring an ugly string of variables, which is a shame, since server technology (such as Apache’s mod_rewrite) allows webmasters to replace these variables with more memorable words. Moreover, Google, Yahoo, and other search engines clearly favor keywords in URLs (very apparent when you begin to work with Google’s Web API), assigning them a high value when determining the relevance of a web page to a query. Some blog software automatically creates good URLs (like Blogger, owned by Google), while many other software packages do not—typically emphasizing the date of a post in the URL or the page number in the blog. For my own blogging software, I designed a special field in the database just for URLs, so I can craft a particularly relevant and keyword-laden string. Mod_rewrite takes care of the rest, quietly mapping this string onto the ID number the database uses to generate the page you’re reading.
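
For the curious, the mechanics look roughly like the following sketch (invented names, not my production code): a rewrite rule in Apache quietly hands the keyword-laden string to a PHP script, which looks it up in that special database field and pulls out the right post.

```php
<?php
// Sketch of keyword URLs (invented names, not my production code).
// In Apache's configuration or an .htaccess file, a rule such as
//
//   RewriteEngine On
//   RewriteRule ^blog/posts/([a-z0-9_-]+)/?$ /blog/show_post.php?slug=$1 [L]
//
// hands the human-readable string to this script, which translates it into
// the right post behind the scenes.

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

$slug = mysql_real_escape_string($_GET['slug']);

$result = mysql_query("SELECT id, title, body FROM posts WHERE url_slug = '$slug'");

if ($row = mysql_fetch_assoc($result)) {
    // Hand the post off to the page template.
    echo '<h1>' . htmlspecialchars($row['title']) . '</h1>';
    echo $row['body'];
} else {
    header('HTTP/1.0 404 Not Found');
    echo '<p>No such post.</p>';
}
?>
```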

For many reasons, including making it accessible to alternative platforms such as audio browsers and cell phones, I wanted to generate this page in strict XHTML, unlike my old website, which had poor coding practices left over from the 1990s. Unfortunately, as the next post in this series details, I failed terribly in the pursuit of this goal, and this floundering made me think twice about writing my own blogging software when existing packages like WordPress will generate XHTML for you, with no fuss.

Part 5: What is XHTML, and Why Should I Care?

Creating a Blog from Scratch, Part 3: The Double Life of Blogs

In the first two posts in this series, I discussed the origins of blogs and how they led to certain elements in popular blog software that were in some cases good and in others bad for my own purposes—to start a blog that consisted of short articles on the intersection of digital technology, the humanities, and related topics (rather than my personal life or links with commentary). What I didn’t realize as I set about writing my own blog software from scratch for this project was that in truth a blog leads two lives: one as a website and another as a feed, or syndicated digest of the more complete website. Understanding this double life and the role and details of RSS feeds led to further thoughts about how to design a blog, and how certain choices are encoded into blogging software. Those choices, as I’ll explain in this post, determine to a large extent what kind of blog you are writing.

Creating a blog from scratch is a great first project for someone new to web programming and databases. It involves putting things into a database (writing a post), taking them out to insert into a web page, and perhaps coding some secondary scripts such as a search engine or a feed generator. It stretches all of the scripting muscles without pulling them. And in my case, I had already decided (as discussed in the prior post) I didn’t need (or want) a lot of bells and whistles, like a system for allowing comments or trackback/ping features. So I began to write my blogging application with the assumption that it would be a very straightforward affair.

Indeed, at first designing the database couldn’t have been simpler. The columns (as database fields are called in MySQL) were naturally an ID number, a title, the body of the post (the text you’re reading right now), and the time posted (while I disparaged the emphasis on time in my last post, I thought it was important enough to keep this field for reference). But then I began to consider two things that were critical and that I thought popular blogging software did well: automatically generating an RSS feed (or feeds) and—for some blog software—creating URLs for each post that nicely contain keywords from the title. This latter feature is very important for visibility in search engines (the topic of my next post in this series).
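
Concretely, that first pass amounted to little more than this (a sketch; my actual column names and types may have differed slightly):

```sql
-- A sketch of that first, minimal schema (actual column names and types on
-- this blog may have differed slightly).
CREATE TABLE posts (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- ID number for each post
  title  VARCHAR(255) NOT NULL,                 -- the post's title
  body   TEXT NOT NULL,                         -- the text you are reading now
  posted DATETIME NOT NULL,                     -- time posted, kept for reference
  PRIMARY KEY (id)
);
```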

I knew a lot about search engines but very little about RSS feeds, and when I started to think about what my blogging application would need to autogenerate its own feed, I realized the database had to be slightly more elaborate than my original schema. I had of course heard of RSS before, and the idea of syndication. But before I began to think about this blog and write the software that drives it, I hadn’t really understood its significance or its complexity. It’s supposed to be “Really Simple Syndication,” as one of the definitions for the acronym RSS asserts, and indeed the XML schemas for various RSS feeds are relatively simple.

But they also contain certain assumptions. For instance, all RSS feeds have a description of each blog post (in the Atom feed, it’s called the “summary”), but these descriptions vary from feed to feed and among different RSS feed types. For some it is the first paragraph of the post, for others the first 100 characters, and for still others it’s a specially crafted “teaser” (to use the TV news lingo for lines like “Coming up, what every parent needs to know to prevent a dingo from eating their baby”). Along with the title of the post, this snippet is generally the only thing many people see—the people who don’t visit your blog’s website but only scan it in syndication.

So what kind of description did I want, and how to create it? I liked the idea of an autogenerated snippet—press “post” and you’re done. On the other hand, choosing a random number of characters out of a hat that would work well for every post seemed silly. I’m used to writing abstracts for publications, so that was another possibility. But then I would have to do even more work after I finished writing a post. And I also needed to factor in that the description had to prod people to look at the whole post on my website, something the “teaser” people understood. So I decided to compromise and leave part to the code and part to me: I would have the software take the first paragraph of the entire post and use that as the default summary for the feed, but I had the software put a copy of this paragraph in the database so I could edit it if I wanted to. Automated but alterable. And I also decided that I needed to change my normal academic writing style to have more of an enticing end to every first paragraph. The first paragraph would be somewhere between an abstract and a teaser.
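
In code, the “automated but alterable” summary is only a few lines. Here is a sketch of the idea (names invented for illustration): when a post is saved, its first paragraph is copied into a separate summary column, which I can then rewrite by hand if the default does not work.

```php
<?php
// Sketch of the "automated but alterable" summary (invented names): when a
// post is saved, its first paragraph becomes the default feed description,
// stored in its own column so it can be edited by hand later.

function default_summary($body)
{
    // Treat everything up to the first blank line as the first paragraph.
    $paragraphs = preg_split('/\n\s*\n/', trim($body));
    return $paragraphs[0];
}

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

$title   = mysql_real_escape_string($_POST['title']);
$body    = mysql_real_escape_string($_POST['body']);
$summary = mysql_real_escape_string(default_summary($_POST['body']));

// The summary starts out as a copy of the first paragraph but lives in its
// own column, so rewriting it never touches the post itself.
mysql_query("INSERT INTO posts (title, body, summary, posted)
             VALUES ('$title', '$body', '$summary', NOW())");
?>
```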

Having made this decision and others like it for the other fields in the feed (should I have “channels”? why does there have to be an “author” field in a feed for a solo blog?), I had to choose which feed to create: RSS 1.0, RSS 2.0, Atom? Why there are so many feed types somewhat eludes me; I find it weird to go to some blogs that have little “chicklets” for every feed under the sun. I’ve now read a lot about the history of RSS but it seems like something that should have become a single international standard early on. Anyway, I wanted the maximum “subscribability” for my blog but didn’t want to suffer writing a PHP script to generate every single kind of feed.

So, time for outsourcing. I created a single script to generate an Atom 1.0 feed (I think the best choice if you’re just starting out), which is picked up by FeedBurner and recast into all of those slightly incompatible other feed types depending on what the subscriber needs.
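
For anyone writing such a script themselves, here is a bare-bones sketch of what an Atom 1.0 generator can look like (the URLs and names are placeholders, and my own script does a bit more than this):

```php
<?php
// Bare-bones sketch of an Atom 1.0 generator (URLs and names are
// placeholders, and the real script does a bit more than this). FeedBurner
// fetches this file and recasts it into the other feed formats.

header('Content-type: application/atom+xml');

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

$result = mysql_query("SELECT url_slug, title, summary, posted
                       FROM posts ORDER BY posted DESC LIMIT 15");

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<feed xmlns="http://www.w3.org/2005/Atom">' . "\n";
echo "  <title>Example Blog</title>\n";
echo "  <id>http://example.org/blog/</id>\n";
echo "  <link href=\"http://example.org/blog/\" />\n";
echo '  <updated>' . date('c') . "</updated>\n";
echo "  <author><name>Example Author</name></author>\n";

while ($row = mysql_fetch_assoc($result)) {
    $url = 'http://example.org/blog/posts/' . $row['url_slug'];
    echo "  <entry>\n";
    echo '    <title>' . htmlspecialchars($row['title']) . "</title>\n";
    echo "    <id>$url</id>\n";
    echo "    <link href=\"$url\" />\n";
    echo '    <updated>' . date('c', strtotime($row['posted'])) . "</updated>\n";
    // The summary is the first paragraph of the post (or an edited version of it).
    echo '    <summary>' . htmlspecialchars($row['summary']) . "</summary>\n";
    echo "  </entry>\n";
}

echo "</feed>\n";
?>
```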

This would not be the first time I would get lazy and outsource. In the next part of this series, I discuss the different ways one can provide search for a blog, and why I’m currently letting Google do the heavy lifting.

Part 4: Searching for a Good Search

Creating a Blog from Scratch, Part 2: Advantages and Disadvantages of Popular Blog Software

In the first post in this series I briefly recounted the early history of blogs (all of five years ago) and noted how many of their current uses have diverged from two early incarnations (as a place to store interesting web links and as the online equivalent of a diary). Unfortunately, these early, dominant forms gave rise to existing blog software that, at least in my mind, is problematic. This “encoding” of original purposes into the basic structure of software is common in software development, and it often leads to features and configurations in later releases that are undesirable to a large number of users. In this post, I discuss the advantages and disadvantages of common blog packages—often deeply encoded into the software.

There are many good reasons to use popular blog software like Movable Type, Blogger, or WordPress, almost too many to mention in this space. Here are some of the most important reasons, many of them obvious and others perhaps less so:

  • From a single download or by signing up with a service, you get a high level of functionality immediately and can focus on the content of your blog rather than its programming.
  • Their web designs make them instantly recognizable as the genre “blog,” thus making new visitors feel comfortable. For instance, most of them list posts in reverse chronological order, with “archives” that contain posts segmented by months and calendars marking days on which you have recently posted.
  • They allow people with little time or technical expertise to generate a site with well-formed, standards-compliant web code (most recently XHTML).
  • They automatically generate an RSS feed.
  • They have large user bases and active developers, which makes for relatively quick responses to annoyances such as blog spam.
  • They have lots of neat “social” features, such as feedback mechanisms (e.g., comments), and tracking (to see who has linked to one of your posts).
  • Some blog software automatically creates relatively good URLs (more on that in a later post in this series, and why good URLs are important to a blog).
  • Some blog services allow you to post via email and phone in addition to using a web browser.

Some of the disadvantages of popular blog software are merely the flip side of some of these advantages:

  • Even with the many templates that blog packages come with, their web designs make most blogs look alike. Yes, you can easily figure out that a site is a blog, but on the other hand they begin to blend together in the mind’s eye. The web is made for variety, not sameness. In addition, you really have to work hard to fit a blog seamlessly into a broader site.
  • The tyranny of the calendar. There’s too much attention to chronology rather than to content and the associations among that content. You can almost hear your blog software saying,
    “Boy, Dan had a pretty thin November, posting-wise,” taunting you with that empty calendar, or calling attention to the fact that your last post was “56 days ago.” Quality should triumph over quantity or frequency. Taking the emphasis off of time—perhaps not entirely, but a great deal—seemed to me to be a good first step for my own blog software (you’ll note that I only have a greyed-out date below the big red headline and a tiny “date string” in the buttons for each post). Obviously it makes sense to have recent posts highest on the page, but there may also be older posts that are still relevant or popular with visitors that you would like to highlight or reshuffle back into the mix. “Categories” have helped somewhat in this regard, and now post tagging (folksonomy) presents more hope. But I want to have full control over the position of my posts, recategorize them at will, have breakouts (like this series), different visual presentations, etc. And no thank you to the calendars or monthly archives.
  • Large installed user bases, as those who use Microsoft products will tell you, lead to unsavory attacks. Note the enormous proliferation of blog spam in the last year, mostly done by automated programs that know exactly how to find WordPress comment fields, Movable Type comment fields, etc. Sure, there are now mechanisms for defending against these attacks, but when you really think about it…
  • The comment feature of blogs is vastly overrated anyway. My back-of-the-envelope calculation is that 1% of blog comments are useful to other readers. A truly important comment will be emailed to the writer of the blog, as I encourage readers to do at the end of every post. Moreover, increasingly, a better place for you to comment on someone’s blog is on your own blog, with a link to their post. Indeed, that’s what Technorati and other blog search engines have figured out, and now you can acquire a feed of comments about your blog from these third parties without opening up your blog to comment spam. (This also eliminates the need for trackback technology in your blog software.) So: no comments on my blog. Sorry. Don’t need the hassle of deleting even the occasional blog spam, and, as readers of this blog have already done in droves (thanks!), you can email me if you need to. I’ll be happy to post your comments in this space if they help clarify a topic or make important corrections.
  • The search function is often not very good on blogs, even though search is how many people navigate sites. And trying to have a search function that simultaneously searches a blog and a wider site can be very complicated.
  • Like most software, blog packages involve a factor of “lock-in” once you choose an existing package or service. It’s not entirely simple to export your material to a different piece of software. And many blog software packages have made this worse by encouraging posts written with non-standard (i.e., non-XHTML) shorthand characters that are used for formatting or style (as with Textile) and are converted to XHTML equivalents on the fly. This makes writing blog posts slightly faster. But if you export those posts, you will lose those important character translations.

Following this assessment of the advantages and disadvantages of popular blog software, I set about creating my own basic software that would easily fit into the web design you see here. Of course, I was throwing the baby out with the bath water by writing my own blog code. Couldn’t I just turn off the comments feature? Didn’t I want that easy XHTML compliance? Come on, are the designs so bad (they’re actually not, especially WordPress’s, but they are fairly similar across blogs)? Don’t I want to be able to phone in a post, or email one from a BlackBerry? (OK, the answer is no on both of those counts.)

But as I mentioned at the beginning of this series, I wanted to learn by doing and making. I didn’t know much about RSS. Which kind of RSS feed was best? How do you make an RSS feed, anyhow? I’ve thought a great deal about searching and data-mining, but what was the best way to search a blog? Were there ways to make a blog more searchable?

With these questions and concerns in mind, I started writing a simple PHP/MySQL application, and began to think about how I would make up for the lack of some of the advantages I’ve outlined above (hint: outsourcing). In the next post in this series, I’ll walk you through the basic setup and puzzle at the variety of RSS feeds.

Part 3: The Double Life of Blogs

Creating a Blog from Scratch, Part 1: What is a Blog, Anyway?

If you look at the bottom of this page, you won’t see any of the telltale signs that it is generated by a blog software package like Blogger, Movable Type, or WordPress. When I was redesigning this site and wanted to add a blog to it, I made the perhaps foolhardy decision to write my own blogging software. Why, you might ask, would I recreate the proverbial wheel? As I’ll explain in several other columns in this space, writing your own software is one of the best ways to learn—not only about how to write software, but also about genres, and it forces you to think about (and rethink) some of the assumptions that go into the construction of software written for specific genres. The first question I therefore asked myself was, What is a blog, anyway?

Seems like an easy question. But it’s really not. Blogs began literally as “Web logs,” as logs of links to websites that people thought were interesting and wanted to share with others. Hip readers of this blog will recognize, however, that this task has recently shifted to new “Web 2.0” services like del.icio.us, Furl, and Digg. Many blogs continue to include links to other websites, of course, perhaps with some commentary added, but the blog is quickly becoming the wrong place to merely list a bunch of links.

Following this initial purpose, the blog became a place for early adopters to write about events in their lives. In other words, the closest cognate to an offline genre was the diary. As I’ll argue in my next post in this series, this phase has made a permanent (and not entirely positive) mark on blogging software. Let me say for now that it led to what I would call “the tyranny of the calendar” (note all of those calendars on blogs).

Many bloggers (including this one), however, aren’t writing diary entries. We’re passing along information to readers, some of it topical and time-based and some of it not. For instance, I found a great post on Wally Grotophorst’s blog about getting rid of the need to type in passwords (which I do a lot). Is this topic essentially about Friday, October 28th, 2005 at 2:43 pm as WordPress emphasizes in Wally’s archive? No. It’s about “Faster SSH Logins,” as his effectively terse title suggests. This led me to my first idea for my own blogging software: emphasize, above all, the subject matter and the content of each post.

It also made me realize something even more basic. At heart a blog is very simple: it’s merely a way to dynamically create a site out of a series of “posts,” in the same way that a website consists of a series of web pages. Blogs now do all kinds of things: rant and rave, sell products, provide useful tips, record profound thoughts. With this in mind I started writing some PHP code and setting up a database that would serve this blog, but in a much simpler way than Blogger, Movable Type, and WordPress.
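
To underline just how simple that core is, here is a toy version of the idea: a single PHP page pulling posts out of MySQL and printing them, headline first. The names are invented and this is only a sketch of the approach, not this site’s actual code.

```php
<?php
// Toy version of the core idea: a blog page is just posts pulled out of a
// database. (Table and column names are invented; this is a sketch of the
// approach, not this site's actual code.)

mysql_connect('localhost', 'bloguser', 'secret');
mysql_select_db('blog');

// Emphasize content: each post leads with its headline, and the ordering
// could just as easily be editorial as chronological.
$result = mysql_query("SELECT title, body, posted FROM posts
                       ORDER BY posted DESC LIMIT 10");

while ($row = mysql_fetch_assoc($result)) {
    echo '<h2>' . htmlspecialchars($row['title']) . "</h2>\n";
    echo $row['body'] . "\n";
    echo '<p class="date">' . date('F j, Y', strtotime($row['posted'])) . "</p>\n";
}
?>
```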

In Part II of this series, I’ll discuss the advantages and disadvantages of existing blogging software, and how I tried to retain the positives, remove the negatives, and create a much slimmer but highly useful and flexible piece of software that anyone could write with just a little bit of programming experience.

Part 2: Advantages and Disadvantages of Popular Blog Software

Q and A on Firefox Scholar

Thanks so much to everyone who emailed me over the past week in response to hearing about Firefox Scholar. It’s great to get a sense that a wide range of people (from a number of countries) feel that the time has come for this kind of enhanced scholarly web browser, and it gives our team at the Center for History and New Media a great deal of confidence as we move forward. I’ve received a lot of questions about the project, so I thought I would answer some of the common ones here.

I thought the name of this software was going to be Smartfox. Where did Firefox Scholar come from?
Yes, it’s true that the original name of the project was Smartfox, but after filing our grant proposal to the Institute of Museum and Library Services we discovered that someone had “created” (if we can generously call it that) a copy of an early version of Firefox and renamed it Smartfox. They also acquired the smartfox.org domain name. As so often happens with overlapping domains and names, if you search on Google for “Smartfox” you now get a confusing mix of hits which makes it seem like we are already producing the software. To make things clearer for everyone, we’ve changed the name to Firefox Scholar (which we actually like better). We plan on posting information about the project at firefoxscholar.org (but not yet).

Are you going to use X, Y, or Z protocol/schema to acquire metadata about books, articles, and other objects on the web?
The short answer is yes. In other words, we plan on taking advantage of all existing standards and metadata schemas to pull metadata into one’s Firefox Scholar folder. OAI, OpenURL, COinS—you name it, we want Firefox Scholar to be compatible with it and take advantage of it. The idea of Firefox Scholar is that there will be tiny widgets for each of these that will enable the browser to natively recognize and store their formats.

I want to beta test the software as soon as it’s available. And by the way, when will it be available?
Great! Just let me know via email and I’ll put you on the list. We plan to make an early version of the software available in the late summer of 2006.

Will the software be available for Internet Explorer?
Sorry, it’s only a set of extensions for Firefox. We understand that a lot of people use Internet Explorer, but among other things (better security, free and openly available source code), Firefox offers terrific ways of extending and enhancing the browser. In turn, we would like to make our own software extensible and enhanceable, and encourage other developers to make additions to it.

I’m a developer and would like to get involved with the project. How can I do so?
Please contact me.

Introduction to Firefox Scholar

This week in the electronic version, and next week in the print version, the Chronicle of Higher Education is running an article (subscription required) on a new software project I’m co-directing, Firefox Scholar, which will be a set of extensions to the popular open source web browser that will help researchers, teachers, and students. My thanks to the many people interested in the project who have emailed. For them and for others who would like to know more, here’s a brief summary of Firefox Scholar from our grant proposal to the Institute of Museum and Library Services, which has generously provided $250,000 to initiate the project. Please contact me if you would like occasional updates on the project or would like a beta release of the browser when it is available in the late summer of 2006.

The web browser has become the primary means for accessing information, documents, and artifacts from libraries and museums around the country and the world, thanks in large part to the tremendous commitment these institutions have made to bringing their collections online (as either simple citations or complete text and images). Unfortunately for scholars, while tens of millions of dollars have been spent to create digital resources, far less funding and effort has been allocated for the development of tools to facilitate the use of these resources. The browser remains merely a passive window allowing one to view, but not easily collect, annotate, or manipulate these objects. Moreover, from the user’s perspective individual library and museum collections remain just that—separate websites with distinct designs and different ways of displaying their information, making traditional scholarly practices of bringing together and studying objects of interest from across these collections unnecessarily difficult.

Firefox Scholar, a set of tools incorporated into popular, open, and free web software, will address these major problems by creating a web browser that is “smarter” in two key ways. First, one tool will enable the browser to intelligently sense when its user is viewing a digital library or museum object; this will allow the browser to capture information from the page automatically, such as the creator, title, date of creation, and copyright information. Second, another tool will store and organize this information, as well as full copies of items and web pages (not just their citation information) if so desired by the user and permitted by the institution’s site, allowing the user to sort, annotate, search, and manipulate these individualized collections created for scholarly purposes. Critically, all of this will occur within the web browser itself, not in a separate, standalone application; the web browser will be used not just to discover information, but also to collect, organize, and analyze scholarly materials.