Another Half-baked Idea (in which Scott dangerously treads on librarian toes)- OPACs, OA and Wikipedia

Back in December I had another one of my half-baked ideas that I want to run by the larger community before doing much more on it. One day, while reading a wikipedia article, I thought “This is a well known topic (I can’t recall which now) – wouldn’t it be great if students could automatically be prompted that there were full, scholarly BOOKS in their library on this topic in addition to this brief wikipedia article?” (Don’t get me wrong, I LOVE wikipedia, and to get the overview there is often nothing better, but in some instances it offers only a  brief glimpse of a deep subject, as is an encyclopedia’s proper role.)

Now you all know of my fondness for client-side mashups and augmenting your web experience with OER; this passion was kindled by projects like Jon Udell’s LibraryLookup bookmarklet (annotate Amazon book pages with links to your local library to see if the book is in) and COSL’s OER Recommender (later Folksemantic, a script that annotates pages with links to relevant Open Educational Resources.) What I love about these and similar projects is that they augment your existing workflow and don’t aim at perfection, just to be “good enough.” In all cases, these types of automated annotation services require two things: 1) some “knowledge” about the “subject” they are trying to annotate (in the LibraryLookup case, the ISBN in the URL; with Folksemantic, I’ve never been clear!) and 2) a source to query (your local library OPAC / a database of tagged OER resources), hopefully in a well-structured way with an easily parseable response.

So what struck me while looking at the wikipedia page is that (following the Principle of Good Enough) the URLs by and large follow a standard pattern (e.g. http://en.wikipedia.org/wiki/%page_name%), where %page_name% is very often a usable “keyword” for a search of some system (condition #1 above), and that library OPACs contain a metric shitload of curated metadata, including both keyword and title fields (close to condition #2 above.)
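If that URL pattern holds, a tiny bit of client-side code is all it takes to recover the keyword. Here’s a rough sketch in JavaScript (the function name and regex are mine; the pattern assumption is exactly the “good enough” bet described above):

```javascript
// Derive a search keyword from a Wikipedia article URL.
// Assumes the standard /wiki/Page_Name pattern discussed above.
function wikiKeyword(url) {
  var match = url.match(/\/wiki\/([^#?]+)/);
  if (!match) return null;
  // Page names use underscores between words and may be percent-encoded.
  return decodeURIComponent(match[1]).replace(/_/g, " ");
}

// wikiKeyword("https://en.wikipedia.org/wiki/Open_educational_resources")
//   returns "Open educational resources"
```

A bookmarklet or Greasemonkey-style script would run something like this against location.href and then build its catalogue query from the result.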

So the first iteration of the idea was “Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links to your local library catalog of books on that subject.”

Now one of the weaknesses of the LibraryLookup approach was the need for a localized version of the script for each OPAC it needed to talk to. This means it doesn’t spread virally as well as it might and is often limited to tech-savvy users. So the next obvious (well, at least to this non-librarian) iteration was

“Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links to query WorldCat instead”

in the hopes of performing a single query that can then be localized by the user adding their location data in WorldCat. But… as a number of librarian friends who I ran this by pointed out, WorldCat is pay-to-play for libraries, and in BC at least does not have wide coverage at all. Still, a step in the right direction, because further discussion brought me to the last iteration of…

… “Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links, but that instead of annotating with OPAC/book references used fully open resources – and instead of OER (which Folksemantic already does), queried a service like OAIster with its catalogue of 23 million Open Access articles and theses.”

Liking this idea more and more, I then realized that OAIster had since been incorporated into WorldCat (though I must admit, I did not find it very intuitive to figure out how to query *just* OAIster/open access resources).
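For what it’s worth, the link such a script would inject is just a keyword search URL. A minimal sketch, assuming WorldCat’s public /search?q= pattern (verify that against the live site before relying on it):

```javascript
// Build a WorldCat keyword-search link to inject next to a wikipedia article.
// The /search?q= URL pattern is an assumption based on WorldCat's public site.
function worldcatSearchUrl(keyword) {
  return "https://www.worldcat.org/search?q=" + encodeURIComponent(keyword);
}

// worldcatSearchUrl("open access")
//   returns "https://www.worldcat.org/search?q=open%20access"
```

The localization then happens on WorldCat’s side (the user sets their location there), which is what made this iteration appealing in the first place.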

So this is where I got to, but I was fortunate to talk through the idea with two fantastic colleagues from the library world, Paul Joseph from UBC and Gordon Coleman from BC’s Electronic Library Network. And I am glad I did, because while they didn’t completely squash this idea, they did refer me to a large number of possible solutions and approaches in the library world to look at.

While it’s not “client side” (which for me is not just a nicety but an increasingly critical implementation detail), a small tweak to WorldCat’s keyword search widget embedded in mediawiki/wikipedia looks like it would do the trick.

Paul pointed me towards an existing toolbar, LibX, which is open source, customizable by institution, and extensible; by the looks of it, it could (and who knows, maybe already does) easily be extended to do this.

Paul also reminded me of the HathiTrust as another potential queryable source, growing all the time.

And the discussion also clued me in to the existence of the OpenURL gateway service, which seems very much to solve the issue of localized versions of the LibraryLookup bookmarklet and the like.
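To give a flavour of why OpenURL helps here: instead of hard-coding each library’s OPAC, a script can emit one standard query and let a gateway redirect it to the user’s local resolver. A hedged sketch (the key names follow the Z39.88-2004 KEV format; the gateway base URL is a placeholder I have assumed, not a confirmed endpoint):

```javascript
// Build an OpenURL (Z39.88-2004 KEV format) for a book by ISBN.
// A gateway service can redirect this single URL to the user's local
// resolver, removing the need for per-OPAC versions of the bookmarklet.
// The default base URL below is a placeholder assumption.
function openUrlForIsbn(isbn, base) {
  base = base || "http://worldcat.org/registry/gateway";
  var params = new URLSearchParams({
    url_ver: "Z39.88-2004",
    rft_val_fmt: "info:ofi/fmt:kev:mtx:book",
    "rft.isbn": isbn
  });
  return base + "?" + params.toString();
}

// openUrlForIsbn("0596000278") yields a URL containing
//   url_ver=Z39.88-2004 and rft.isbn=0596000278
```

The point is that the *same* URL works for every user; personalization stays with the resolver (or the client), not baked into a hundred local copies of the script.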

So… is this worth pursuing? Quite possibly not – it seems like pretty well covered ground by the libraries, as it should be, and it’s the type of idea that if it hasn’t been done, I am MORE than happy for someone else to run with it. I am looking for tractable problems like this to ship code on, but I’m just as happy when these ideas inspire others to make improvements to their existing projects. The important things to me are:

  • approaches which meet the users where they already are (in this case Wikipedia or potentially mediawiki)
  • approaches that don’t let existing mounds of expensive metadata go to waste (heck, might as well use it!)
  • approaches that place personalization on the client side; increasingly we will be surfing a “personalized” web, but approaches that require you to store extensive information *on their servers* in order to get that effect are less desirable; the client is a perfect spot, under the end user’s control (look, I’m not naive)
  • approaches that fit into existing workflow in the “good enough” or 80/20 approach

I think this fits all of the above; if you have other criteria I’d love to hear them (certainly these aren’t MY only ones either.) If you do know where this idea has been implemented, please let me know. And if my unschooled approach to the wonderful world of online library services ticks any librarians off, my sincerest apologies – I’ve always said that “librarians are the goaltenders of our institutions” (I was a defenceman; this is a big compliment) and my only goal is to bridge what feels like a massive divide between educational technologists, librarians and, most importantly, learners. – SWL

9 thoughts on “Another Half-baked Idea (in which Scott dangerously treads on librarian toes)- OPACs, OA and Wikipedia”

  1. I’m a librarian and I’m not ticked off. I vote for WorldCat. WorldCat is happy to tell users if any given book is “in a library near you” – and many users are savvy enough to use interlibrary loan if a book is not in a nearby library.

  2. Scott, we are thrilled you found WorldCat and the available widgets–and that you then also got the word that WorldCat has indexed OAIster, HathiTrust and most recently the JSTOR Archive–all of which provide great resources for people in need of immediate information.

    The other thing you may want to consider is the WorldCat Basic API. You may find some additional flexibility with the API that you don’t with the pre-defined widget. Flip side is, the widget is ready to go, no coding required. Good news is, either way you choose the OCLC Developer Network is available to help however you need it.

  3. Alice, it’s great that someone from OCLC saw this and replied, I appreciate that. A few specific questions:

    – can you point me to how to search WorldCat for JUST OAIster, or JUST HathiTrust? Is there an advanced search that does this? Is there a way through the API to single out one of these sources? I couldn’t suss it out on my brief foray.

    – Have you seen anyone already doing what I describe, specifically against Wikipedia, but heck, also against any other source from which a “subject keyword” is easily derivable, either via deconstructing the URL or via embedded metadata? I know there is lots of activity along these lines, and I don’t claim this idea is particularly original, so I won’t be surprised if this has been done already.

    Thanks again for replying to this shout out into the network, cheers, Scott

  4. Scott, I am checking with colleagues on effective limiters, because I confess to being Webby but not a coder. I know there is an OAIster-specific view planned, but it is not available *quite* yet.

    OCLC staff have all wondered through the years how we could better embed topic-specific, library-vetted results onto information spaces like wikipedia in a meaningful way for the user, without compromising the community norms of wikipedia vis-a-vis self-promotion, etc. In other words, your itch is one we’d love to help you scratch.

    In terms of an embedded subject keyword in a URL, I don’t know if this is exactly what you’re looking for, but check out the “Links” page (specifically the bottom section, “Embed a keyword search for topical search results pages”).

    In the meantime, I’ll work on finding more answers for you about OAIster and HathiTrust. Our new Developer Network person, Karen–who IS a coder–may also contact you with possibilities in terms of JSON output, etc.

  5. What an amazing coincidence that I had a conversation along these very same lines not more than 2 days ago with a couple of librarians at the Eccles Health Sciences Library here at the University of Utah. I think you’ve hit on something here. How do we keep the ideas and conversation going?

  6. @corey – I think there are two aspects; one is exploring further (and if warranted, developing) this sort of client-side tool so that anyone can use it on their own w/ wikipedia. But another is looking more at the existing WorldCat widgets etc. in the context of your own library’s collections – while these are obviously more limited in overall reach, since you control these (or the librarians do), they are simple additions that can make these sites more valuable to your users.

    I had been hoping to write this up for a small Talis incubator grant, but the reality is I have run out of time (the deadline is the 31st) and it is still not 100% clear to me that there aren’t existing tools that do this (or, most definitely, people better suited than I to do it). It’s another one of my “neat ideas: stick it on the pile, and when time and interest permit I’ll try to hack something kludgy together to prototype it, and if someone else takes it and runs with it before then, so much the better.” I have enough of those that it is now a quotable phenomenon!

Comments are closed.