A New Direction (and either a clearer understanding or some serious rationalization)

I am pleased to say that starting August 19th I will be the new Systems Manager for the BC Libraries Cooperative. I am equal parts stoked and daunted by this opportunity. Stoked, because the Coop is doing some fantastic work in shared services using open source software in a sector, public libraries, that I’ve always felt a strong affinity for. Daunted because, well, I’m not a librarian, and despite some exposure to library tech and standards, it’s a fairly new field for me. Still, a lot of the role is familiar to me, and I look forward to a few months of intense immersion as I get going; I wouldn’t have taken the job (and presumably they wouldn’t have hired me) if I didn’t think I could do it.

It’s been seven months since I left BCcampus. Seven months of rest, of growth, of uncertainty, of trying to figure out what comes next. For a while I looked for something in ed tech (but Victoria’s not that big a town), and then for a while I contemplated consulting work. I still plan to do some of that, but I decided that for now I needed something more regular.

One of the things I struggled with in taking this new job was whether in doing so I am shutting the door on a 20-year career in educational technology, higher ed computing and, for the last decade, open education. The library world has its own technologies, its own history and language, its own set of challenges.

But as I’ve sat with it, I’ve come to realize that there has always been a thread in the work I’ve done and in the interests I’ve pursued which I think runs through this new job too, and which helps me see this as a progression rather than a digression. For the last decade, in addition to working on open content (something I know I’ll find in the library world too), I’ve come to see the importance of civically owned, openly governed platforms for computing. When Web 2.0 came along, its appeal from a usability and motivation perspective was obvious, and I was an early supporter of using these technologies for teaching and learning. But, slow learner that I am, it took me a couple of years longer to realize what a few of my colleagues had seen early on: that for all its advantages and appeal, the cloud had a dangerous flipside of centralized control and commercialized interest.

I believe that in both of these areas, education and libraries, there is still a window of opportunity to implement open systems that give us the best of both worlds: the flexibility and efficiencies of the cloud, but in a non-corporatized way that preserves so many of the values on which an open democratic society depends. And I look forward to the opportunity to work on this with a new set of colleagues in the library world, while still hopefully maintaining connections and building more bridges back to the world of teaching and learning online. At least for now, that’s my story and I’m sticking to it. – SWL

Another Half-baked Idea (in which Scott dangerously treads on librarian toes) – OPACs, OA and Wikipedia

Back in December I had another one of my half-baked ideas that I want to run by the larger community before doing much more on it. One day, while reading a Wikipedia article, I thought “This is a well-known topic (I can’t recall which now) – wouldn’t it be great if students could automatically be prompted that there were full, scholarly BOOKS in their library on this topic, in addition to this brief Wikipedia article?” (Don’t get me wrong, I LOVE Wikipedia, and to get the overview there is often nothing better, but in some instances it offers only a brief glimpse of a deep subject, as is an encyclopedia’s proper role.)

Now you all know of my fondness for client-side mashups and augmenting your web experience with OER; this passion was kindled by projects like Jon Udell’s LibraryLookup bookmarklet (which annotates Amazon book pages with links to your local library so you can see if the book is in) and COSL’s OER Recommender (later Folksemantic, a script that annotates pages with links to relevant Open Educational Resources). What I love about these and similar projects is that they augment your existing workflow and don’t aim at perfection, just at being “good enough.” In all cases, these types of automated annotation services require two things: 1) some “knowledge” about the “subject” they are trying to annotate (in the LibraryLookup case the ISBN in the URL; with Folksemantic I’ve never been clear!) and 2) a source to query (your local library OPAC / a database of tagged OER resources), hopefully in a well-structured way with an easily parseable response.

So what struck me while looking at the Wikipedia page is that (following the Principle of Good Enough) the URLs by and large follow a standard pattern (e.g. http://en.wikipedia.org/wiki/%page_name%), where %page_name% is very often a usable “keyword” for a search of some system (condition #1 above), and that library OPACs contain a metric shitload of curated metadata including both keyword and title fields (close to condition #2 above).

So the first iteration of the idea was: “Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links to your local library catalogue of books on that subject?”
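
To make the idea concrete, here is a minimal sketch of what such a userscript could look like. This is my illustration only, not shipped code, and the OPAC search URL is a made-up placeholder – every real catalogue has its own query format, which is exactly the localization problem I come to next:

    // ==UserScript==
    // @name     Wikipedia Library Lookup (sketch)
    // @include  http://en.wikipedia.org/wiki/*
    // ==/UserScript==

    // Condition #1: the Wikipedia URL pattern yields a usable keyword
    var pageName = decodeURIComponent(
        window.location.pathname.replace('/wiki/', '')).replace(/_/g, ' ');

    // Condition #2: a queryable source. This OPAC base URL is a
    // hypothetical placeholder, not a real endpoint.
    var opacSearch = 'http://opac.example.org/search?type=keyword&q=' +
        encodeURIComponent(pageName);

    // Annotate the article with a link to the catalogue search
    var link = document.createElement('a');
    link.href = opacSearch;
    link.textContent = 'Books on "' + pageName + '" in your library';
    var heading = document.getElementById('firstHeading');
    if (heading) heading.parentNode.insertBefore(link, heading.nextSibling);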

Now, one of the weaknesses of the LibraryLookup approach was the need for a localized version of the script for each OPAC it needed to talk to. This means it doesn’t spread virally as well as it might and is often limited to tech-savvy users. So the next obvious (well, at least to this non-librarian) iteration was

“Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links to query WorldCat instead?”

in the hopes of performing a single query that could then be localized by the user adding their location data in WorldCat. But… as a number of librarian friends I ran this by pointed out, WorldCat is pay-to-play for libraries, and in BC at least does not have wide coverage at all. Still, a step in the right direction, because further discussion brought me to the last iteration of…

… “Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links, but that instead of pointing to an OPAC/book references pointed to fully open resources – and instead of OER (which Folksemantic already does), used a service like OAIster, with its catalogue of 23 million open access articles and theses?”

Liking this idea more and more, I then realized that OAIster had since been incorporated into WorldCat (though I must admit I haven’t found it very intuitive to figure out how to query *just* OAIster/open access resources).

So this is where I got to, but I was fortunate to talk through the idea with two fantastic colleagues from the library world, Paul Joseph from UBC and Gordon Coleman from BC’s Electronic Library Network. And I am glad I did, because while they didn’t completely squash this idea, they did refer me to a large number of possible solutions and approaches in the library world to look at.

While it’s not “client side” (which for me is not just a nicety but an increasingly critical implementation detail), a small tweak to WorldCat’s keyword search widget embedded in MediaWiki/Wikipedia looks like it would do the trick.

Paul pointed me towards an existing toolbar, LibX, that is open source, customizable by institution, and extensible – by the looks of it, it could easily be extended to do this (and who knows, maybe already does).

Paul also reminded me of the HathiTrust as another potential queryable source, growing all the time.

And the discussion also clued me in to the existence of the OpenURL gateway service, which seems very much to solve the issue of localized versions of the LibraryLookup bookmarklet and the like.
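
For the unfamiliar (which included me), an OpenURL is just a query string describing the thing you want, which a resolver then turns into links appropriate to your own institution. A hand-rolled, purely illustrative book request in the standard KEV format looks something like this (the resolver host and ISBN are placeholders):

    http://resolver.example.org/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.isbn=0123456789

As I understand it, the gateway’s trick is that a single such link can be routed to whatever resolver the user’s own library runs, so the script itself never needs to be localized.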

So… is this worth pursuing? Quite possibly not – it seems like pretty well-covered ground by the libraries, as it should be, and it’s the type of idea that, if it hasn’t been done, I am MORE than happy for someone else to run with. I am looking for tractable problems like this to ship code on, but I’m just as happy when these ideas inspire others to make improvements to their existing projects. The important things to me are:

  • approaches which meet the users where they already are (in this case Wikipedia, or potentially MediaWiki)
  • approaches that don’t let existing mounds of expensive metadata go to waste (heck, might as well use it!)
  • approaches that place personalization aspects on the client side; increasingly we will be surfing a “personalized” web, but approaches that require you to store extensive information *on their servers* in order to get that effect are less desirable; the client is a perfect spot, under the end user’s control (look, I’m not naive) – see the sketch after this list
  • approaches that fit into existing workflow in the “good enough” or 80/20 approach
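
On the client-side point, here is a minimal sketch of what that personalization could look like in a Greasemonkey script. GM_getValue and GM_setValue are Greasemonkey’s real client-side storage API; the default URL is a placeholder:

    // The user's home catalogue lives in their browser, under their
    // control -- not on anyone's server.
    var opacBase = GM_getValue('opacBase', null);
    if (!opacBase) {
        opacBase = prompt('Base search URL for your library catalogue:',
                          'http://opac.example.org/search?q=');
        GM_setValue('opacBase', opacBase);
    }
    // ...then build lookup links from opacBase as in the earlier sketch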

I think this fits all of the above; if you have other criteria I’d love to hear them (certainly these aren’t MY only ones either). If you do know where this idea has been implemented, please let me know. And if my unschooled approach to the wonderful world of online library services ticks any librarians off, my sincerest apologies – I’ve always said that “librarians are the goaltenders of our institutions” (I was a defenceman, so this is a big compliment) and my only goal is to bridge what feels like a massive divide between educational technologists, librarians and, most importantly, learners. – SWL

Delicious Subject Guides: Maintaining Subject Guides Using a Social Bookmarking Site

http://journal.lib.uoguelph.ca/index.php/perj/article/view/328/1375

Too bad the Canadian Journal of Library and Information Practice and Research articles don’t allow comments, or I would have added “Great idea, and if you combine it with the Google Custom Search Engine API like we did on the Free Learning site, you can also turn these ‘subject guides’ into constrained search engines.” But alas, journals are just so one-way… – SWL

Library Mashup Competition Winners

http://www.talis.com/tdn/forum/84

I am currently participating in a cool exercise in prognostication on emerging technologies and learning, and one of my votes/pleas for a disruptive technology in the academy is “mashups” (which I realize aren’t properly a specific “technology” so much as a technique, but whatever).

So it was with great pleasure that I stumbled on Jenny Levine’s post on the Talis Library Mashup competition. The full list of entries is here, and while it feels a bit tame, it is definitely a start. The library definitely seems like one of the potential on-campus sources to be mashed up. What are the others? Well, to serve as the basis for a mashup, on my read at least, you need to be providing two things: some data and a way to get at it (an API, web services/XML feeds, screenscraping, or some other mechanism for access – the more public the better). And there’s the rub, it seems. While more and more Web 2.0 companies (holy cow – 291 on this list) are offering APIs that are being mashed up (arguably often with a still-unknown value proposition), is your IS department publishing the API for your SIS on your campus website? Your CMS? Why would they do this? Well, that’s the other side of the mashup phenomenon – oftentimes the companies making their data available don’t yet know all the ways it could be used, but they appear to be correct in the belief that if you publish it, it will get used, often in unexpected or improved ways.
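
To put a floor under the buzzword: at its most basic, a mashup is just fetch, combine, re-present. A sketch in Greasemonkey terms, assuming a purely hypothetical campus feed URL (GM_xmlhttpRequest is Greasemonkey’s real cross-site request API):

    // Fetch a (hypothetical) RSS feed of new library acquisitions and
    // pull out the titles -- "some data and a way to get at it."
    GM_xmlhttpRequest({
        method: 'GET',
        url: 'http://library.example.edu/feeds/new-books.rss',
        onload: function (response) {
            var doc = new DOMParser()
                .parseFromString(response.responseText, 'text/xml');
            var items = doc.getElementsByTagName('item');
            for (var i = 0; i < items.length; i++) {
                var title =
                    items[i].getElementsByTagName('title')[0].textContent;
                console.log(title); // re-present however you like
            }
        }
    });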

It’s likely the sources on campus that will serve mashups anytime soon aren’t the “enterprise” systems but departmental or discipline-based ones (various GIS-based systems seem ripe to combine with the Google and Yahoo Maps of the world; text collections with things like Yahoo’s term extraction service, etc.). And I don’t want to trivialize the challenges to security and privacy in accessing some of the enterprise data. But right now it feels like a brick wall – ask and you’ll get a strong ‘No’; not a considered one, but the idea just rejected out of hand. But you know the trick: keep asking, and eventually you’ll wear them down (or they’ll retire 😉) – SWL

On Using DSpace as a LOR

www.edtechpost.ca/gems/coppul-lor3.ppt

Phew! Back home now after a hectic (for me) week of travelling and talking, which included a talk I gave to the Council of Prairie and Pacific University Libraries (COPPUL) Distance Education Forum on the feasibility of using DSpace as a general learning object repository.

I have been pretty hard on this idea in the past, so I was glad to be given the opportunity to revisit it in more depth. And while it might not seem so from the slides, I actually found myself softening to the idea, in part because of some innovations from MIT and others to accommodate learning materials. But my main message, which was perhaps buried a bit at the end of the talk, was that it is one thing to evaluate DSpace against an abstract set of functionality that a LOR should have (which is kind of what I did here), and quite another to say that it will solve the problems of finding, sharing, remixing and reusing learning content – a question some would say has already been asked and answered a few times. – SWL

RepoMMan Project

http://www.hull.ac.uk/esig/repomman/index.html

To keep going on the apparent ‘open source repository’ theme today, this JISC-funded project appears to be using Fedora and Sakai to investigate automated population of metadata based on contextual information provided by the portal environment, to examine the boundaries of personal versus institutional digital resource management, and to develop some workflow around common repository tasks based on Service Oriented Architecture. Phew. Fedora is a different approach than DSpace, though both originated in the library/institutional repository world, and in my earlier investigations it too seemed to have some limitations to its effectiveness as a LOR. Early days yet for this project, but maybe some promise in moving it closer to serving those (and other) needs better. And you just gotta love the name. – SWL

Presentation on Archiving Course Websites to DSpace, Using a Content Packaging Profile & Web Services

http://cwspace.mit.edu/docs/ProjectMgt/Reports/DLF-Spring2006/MIT-CWSpace-DLF-Spring2006.ppt.htm

For a long time I’ve been asked about available open source learning object repositories, and specifically about whether DSpace could work as a LOR. My answer regarding DSpace, up to now, has always been: well, it depends on what your use cases are. If you didn’t care about things like IMS Content Packages and learning object metadata, then sure, maybe it could work, but it always seemed like a stretch – that those asking the question were looking to adopt a system because of its license, not because of its functionality.

In this regard, I had always held out some hope for the CWSpace project. As I have mentioned before, CWSpace is a project looking to archive the educational materials found in MIT OpenCourseWare using DSpace technology, and in so doing provide a valuable extension in functionality to DSpace itself.

With the presentation above it looks like they are making some progress – it details how they plan to deal with two major issues: mapping OCW’s object model to DSpace’s object model, and improving the interfaces to DSpace to make them more conducive to working with living (not archived) materials. NOTE: this presentation is really only useful for standards geeks and other interoperability weenies (like myself, I guess). Not for the faint of heart.

It’s unclear to me whether they are shipping code for this yet, but it is still encouraging to see some progress, and for me really encouraging to see the library/institutional repository crowd take seriously the differences between their standard use cases and the ones from the LOR world; a big step forward from the red flag that’s been waving on the DSpace site for years claiming it can accommodate ‘learning objects’ (whatever that meant). – SWL

Ariadne Decennial Issue Out

http://www.ariadne.ac.uk/issue46/

Plenty of good reads in the latest Ariadne magazine, the 10th anniversary edition, including pieces by Lorcan Dempsey and Clifford Lynch. Although Ariadne aims to be a “web magazine for information professionals in archives, libraries and museums,” I always find at least a couple of articles directly relating to (if not directly referring to) issues in elearning. – SWL

BC ELN Using Blogs to Brainstorm their Strategic Planning process

http://bceln.blogspot.com/2006/01/about-electronic-brainstorm-wild-ideas_31.html

Kudos to the BC Electronic Library Network for trying the interesting experiment of using Blogger as a mechanism to facilitate collective brainstorming by their members. As I understand the model, staff from the participating partner libraries are invited either to comment on posts, or to log into Blogger using accounts that ELN has set up for them and make new posts on the main brainstorming blog, all of which will feed into the larger Strategic Planning process. Nice model for a consortium to use, as it keeps things open and public but hopefully still provides some autonomy and flow. It will be interesting to see how it works, and at the very least it may be a step in exposing some additional librarians to the technology (not that most of them need this – we are lucky to have an amazingly sophisticated bunch in our province). – SWL

LibraryLookup Greasemonkey Script for Victoria Public Library

http://www.edtechpost.ca/gems/GVPL_LibraryLookup.user.js

O.k., o.k. already! I am showing my age/lameness. In my exuberance over learning that my local library’s catalogue was now searchable via Jon Udell’s famous LibraryLookup bookmarklet (and trust me, I can get pretty exuberant), I forgot how terribly passé and 2003 that was. Apparently time has moved on since then; last year Udell released a small Greasemonkey script that searches your library’s catalogue in the background and adds a link right on the Amazon page if the book is available.
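
For anyone curious what’s under the hood, the core trick is tiny: the ISBN is sitting right there in the Amazon URL. A stripped-down sketch of the idea (the OPAC URL is a placeholder – grab the real, localized script at the link above):

    // Amazon product URLs embed the ISBN (the /dp/ or /ASIN/ path
    // segment), which doubles as a lookup key for the catalogue.
    var m = window.location.pathname.match(/\/(?:dp|ASIN)\/(\d{9}[\dX])/);
    if (m) {
        var isbn = m[1];
        // Placeholder OPAC URL -- each catalogue has its own query
        // format, which is why the script is localized per library.
        var lookup = 'http://opac.example.org/search?type=isbn&q=' + isbn;
        var link = document.createElement('a');
        link.href = lookup;
        link.textContent = 'Check your library for ISBN ' + isbn;
        document.body.insertBefore(link, document.body.firstChild);
    }
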

If you happen to live in Victoria, you can grab my ever so-slightly modified version of that script at the URL above and install it in your Greasemonkey-enabled Firefox browser. And turns out this post isn’t so out of date after all – if you are really keen, Udell released the extra souped-up version of the script (which requires you to get your own Amazon-API) on January 26th. Soo coool! All praise Udell. – SWL