Trying out Zeega for an #OpenEd12 Recap

A few days ago a new storytelling/mashup/presentation tool named Zeega came across my RSS reader. It is still in private alpha (not even beta!), but I was intrigued, so I submitted a request for an account – and to my pleasant surprise had one the next day.

Zeega is slightly more complicated than your standard presentation tool – not a lot more, but it is built around the idea of different “sequences” that can branch, so a quick look at some short video tutorials was very helpful in getting going with the software. I can see how one could use it as a straightforward presentation tool quite easily, but they also include a series of examples of projects other people have made to show how this can be much more than a linear presentation tool (I quite liked the one on Geodesic Domes and drew some inspiration from it for my own.)

Another thing that makes Zeega stand out is its media harvesting mechanism. You can link Zeega up with your Dropbox account, but more interesting is the bookmarklet, which lets you add media from Flickr, YouTube and SoundCloud (or indeed any regular media asset) to your “library”; once a piece of media is inserted into a resulting animation, it includes a reference back to the original. (A nice-to-have feature I could see in the future would be the ability to choose only CC-licensed materials, and to let users specify how attributions should be made, but for now the current way works great.) Once you’ve gathered materials into your library, it’s a simple thing to drag and drop them onto any frame of your show, where they can then act as links, background soundtracks, etc. Zeega also has maps integrated into it, a feature I didn’t explore in the first story I created but which I can see adding a useful element at times.

Zeega is definitely still in alpha, but it is another great example of how far web-based applications have come. It wasn’t that long ago that this same sort of functionality could only be found in a thick desktop client, and one that was no doubt web-unaware. But even in its early stages, Zeega is another example of a new breed of mashup storytelling tool that I believe any instructor (or any student, for that matter) with a bit of gumption could use to create much more engaging materials. It gets both the authoring and workflow pieces mostly right. Check out the example I created as my test drive, my own recap of #opened12 using sounds and images from all over the web.

Another Half-baked Idea (in which Scott dangerously treads on librarian toes) – OPACs, OA and Wikipedia

Back in December I had another one of my half-baked ideas that I want to run by the larger community before doing much more on it. One day, while reading a Wikipedia article on some well-known topic (I can’t recall which now), I thought “Wouldn’t it be great if students could automatically be prompted that there were full, scholarly BOOKS in their library on this topic, in addition to this brief Wikipedia article?” (Don’t get me wrong, I LOVE Wikipedia, and for an overview there is often nothing better, but in some instances it offers only a brief glimpse of a deep subject, as is an encyclopedia’s proper role.)

Now you all know of my fondness for client-side mashups and augmenting your web experience with OER; this passion was kindled by projects like Jon Udell’s LibraryLookup bookmarklet (which annotates Amazon book pages with links to your local library so you can see if the book is in) and COSL’s OER Recommender (later Folksemantic, a script that annotates pages with links to relevant Open Educational Resources.) What I love about these and similar projects is that they augment your existing workflow and don’t aim at perfection, just at being “good enough.” In all cases, these types of automated annotation services require two things: 1) some “knowledge” about the “subject” they are trying to annotate (in the LibraryLookup case, the ISBN in the URL; with Folksemantic, I’ve never been clear!); and 2) a source to query (your local library OPAC / a database of tagged OER resources), hopefully in a well-structured way with an easily parseable response.
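To make those two conditions concrete, here is a minimal sketch of a LibraryLookup-style bookmarklet. This is my illustration, not Jon Udell’s actual code, and the OPAC base URL is a placeholder you would swap for your own library’s:

    javascript:(function () {
      /* condition #1: some "knowledge" in the current URL --
         here, a 10-digit ISBN in an Amazon-style path like /dp/0596000278 */
      var m = location.href.match(/\/(\d{9}[\dX])\b/);
      if (!m) { alert('No ISBN found in this URL'); return; }
      /* condition #2: a queryable source -- a placeholder OPAC search URL;
         each catalogue needs its own pattern */
      location.href = 'http://opac.example.edu/search?index=isbn&term=' + m[1];
    })();

Collapsed onto one line and saved as a bookmark’s URL, one click on an Amazon book page jumps you to your library’s holdings for that ISBN.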

So what struck me while looking at the Wikipedia page is that (following the Principle of Good Enough) the URLs by and large follow a standard pattern (e.g. http://en.wikipedia.org/wiki/%page_name%), where %page_name% is very often a usable “keyword” for a search of some system (condition #1 above), and that library OPACs contain a metric shitload of curated metadata, including both keyword and title fields (close to condition #2 above.)

So the first iteration of the idea was: “Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links to the books on that subject in your local library catalogue?”
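Concretely, such a script needn’t be complicated. A sketch of what I have in mind (illustrative only – “opac.example.edu” is a made-up stand-in for a real catalogue’s keyword search URL):

    // ==UserScript==
    // @name      Wikipedia Library Lookup (sketch)
    // @include   http://en.wikipedia.org/wiki/*
    // ==/UserScript==
    (function () {
      // condition #1: the page name in the URL is a usable keyword
      var keyword = decodeURIComponent(location.pathname.replace('/wiki/', ''))
                      .replace(/_/g, ' ');
      // condition #2: a queryable source; note every OPAC has its own
      // search URL pattern, which is exactly the weakness discussed next
      var url = 'http://opac.example.edu/search?index=kw&term=' +
                encodeURIComponent(keyword);
      var p = document.createElement('p');
      var a = document.createElement('a');
      a.href = url;
      a.appendChild(document.createTextNode(
        'Books on "' + keyword + '" in your library'));
      p.appendChild(a);
      // annotate the article by inserting the link above the body text
      var body = document.getElementById('bodyContent') || document.body;
      body.insertBefore(p, body.firstChild);
    })();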

Now, one of the weaknesses of the LibraryLookup approach was the need for a localized version of the script for each OPAC it needed to talk to. That means it doesn’t spread virally as well as it might, and is often limited to tech-savvy users. So the next obvious (well, at least to this non-librarian) iteration was:

“Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links to query WorldCat instead?”

in the hopes of performing a single query that users could then localize by adding their location data in WorldCat. But… as a number of librarian friends whom I ran this by pointed out, WorldCat is pay-to-play for libraries, and in BC at least does not have wide coverage at all. Still, it was a step in the right direction, because further discussion brought me to the last iteration of…

… “Wouldn’t it be great if I could write a combined LibraryLookup/Folksemantic script that annotated Wikipedia pages with subject-appropriate links, but instead of pointing at an OPAC/book references it pointed at fully open resources – and instead of OER (which Folksemantic already does), used a service like OAIster, with its catalogue of 23 million Open Access articles and theses?”

Liking this idea more and more, I then realized that OAIster had since been incorporated into WorldCat (though I must admit I did not find it very intuitive to figure out how to query *just* OAIster/open access resources).

So this is where I got to, but I was fortunate to talk through the idea with two fantastic colleagues from the library world, Paul Joseph from UBC and Gordon Coleman from BC’s Electronic Library Network. And I am glad I did, because while they didn’t completely squash this idea, they did refer me to a large number of possible solutions and approaches in the library world to look at.

While it’s not “client-side” (which for me is not just a nicety but an increasingly critical implementation detail), a small tweak to WorldCat’s keyword search widget, embedded in MediaWiki/Wikipedia, looks like it would do the trick.
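To give a rough sense of the kind of tweak I mean (a mock-up of my own, not OCLC’s actual widget markup, though the widget boils down to a small form that GETs worldcat.org/search):

    <!-- a search box pre-filled with the current wiki page's title -->
    <form action="http://www.worldcat.org/search" method="get">
      <input type="text" name="q" id="worldcat-q" />
      <input type="submit" value="Search WorldCat" />
    </form>
    <script type="text/javascript">
      // strip the site suffix from the page title (suffix varies by wiki)
      document.getElementById('worldcat-q').value =
        document.title.replace(/ - Wikipedia.*$/, '');
    </script>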

Paul pointed me towards an existing toolbar, LibX, that is open source, customizable by institution, and extensible, and that by the looks of it could easily be extended to do this (and who knows, maybe it already does.)

Paul also reminded me of the HathiTrust as another potential queryable source, one that is growing all the time.

And the discussion also clued me in to the existence of the OpenURL gateway service, which seems very much to solve the problem of localized versions of the LibraryLookup bookmarklet and the like.
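For anyone else who hadn’t met OpenURL before: the citation details travel as key/value pairs in the URL itself, and a gateway or link resolver redirects each user to their own institution’s copy, so a single link can serve everyone. A hypothetical example (the resolver hostname, title and ISBN are invented; the key names are the standard OpenURL 1.0 “KEV” book format), shown on several lines for readability but really one URL:

    http://resolver.example.org/openurl
      ?url_ver=Z39.88-2004
      &rft_val_fmt=info:ofi/fmt:kev:mtx:book
      &rft.btitle=Some+Book+Title
      &rft.isbn=0123456789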

So… is this worth pursuing? Quite possibly not – it seems like pretty well covered ground by the libraries, as it should be, and it’s the type of idea that if it hasn’t been done, I am MORE than happy for someone else to run with it. I am looking for tractable problems like this to ship code on, but I’m just as happy when these ideas inspire others to make improvements to their existing projects. The important things to me are:

  • approaches which meet the users where they already are (in this case Wikipedia, or potentially any MediaWiki instance)
  • approaches that don’t let existing mounds of expensive metadata go to waste (heck, might as well use it!)
  • approaches that place personalization on the client side; increasingly we will be surfing a “personalized” web, but approaches that require you to store extensive information *on their servers* in order to get that effect are less desirable; the client is a perfect spot, under the end user’s control (look, I’m not naive)
  • approaches that fit into existing workflow in the “good enough” or 80/20 approach

I think this fits all of the above; if you have other criteria I’d love to hear them (certainly these aren’t MY only ones either.) If you do know where this idea has been implemented, please let me know. And if my unschooled approach to the wonderful world of online library services ticks any librarians off, my sincerest apologies – I’ve always said that “librarians are the goaltenders of our institutions” (I was a defenceman; this is a big compliment) and my only goal is to bridge what feels like a massive divide between educational technologists, librarians and, most importantly, learners. – SWL

PLE Workshop/Mashing up your PLE session

http://edtechpost.wikispaces.com/PLE+workshop

Yesterday it was my IMMENSE privilege to co-facilitate a pre-conference workshop with Jared Stein and Chris Lott on “Weaving your own PLE.” I think for all three of us it was an experiment, developed at a distance through Google Docs, Wikispaces and a couple of Skype calls. Ultimately it is up to the participants to judge whether it was a success, and the proof will be in how many of them continue on with what they started that day, but it felt like it went pretty well.

My contribution was a 2-hour session on “Mashing up your PLE.” We had decided to split the group into 2 streams, and the (suggested self-)selection criterion was prior experience reading and writing blogs (and, sort of as an obvious corollary, awareness of RSS.)

(As an aside – we are WELL aware of the issues that surround this approach. We made every effort to emphasize: personal choice; that PLEs involve people and resources not on the network; that the PEOPLE are critical; that they need to grow their OWN networks, not adopt someone else’s; etc. But our goal was to get people who were not yet swimming in the flow – but who will increasingly be met by students and colleagues who ARE – to start, somewhere, anywhere. To take the plunge in the context of a pre-conference f2f workshop, with as many supports as we could muster to sustain it long term.)

I picked 4 “mashup” skills or techniques that I think can help people who are already partly immersed in networked learning become more effective networked learners:

  • syndicating and republishing feeds with feed2js
  • scraping web pages with the Google Spreadsheets “=importHTML()” function
  • building constrained search engines, including Google Coop On-the-Fly
  • augmenting pages you already visit with Greasemonkey scripts like OER Recommender and the WorldCat/Amazon script

It was a lot to get through in under 2 hours. I know I blew through a lot of stuff, and I often speak too quickly when I present, partly out of nerves, partly for the same reason that I am an exuberant gesticulator – this stuff gets me excited! But I did see lots of eyes lighting up: feed2js always blows people away, and you can see the wheels turning as people figure out how they can use it; the Google Spreadsheets “=importHTML()” trick works like magic, and while people don’t immediately grok how this is SO much more powerful than importing a page into Excel, when you show them the “More options” publishing settings you can suddenly see the penny drop; I think I sold a few people on “constrained search engines,” but it’s Google Coop On-the-Fly that really gets the jaws dropping; and finally, both OER Recommender and the WorldCat/Amazon Greasemonkey script provide pretty vivid examples of how you can bring educational resources directly INTO your everyday web experience with NO EXTRA EFFORT!
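For anyone who missed the session, the spreadsheet trick really is a single formula. This is the actual Google Spreadsheets function; the URL and the table index here are just illustrative:

    =importHTML("http://example.com/standings.html", "table", 1)

That pulls the first HTML table on the page into the sheet as live data, and the “More options” publishing settings then let you republish it in other formats (RSS, CSV, etc.) – which is where the penny tends to drop.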

My only regret is that in my current position (and in my current practice) I typically only get to do these kinds of sessions once before I move on. Which is a shame, because in this particular case I have a ton of ideas for how to improve it. For instance, taking a leaf out of Alan’s (and many others’) book, I realized that if I had connected these 4 pieces in more of a story, it would have been more compelling. And in terms of making it educationally more effective, I think that forming the room into small groups, showing them a number of different techniques in each of these areas, and then setting them a problem to solve together (e.g. “figure out how to scrape this site; feel free to use Google Spreadsheets, Yahoo Pipes, Dapper, or any other method you think will work”) would make this way more memorable and effective. But it would ultimately require more time.

Anyways, this was a ton of fun to work on, if only to once again get a chance to work through some ideas and practice of my own. That is ultimately what keeps driving me to do new presentations each time: they are one of the only “teaching” opportunities I have right now, and they let me work out stuff that I’d otherwise not get a chance to dig into. – SWL

My OpenEd Demonstrator – Augmenting OER with Client-Side Tools

http://www.edtechpost.ca/gems/opened.htm

Back in June I submitted a paper proposal to OpenEd 2007. In August, the day before I was to go camping, I heard back that while my proposal hadn’t been accepted, I was invited to participate in a ‘Demonstrator’ session (basically a Poster session set up at the end of Day 2).

I have to admit that I was a bit crushed at first. But I very quickly turned this around; not only did I realize that this was a good decision by the organizers in terms of my proposal’s content and the general tenor of the accepted presentations, I also realized that doing a ‘demonstrator’ the right way would give me an opportunity to reach a wider audience than a straight presentation would.

So the result is this 10-minute Flash movie demonstrating a few of the ways learners can augment their experience of OERs (in fact, of the web in general) using (mostly) client-side tools that they control. This idea of client-side tools (by which I mean extensions, bookmarklets and Greasemonkey scripts) really appeals to me because it starts to shift the locus of control back to the learner and away from centrally provisioned server tools. The point in doing this? Well, in addition to simply raising awareness of these techniques, the point in presenting this specifically at OpenEd is as a small challenge to what I see as a past tendency towards monolithic (and not mashup-friendly) content in some of the formal OER projects, and to counter what seems to me like the chauvinism of assuming people are going to consume your OER courses on your site, in the way you dictate. In my mind, OERs will really start to succeed when they can augment our experience of the learning space that is the entire internet, instead of sitting off to the side and requiring learners to self-identify that they want an OER. As I say in my final slide, “People need their OER even when they are not on an OER site!”
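For anyone who hasn’t looked under the hood, a Greasemonkey script is just a small chunk of JavaScript with a metadata header saying which pages it runs on. A minimal sketch of the kind of augmentation I mean (the OER search site here is a made-up placeholder):

    // ==UserScript==
    // @name      OER everywhere (sketch)
    // @include   http://*
    // ==/UserScript==
    (function () {
      // inject a link to open resources related to the current page;
      // a real script would query a service like OER Recommender
      var a = document.createElement('a');
      a.href = 'http://oer.example.org/search?q=' +
               encodeURIComponent(document.title);
      a.appendChild(document.createTextNode('Open resources on this topic'));
      a.style.cssText = 'position:fixed; bottom:1em; right:1em; ' +
                        'background:#ffc; padding:0.3em; z-index:9999;';
      document.body.appendChild(a);
    })();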

Was this a successful experiment? Well, in my mind, not totally. I really wanted to show more examples, like WikiProxy, of Greasemonkey scripts that dynamically link to supplemental resources without a lot of semantic underpinnings. You know, loosely connected. But I couldn’t get WikiProxy working properly, I ran out of time in my own development efforts (more on this soon), and as much as I think the new OER Recommender by COSL is a good illustration of this technique, it felt kind of superfluous to demo it where it was actually developed 😉

I also think one can validly challenge the extent to which the techniques I demonstrate actually enhance learning. I think they do, but I can see how others might disagree. So my question to you: what other ‘client-side enhancements’ have you found that learners can use independently to augment existing content and improve their learning experience on the web? I am really interested to hear more ideas!

There are other pieces that I didn’t get to show, but if you are interested you can find out more in my del.icio.us links for the presentation. Specifically: how you can perform some of these tricks in other browsers (through things like Turnabout and Creammonkey), how organizations can distribute these tools through mechanisms like custom toolbars and customized portable apps on cheap thumb drives, and how you can turn Greasemonkey scripts into proper extensions. Enjoy! – SWL

My search is over – Yahoo Pipe to constrain search to linked-to pages

http://pipes.yahoo.com/pipes/zhmqsw_52xG4gUz8e_gC8A/

Wouldn’t you know it: a few seconds after I finished commenting on Tony Hirst’s blog that my personal quest has been for a way to dynamically constrain a search to only those pages linked from any webpage, I read the rest of the post and learned that he had already done it! A simultaneous ‘Doh!’ and ‘Hooray!’

From a usability perspective, what I’ve always wanted to see is this as a bookmarklet that passes along the URL of whatever link-containing page you’re on, so I’ll look into that; but Tony has demonstrated how this is seemingly quite straightforward with Yahoo Pipes.
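The bookmarklet half looks like the easy part – something along these lines, though I haven’t tested it, and the “url” parameter name would need to match whatever the pipe’s text input is actually called:

    javascript:location.href =
      'http://pipes.yahoo.com/pipes/pipe.run' +
      '?_id=zhmqsw_52xG4gUz8e_gC8A&_render=html' +
      '&url=' + encodeURIComponent(location.href);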

Why is this important to me? Think of all the collections of links out there – people who have painstakingly vetted links on a particular subject and collected only those they felt were important. With one click you can search just those linked sites. It can definitely be argued that this approach always runs the risk of missing stuff outside of those constrained sites, but there are times when limiting the context is useful and important. – SWL