BC “Learning Content Strategies” meeting


Most of you will know that one of my long-term projects has been to help share online learning resources across BC and beyond. One of the main stumbling blocks to effective sharing has been the diverse (divisive?) environments in which the materials are produced/housed/assembled (at last count there are at least 5 major flavours of LMS in our 26 institutions, as well as sundry other ones and non-LMS approaches as well).

I’ve always held that a top-down “standards” approach isn’t the answer; not only is my project not big enough to compel that kind of change, I am thoroughly sceptical that any of the current standards-based approaches will actually work across all of these LMS. Plus, for any “solution” to be adopted, it needs to reflect local realities and priorities at institutions, and be seen to solve local problems before (or at least, as) it solves the ones of sharing outside the institution.

Add to this the fact that I am loath to highlight only solutions that would simply further entrench LMS-based approaches, or that don’t take into account the learning we’ve all been doing about the role of openness and the new possibilities which social software and other loosely-coupled technologies can offer, and we faced a quandary: how to frame a meeting that brought up the issues, highlighted the common pain points, and ALSO presented both LMS-oriented and other approaches to learning content/learning environments?

Thanks to a suggestion from Michelle Lamberson, we decided that framing the day around the conceit of “Learning Content Strategies” was the perfect way to bring all of this together (seems obvious now, but we struggled for a while for the right frame.)

After a very brief intro from me, we kicked off the day with an hour-long discussion of common problems and challenges around learning content. I facilitated this, getting the discussion going with a set of questions that people answered using iClickers. (As an aside, while I recognize lots of potential problems with clickers, I was frankly blown away by how well the iClicker technology itself worked. Truly simple to use, and it functioned flawlessly.) It felt to me like a good start to highlighting some of the common problems people are facing, and it laid the groundwork for the rest of the day.

The next step was to showcase the work of a few institutions around the province who, in my experience, have developed different approaches to developing content independent of their LMS environments. Katy Chan from UVic, Enid McCauley from Thompson Rivers and Rob Peregoodoff from Vancouver Island University all graciously shared with us some insight into their content development processes and the factors that shaped their choices. The important thing that came out of this for me is that none of these approaches is the “right” one, just the “right” one for their context – they ranged from standalone HTML development, to industrial XML production, to Macromedia Contribute, and each had its strengths but also its possible complications. It’s a tradeoff, you see, like any choice. But they certainly gave their peers in the audience lots to think about.

After lunch I trotted out my dog and pony show, highlighting some of our offerings from BCcampus as well as launching the new Free Learning site. I still live in hope that some of these offerings will resonate with our system partners (a boy can dream), and already there seems to be some renewed interest, which is heartening.

The afternoon was given over to a completely different set of approaches to the problem. Like I said, while the vast majority of our institutions use an LMS as their primary online learning platform, that is not the future, or at least not the future I hope for, so we wanted to expose people to some approaches already happening in the province that sit outside the LMS, ones that use loosely-coupled approaches or “openness” as an enabler.

First up was Brian Lamb and Novak Rogic from UBC, and I’m pretty sure their demos of moving content to and fro using WordPress, Mediawiki, their fabulous “JSON includes” and “Mediawiki embeds” techniques left some jaws dropped on the floor. A hard act to follow indeed, but Grant Potter from UNBC did a great job, showing off their own work with blogs and wikis for shared and distributed content development.

Finally, since all the presentations to date had been from a somewhat “institutional” perspective, I thought it important to get an instructor up there to show what a single person can do with the current technologies, and who better to do so than Richard Smith from SFU. Worried though he claimed to be about following @brlamb and co. on stage, he needn’t have worried – his session was a blast, showing off many of the web 2.0 tools that he uses with his students. I think some of the biggest value from that session was in challenging the notion of the hand-held instructor, the assumption that media must have high production values to be useful, and the idea that this tech is just for “distance” learners. Richard basically made the case that he is able to offer more than 100% of the seats in his class by always having remote and archived materials for the students. I’m pretty sure this turned more than a few heads.

In the end, my nicely laid plans for orderly roundtable discussions were thrown out the window, and I tried as best I could to facilitate a whole-room discussion on the fly. I think it went pretty well; we tore through many of the real challenges people face, from single sign-on to copyright, offering some new ways to think about these and identifying what I hope are some things we can keep working on together as a province.

In all honesty, this meeting went as well as, indeed even better than, I had hoped. My goal was not to propose a single solution (as I do not believe there is just one) but to bring the problems to light, to get people to acknowledge they exist, and to give them a chance to see some different ways to deal with them and to talk amongst themselves. My experience with this group, and with the ed tech professionals in BC in general, is: give them a chance to talk and share, and don’t be surprised at the number of collaborations and shared solutions that emerge. I have great hope that this is just the start of the conversation and of renewed efforts. – SWL

My Recent OpenID Preso


Somehow I think this is likely of limited value if you are reading this blog – I don’t think I really know that many people who don’t know what OpenID is or why we in higher ed should be paying attention to it. But when I gave this talk during a ‘student authentication’ session at the recent WCET conference in Atlanta, a scant 2 people in a room of 50 put their hands up when asked if they had heard of OpenID. So maybe there are still some folks who might find this useful. Anyways, here it is, hope it helps. (As an aside, I was presenting alongside some scary biometrics ‘1984’ remote proctoring tech in a session entitled “Student Authentication: Do You Know Who is in Your Classroom?” My joke, which I didn’t dare make to the crowd, was that I thought the session was titled “OpenID – Are students still the same people when they are in your classroom?”) – SWL

New Round of BC’s Online Program Development Fund


So while this may be of interest mostly to local readers, I thought I’d post on it because I think there are a few things we are doing in this round that may be of wider interest.

This is the 5th round of BC’s Online Program Development Fund (OPDF), a province-wide fund that BCcampus (my employer) administers on behalf of the provincial Ministry of Advanced Education.

This year’s $750K call is notable, I think, first off for its inclusion of “Co-created Content” as one of the funding categories. This is an effort to acknowledge this phenomenon and support the co-creation of learning resources by students and faculty under a license that seeks to offer these for successive groups of students to build on.

The second thing possibly of more general interest is a new inclusion which asks the proponents to describe their strategy for seeking out existing freely reusable learning resources that could be leveraged in their project. This is an effort to promote one of the values underlying the fund: that good, free content should be reused where appropriate. The call does not dictate that existing content must be reused, but instead simply asks proponents what efforts they have made in this direction. It also does not stipulate where this content might come from – sure, we’d love people to look in SOL*R for suitable reusable content, but we hope they’ll bring in pieces from the thousands of other places you can find free learning resources online.

Finally, another small innovation in the call is around how to promote interoperability practices. Like it or not, the majority of the content that’s been produced through past funds has been done in one of the course management systems supported in our province (WebCT 4, 6 and Vista, Blackboard, Desire2Learn and Moodle, plus a few home-grown ones, are the current crop). While it is seductive to think one could simply specify a “standard” for content, this is for me problematic because a) it would be a top-down approach that would likely not reflect the actual practices in the province and b) it almost certainly wouldn’t simply “just work” anyways, because of the uneven support across the CMS for even basic specs like Content Packaging. Instead, this call is an attempt to get people to at least factor the issue into their planning and describe how they plan to address it. From my perspective there is not ONE way to get content to work across these systems, nor does it have to be in any of these systems at all. What it does need to be is as useful as possible to other faculty in the province (and ideally outside of it too, but the fund’s mandate is specifically to foster content development in the province) regardless of the choices they make on their own, and the call simply asks people to describe their strategy to achieve this.

Blogging about “official” work stuff always makes me uncomfortable – not only have I been known to cock up before, it’s not an “official” part of my job. As is always the case, the words here represent my personal views and do not necessarily reflect those of my employer. If you want to know more about the OPDF, then read the call directly, don’t just take my word on it! – SWL

Attending Learning Impact 2007 – Reworked Schedule for Tuesday Strands

Google Docs & Spreadsheets – Learning Impact 2007 – Tuesday Strands

I am in Vancouver attending the IMS Learning Impact (formerly Alt-i-lab) conference until Wednesday. The conference goes until Thursday, but I am giving a talk at the BCLA conference as well as visiting with Brian at UBC, so I will miss the last day.

So far it has been about par for the course for a ‘biggish’ educational technology conference. Stunningly dull keynotes but lots of great conversations with some very smart people, in truth really the reason I am here.

Yet what’s frustrating to me is that for a group dedicated to using networks and computers for learning, there is no innovation going on in how the conference itself is conducted. There is no online directory of attendees (at least not one I can find), no apparent backchannel or other way for attendees to network digitally. Which is why I am posting this here. Above is a link to a reworked schedule for today, done in Google Docs, which shows the parallel strands as, well, parallel strands, not separated onto individual pages like on the website. Uggh! There are actually quite a few sessions of interest, but instead of having to flip back and forth, you can just see the strands side-by-side and decide where you want to be. Hardly innovative, but apparently nobody thought to do it. I used Excel’s “import data from the web” capabilities (which, if you’ve never tried them, provide another great way to scavenge data for mashups) and then simply imported that doc into Google Docs. Easy peasy. Here’s hoping for some good sessions for the rest of the day! – SWL
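(For the curious, the side-by-side pivot described above is easy to script as well. Here is a minimal sketch in Python of just the reshaping step, using made-up strand and session names – the actual scraping from the conference site is omitted, and nothing here reflects the real Learning Impact program.)

```python
import csv
import io
from itertools import zip_longest

# Hypothetical per-strand session lists, as they might be scraped
# from the separate per-strand pages on a conference website.
strands = {
    "Strand A": ["Keynote Q&A", "Common Cartridge demo", "Tools interoperability"],
    "Strand B": ["Repositories panel", "Accessibility workshop"],
}

def strands_side_by_side(strands):
    """Pivot per-strand session lists into CSV rows with one column per
    strand, so sessions running in parallel line up side by side."""
    headers = list(strands)
    # zip_longest pads the shorter strand with empty cells
    rows = zip_longest(*strands.values(), fillvalue="")
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(headers)
    writer.writerows(rows)
    return out.getvalue()

print(strands_side_by_side(strands))
```

The resulting CSV imports straight into Google Docs (or Excel), one column per strand.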

bfree – export courses from Blackboard


Another useful pointer from Michael Roy at Wesleyan’s Academic Commons, bFree is a tool built by the University of North Carolina at Chapel Hill. It allows you to open a Blackboard course export or archive file, select the files you want and then export these as an independent website.

This might not seem like a lot to some, especially with supposedly mature content interoperability specifications to ease the movement of content between CMS, but frankly I did a little dance when I saw this.

My issue hasn’t actually been with Blackboard’s CMS (no one in B.C. runs it) but with the product they acquired, WebCT – specifically CE6 and Vista. I run a repository service for the province. We have funded both individual resources and full courses to be shared through this service. In CE6, there is currently no way to get a full course’s worth of content out of the system at one time in a way that works with any other system. You can take a ‘module’ at a time as an IMS Content Package, but not the whole course. It’s not that this wouldn’t be feasible; the exact same state of affairs reigned over CE4 until it came time to get everyone off that platform, when suddenly a tool that could export the entire course as an IMS package was created (the administrative Content Migration Utility). And it’s not like I am waiting around for WebCT/Blackboard to fix this; I was willing to develop a PowerLink that extracted the entire set of content modules at once in a format that could be used in other systems. Except, much to my chagrin, I learned that WebCT/Blackboard had systematically left out the module export functionality from their API, and there are no plans to ever include it. Meaning there is no programmatic access to export content packages out of WebCT CE6. If you want to move an entire course’s worth of content, do it one module at a time.
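(Those per-module exports do at least share a predictable skeleton: every IMS Content Package carries an imsmanifest.xml listing the files it contains, which is what makes stitching modules back together tractable at all. A minimal sketch of reading one in Python – the manifest below is a simplified, hypothetical example; real manifests carry an organizations section, more namespaces and metadata.)

```python
import xml.etree.ElementTree as ET

# A stripped-down, hypothetical imsmanifest.xml like the one found
# inside a per-module IMS Content Package export.
MANIFEST = """<?xml version="1.0"?>
<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <resources>
    <resource identifier="res1" type="webcontent" href="week1/index.html"/>
    <resource identifier="res2" type="webcontent" href="week1/reading.pdf"/>
  </resources>
</manifest>"""

# Map a prefix to the content packaging namespace for the queries below
NS = {"cp": "http://www.imsglobal.org/xsd/imscp_v1p1"}

def list_resource_files(manifest_xml):
    """Return the href of every resource declared in a content package
    manifest -- the files you would need to gather when stitching
    several per-module packages back into one course."""
    root = ET.fromstring(manifest_xml)
    return [r.get("href") for r in root.findall(".//cp:resource", NS)]

print(list_resource_files(MANIFEST))
```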

This is probably enough that they can claim not to be playing the content lock-in game, but if I were at an institution that had recently adopted WebCT CE6, I’d be asking what the exit strategy from the product was (you do have one, right? because it won’t be long before you’ll need one) and I’d shudder to think it amounts to “we’ll wait until WebCT offers us a good solution.” – SWL

Short Video on Common Cartridge


If you’ve ever tried to export a course from an existing CMS in a ‘specifications-compliant’ format, you’ll know that currently the best you can likely do is get the content as IMS Content Packages and, hopefully, the quizzes separately in IMS QTI format, leaving the rest of the course (discussion forums, assignments, etc.) embedded in the original location and needing to be recreated from scratch.

IMS Common Cartridge, recently demonstrated in action between Angel, Sakai, Blackboard and WebCT at the Alt-i-lab 2006 sessions, is an attempt to remedy this problem: to create a common standard for full course import and export between CMS, one also useful to publishers.

Above you can see a short video describing its promise and the effort that went in around it, and you can find out more about it on the IMS Working Group page. It is a worthy problem to solve, because IMS CP just doesn’t do the full job. Let’s hope some lessons have been learned in the years since IMS CP’s advent, and that support for Common Cartridge is more, let’s say, even than it has been for IMS CP. – SWL

alt-i-lab 2006 presentations available


If you’re an elearning standards geek, then there’s lots to sift through in this collection of presentations from the recent Alt-i-lab 2006 sessions in Indiana. And if you’re not, then be warned that forcing yourself to go through these is likely to aggravate any masochistic tendencies you may already harbour.

Part of me really wants some of these developments to come true, to deliver the promised ‘plug and play’ elearning environments described herein, and in my rational moments I know that 10 years really isn’t that long for a field like this to coalesce around an open set of interoperability specs. And yet it would be hard to fault a newcomer, looking at these presentations and seeing how much still remains to be done, for wondering how anyone manages to develop quality online learning experiences now (and how many PhDs will be required to operate the CMS of the future). – SWL

Update – September 15, 2006: Don’t you just hate it when people reorganize their websites and don’t use mod_rewrite and other tricks to make the old URLs work? Note the new URL for the presentation, AND the requirement to sign up for a free account to get at it. Ickk!

ECAR Report – Identity Management in Higher Education


Being just a pleb who doesn’t work for anyone with an ECAR membership, I’ve only been able to read the public ‘key findings’ document from this recent ECAR study, “Identity Management in Higher Education: A Baseline Study” (and hey, I’m not really complaining that much; it is nice that they make the highlights available for free). So maybe the fuller report speaks to some of my concerns, but what I found striking about this report was the apparent disregard, in the institutions surveyed, for many of the internet-wide identity projects currently struggling to be born (e.g. Sxip, OpenID, etc.). Actually, that’s not surprising at all; we’ve long seemed to prefer to invent (or at times re-invent) our own wheels in higher education, thinking our situations to be so different or needing to ‘own’ the results for academic or political reasons. Where this gets interesting for me, though, is the whole push within what I call the ‘loosely-coupled learning tools’ camps for instructors and students to simply adopt free or centrally provided services that already exist out on the internet (e.g. Flickr, Blogger, etc.). This push is not going away, nor should it, but it currently drives many IT directors and other campus service providers nuts.

It was about 2 months ago now, during the course of a private conversation about ‘loosely coupled or openly integratable learning management systems,’ that I half-jokingly threw out the intellectual stink bomb that campuses could in the future easily turn to service providers like Google or Yahoo or Microsoft for their central identity services. Literally a few days later came announcements about Gmail offering domain-wide hosting services (and I thought Microsoft too, but maybe this was old news; I can’t find the reference). Don’t get me wrong, I am not ADVOCATING this as a solution, only saying that a) you will start to see more offers like this from big ‘free’ players outside your organization to come ‘inside’ your organization, and along with the free services come implications of who owns what and where it should reside, so you had better have already thought through how to talk to your CIO/CEO/President about this, because on a sheer cost basis it is going to be hard to justify why not; and b) it is a GOOD thing for institutions to start to consider that their students have lives and identities that precede and extend far beyond the time they attend their institutions, and that being able to easily fit into that student’s online identity (rather than the other way around) is going to be an increasing expectation.

So, good overview of the state of affairs in higher ed, and maybe the full report touches on some of these issues, but it didn’t read like a vision for the future for me. – SWL

Database of JISC-funded / ELF-related projects


We don’t really have a national-level funding body for higher ed in Canada – education is considered a provincial jurisdiction, and while there are a few bodies that have tried to help coordinate activities at a national level, in truth it is hard not to look on with envy at our commonwealth cousins in Australia and the U.K. and the seemingly comprehensive strategies for implementing eLearning frameworks that their national bodies have developed. (That said, the flip side of the argument, which I think is very valid, is that when there is no ‘central’ body, your solutions are hopefully forced to be more grassroots and to come from the system itself, not be imposed upon it.)

This page lists many of the JISC-funded ones in the UK. I have no idea if it is officially ok to link to this, but based on the principle of “if I can point to a public URL on the web, then it is bloggable” here it is. In addition to getting a sense for the breadth of projects currently being funded under the ‘ELF’ rubric, you can get a combined RSS feed for all the projects listed here. – SWL
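(Rolling several project feeds into one, as that combined feed does, is itself a small scripting job. A minimal sketch in Python, using two tiny hypothetical RSS feeds inline in place of real fetched URLs – the project names are made up, and a real aggregator would fetch and sort by date.)

```python
import xml.etree.ElementTree as ET

# Two tiny sample RSS 2.0 feeds standing in for individual project feeds.
FEEDS = [
    """<rss version="2.0"><channel><title>Project A</title>
       <item><title>A: kickoff</title></item></channel></rss>""",
    """<rss version="2.0"><channel><title>Project B</title>
       <item><title>B: first release</title></item>
       <item><title>B: demo</title></item></channel></rss>""",
]

def combined_item_titles(feeds):
    """Merge the <item> titles from several RSS feeds into one list,
    the way a rolled-up 'all projects' feed presents them."""
    titles = []
    for xml_text in feeds:
        root = ET.fromstring(xml_text)
        titles.extend(item.findtext("title") for item in root.iter("item"))
    return titles

print(combined_item_titles(FEEDS))
```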

Update: Apparently the more official list of ELF projects, and one less likely to disappear, is available at http://www.elframework.org/projects/, though I couldn’t see rolled-up RSS feeds there, which is one thing I liked about the ‘experimental’ page.