IMS Compliance Program

http://www.imsproject.org/conformance/index.html

I could be wrong, but this seems new, and welcome at that. IMS has announced this new Compliance Program, which outlines methods for developers of content, services and applications to provide evidence to support conformance claims based on self-testing, and in so doing earn the claim of “IMS Conformant.” While we have had the ability to verify SCORM conformance for some time now, this is the first time, as far as I know, that claims concerning IMS specifications that aren’t included in SCORM (and there are many) will have some form of verification applied to them. Announcements to this effect should start to circulate later this year as the first products work their way through the process. – SWL

“Monoliths,” APIs and Extensibility – A presentation on the past and future directions of CMS

http://www.edtechpost.ca/gems/CMS_overview.ppt

I was very fortunate recently to deliver the above talk to a CMS Task Force at UBC on the overall lay of the CMS land. It seems relevant to share it here, especially in light of a recent post by James Farmer on integrating open source pieces with WebCT, and the great follow-up by Michael Feldstein.

I think Michael’s read is mostly accurate. As I try to lay out in the presentation, CMS have evolved as a series of “wrappers” around a set of applications, and there were good reasons for this innovation (it was an innovation when it began 10 years ago) in terms of handling scale and providing some stable service across all or many departments in a post-secondary institution within a limited budget.

But this model, which does tend towards monolithism, is now 10 years old. In part because of rapidly maturing alternative models (service-oriented architectures and distributed application development environments in general), in part because of pressure from customers to allow more pedagogically-driven choices in their tools, and in part because of challenges from Open Source and elsewhere, all of the CMS, be they commercial or open source, are moving, some slowly, some more quickly, towards increased extensibility and interoperation with other tools. This is in my mind an undeniable trend, and the issue for organizations is not if this will happen, but how best to obtain the core services and an acceptable level of “service” while increasing the amount of flexibility and choice for instructors and students, and at the same time not increasing the cost (and hopefully decreasing it, if you’re really adept).

I don’t think the commercial CMS companies are going away, at least not anytime soon. There are still many organizations (often small ones, but not always) for whom more sophisticated ‘elearning architecture’ approaches, “best-of-breed,” or the choices (and demands) facilitated by open source are not (yet, maybe ever?) realistic choices. There is value in providing a set of tools (however limited you might feel these tools to be) in an integrated environment that can with relative ease tie into other parts of your infrastructure, and for which you need to hire application administrators, not developers, to run. But even those customers want more freedom to make choices, and the CMS companies know this and are trying to accommodate it without cutting off their own noses. But it’s also clear that they are under fire, and that many institutions will have the wherewithal to adopt or create what Michael terms a “Learning Management Operating System” into which they can insert, or on which they can build, different application choices and approaches. As I read it, the impetus behind OKI, and, to the extent that it embodies openly agreed-upon APIs, Sakai, is a step in that direction. Michael’s prediction of a timeline (about 5 years) also seems about right; it will take a while for the implications of this approach to flow through, and for the various systems needed to implement it to mature to the point where each implementation is not a large software development initiative of its own. But it is coming, and it will change the landscape of these systems considerably.
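To make the “Learning Management Operating System” metaphor a bit more concrete, here is a toy sketch of the separation being described: a small core that exposes stable services, and tools that plug in against a published contract rather than being compiled into the monolith. Every name in it is invented for illustration; this is not OKI’s, Sakai’s or any vendor’s actual API.

```python
# A toy sketch of the "Learning Management Operating System" idea:
# a small core exposing stable services, with tools plugged in against
# a published contract rather than compiled into the monolith.
# Every name here is invented for illustration only.

from abc import ABC, abstractmethod


class CoreServices:
    """Stand-in for the LMS 'kernel': rosters, grade storage, etc."""

    def roster(self, course_id):
        return {"bio100": ["alice", "bob"]}.get(course_id, [])

    def record_grade(self, course_id, student, score):
        print(f"[{course_id}] {student}: {score}")


class Tool(ABC):
    """The published contract any third-party tool implements."""

    @abstractmethod
    def launch(self, core, course_id):
        ...


class QuizTool(Tool):
    """A pluggable tool: it only ever talks to the core's public services."""

    def launch(self, core, course_id):
        for student in core.roster(course_id):
            core.record_grade(course_id, student, 0.8)  # pretend quiz result


# Tools are chosen and registered by the institution, not baked in by the vendor.
registry = {"quiz": QuizTool()}
registry["quiz"].launch(CoreServices(), "bio100")
```

– SWL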

Open Knowledge Initiative Delivers XOSID Specification

http://www.imsglobal.org/news.html

The specification geeks in the crowd will want to note (and probably have already seen) this joint announcement by IMS and OKI that they have released an XML binding, or representation, of OKI’s Open Service Interface Definitions (OSIDs), previously only officially available as Java APIs. Wilbert Kraan at CETIS has written an article which, as usual, does an excellent job of detailing some of the implications of this work.
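If you’re wondering what an XML binding of a Java API buys you, here is a rough sketch of the idea: once the service definitions live in XML, any language can read them and generate its own stubs, instead of being tied to Java. The XML structure below is invented for illustration; consult the actual XOSID release for the real schema.

```python
# A rough illustration of what moving an API definition from Java into XML
# buys you: the same service description becomes readable from any language.
# The XML structure below is invented for illustration only; consult the
# actual XOSID release for the real schema.

import xml.etree.ElementTree as ET

XOSID_FRAGMENT = """
<service name="Repository">
  <method name="getAsset">
    <param name="assetId" type="Id"/>
    <returns type="Asset"/>
  </method>
</service>
"""

service = ET.fromstring(XOSID_FRAGMENT)
for method in service.findall("method"):
    params = ", ".join(
        f"{p.get('name')}: {p.get('type')}" for p in method.findall("param")
    )
    returns = method.find("returns").get("type")
    # From here you could emit a stub in Python, Java, or anything else.
    print(f"{service.get('name')}.{method.get('name')}({params}) -> {returns}")
```

– SWL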

Educause Quarterly Article: Changing Course Management Systems

http://www.educause.edu/apps/eq/eqm05/eqm05210.asp

This is one of those articles that ranks in the “could have been important but ends up being too anecdotal” category. The authors are right in pointing to course conversion as both a potential cost issue and a huge concern in switching CMS. All one has to do is ask a collection of system administrators or educational technologists who support almost any of the major CMS and they will nod knowingly, or start frothing at the mouth (depending on whether they’ve actually had to do it en masse or not).

But this is one area where higher ed suffers greatly from diverging from the corporate training world – we have no equivalent to an ADL to provide certification of these products on IMS Content Packaging (the larger-scoped SCORM never having taken off within higher ed, for good reason). So we are left to rely on the self-reporting of the CMS companies about their compliance with the Content Packaging and QTI specifications.
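In the absence of a certification body, about the best an institution can do is run its own smoke tests. Here is a minimal sketch, in standard-library Python, of the kind of first-pass check a CMS admin might apply to an exported package; real conformance testing involves far more (schema validation, checking that every href resolves), but even this crude a check will catch some broken exports.

```python
# Smoke test for an exported content package: does it contain a parseable
# imsmanifest.xml at the root, and how many resources does it declare?
# This is only a first-pass sanity check, not conformance testing.

import sys
import zipfile
import xml.etree.ElementTree as ET


def smoke_test_package(path):
    with zipfile.ZipFile(path) as pkg:
        if "imsmanifest.xml" not in pkg.namelist():
            print("FAIL: no imsmanifest.xml at package root")
            return False
        try:
            manifest = ET.fromstring(pkg.read("imsmanifest.xml"))
        except ET.ParseError as err:
            print(f"FAIL: manifest is not well-formed XML ({err})")
            return False
    # Count declared resources, whatever namespace the exporting CMS used.
    resources = [el for el in manifest.iter()
                 if el.tag.endswith("}resource") or el.tag == "resource"]
    print(f"OK: manifest parsed, {len(resources)} resource(s) declared")
    return True


if __name__ == "__main__":
    smoke_test_package(sys.argv[1])
```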

For a while, the story that the churn of these specifications was what caused the lack of consistent implementations seemed plausible, but it seems less and less so. And fair or not, it’s no small part of the reason why, on the LOR front, people are increasingly resistant to the notion of trusting their content to the big CMS vendors, as they have yet to exhibit content exports from their systems that will work well in their competitors’ systems.

The irony of this article is that D2L, one of the two companies mentioned here, has in fact done a lot of work to be able to convert content from their competitors’ systems as part of their business growth strategy. So if this is the case in trying to convert to them, one can only wonder what it might look like going between some of the others. IMS CP got started as a spec shortly after the formation of IMS in 1997, and was an early goal for good reason. From the customers’ perspective, it represented a major risk-mitigation strategy for adopting one of these large systems (at a time when arguably the entire domain space was still in a very nascent state). Eight years later, one has got to ask: has it worked? Has the risk been mitigated? Ask your CMS admins and content developers; I’m sure they will tell you what they think. – SWL

WebCT announces participation in IMS Tools Interoperability Working Group

http://www.webct.com/service/ViewContent?contentID=25561480

I’m sure the chattering masses (hey, I don’t exclude myself from this grouping) will have something to say about this one – yea, as the prophets foretold, in the year of the mark of the sign of the beast, the ‘evil empire’ took control, yada yada yada – but from where I’m sitting, if there’s a way that 3rd party learning tools can interoperate with different learning environments that is not based on proprietary APIs, that seems like a good step forward. If, instead, the Tools Interoperability specification becomes ‘Powerlinks for everyone,’ well then clearly the eschaton is near, so praise the lord and pass the hand grenades 😉 – SWL

New IMS Specification Download Pages and License

http://www.imsglobal.org/specificationdownload.cfm

IMS has a re-designed website, and a fairly unwelcome addition to the process of getting access to the specification documents. As they state on the individual specification pages, “HTML documents may be viewed online, but may not be printed without permission. To download an electronic copy for printing, please go to the specification download page.” I guess this makes some sense. They aren’t restricting access, just asking people to agree to their license, which on the surface seems o.k., though it is not straight boilerplate, and systems implementors will want to think through what implications, if any, Section II, “License Terms for Implementation,” has for them. (That said, I can’t wait to read Stephen’s reaction when he finds out he needs to re-enter his personal data every time he wants a new copy or a different file.) – SWL

CETIS ‘Interoperability in Action’ Video

http://www.x4l.org/video/index.shtml

Derek Morrison at Auricle points to this video from CETIS called ‘Interoperability in Action’ which is well worth a watch. It takes you through a step-by-step scenario of a user adding an object to the Intrallect Intralibrary-driven JORUM repository, and then a second user accessing that object, extending an existing course, and uploading that course to a variety of CMS/VLE.

At the very least, this illustrates one possible scenario and can serve as a starting point for discussion on other possible authoring and re-use scenarios (trust me, with my perfect 20/20 hindsight vision, you do want to start with scenarios).
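For those who have never cracked one open, the common currency in that round trip is the content package itself. Here is a minimal sketch (Python, standard library only) of the sort of package a VLE import screen expects; the manifest is deliberately stripped down, so treat it as illustrative rather than schema-complete.

```python
# The smallest package a VLE import screen might accept: one HTML page
# plus a manifest declaring it. The manifest is deliberately stripped
# down (real exports carry metadata, organizations and schema location
# declarations); treat it as illustrative rather than schema-complete.

import zipfile

MANIFEST = """<?xml version="1.0"?>
<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" identifier="MANIFEST-1">
  <organizations/>
  <resources>
    <resource identifier="RES-1" type="webcontent" href="lesson.html">
      <file href="lesson.html"/>
    </resource>
  </resources>
</manifest>
"""

with zipfile.ZipFile("minimal_package.zip", "w") as pkg:
    pkg.writestr("imsmanifest.xml", MANIFEST)
    pkg.writestr("lesson.html", "<html><body><h1>Hello, repository</h1></body></html>")

print("wrote minimal_package.zip")
```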

Is this the last word in learning content authoring and reuse systems and scenarios? Of course not. It’s more like the first word – a start in demonstrating ‘round trip’ content authoring and re-use using de jure standards, which is more than a lot of us can say. – SWL

Alt-i-Lab papers and highlights

http://members.imsglobal.org/forum/ims/dispatch.cgi/f.altilabtech

Lucky for us all, IMS has posted the supporting papers and slides in a publicly accessible area. On day one of the sessions I was assigned to the Content working group. The promise of this group had been to tackle some of the questions laid out in the stimulating “Repository Management and Implementation” overview paper. But somewhat disappointingly for me, day one seemed given over mostly to talk of existing or emerging digital library standards, and while these are likely of interest and pertinent to others, I found it hard to stay engaged.

Which led me on day two to migrate over to the somewhat oddly named “Tools” group, which ended up hosting a far more fascinating discussion of the world and problems of ‘learning design.’ The day in this group ended up being more of what I think of as a ‘working group’: real experts (James Dalziel, David Wiley, Bill Olivier and Gilbert Paquette, amongst others) hashing out real problems with the specification and what problems it is supposed to solve. It was both an honour and a learning experience to be able to sit in.

There really are a lot of worthwhile papers in this directory, and they represent some of the state-of-the-art thinking in the field, so take your time going through them. Some of the PowerPoint slides are also thorough enough to stand apart from their accompanying talks; Brad Wheeler’s talk on how Open Source supports Open Specifications, Chris Etesse from Blackboard on their Ed Tech framework, and Fabrizio Cardinali from Giunti’s romp through the future of personalized learning all seem worthy of mention.

While there were lots of stimulating papers and talks, ultimately my first Alt-I-Lab will end up being remembered more because of the relationships begun and re-kindled. In addition to meeting fellow bloggers David Wiley and Raymond Yee for the first time, I spent a delightful evening with David Davies and Mark Stiles over a tasty Middle Eastern meal (go figure, easily the most popular restaurant in the small town of Redwood City.) All in all, well worth it, and hopefully I’ll find a way to next year’s event. – SWL

Joint IMS/CNI Whitepaper on interoperation between different types of ‘repositories’

http://www.imsglobal.org/DLims_white_paper_publicdraft_1.pdf

I can only assume that the only reason someone didn’t point this paper out to me during my recent thrashing about concerning the difference between ‘institutional’ repositories and ‘learning object’ repositories is that, like me, they had never seen it before (or maybe you’re all just sadists and like to watch me flail about in public!).

Well, in any case, hallelujah! This draft paper by Neil McLean and Clifford Lynch from June 28, 2003 is in my mind a model of clarity on why these beasts are different (for one, the ‘transient’ versus ‘archival’ nature of their contents), but also on why and how they need to interoperate.

Which is where I’ve landed on this topic – we need distinct types of repository software because they fill distinct end-user needs. But by implementing common open protocols and using structured markup languages that can be mapped, we keep open the possibility of interoperating if and when this makes sense. And I stress that last ‘if’ – the next piece in the puzzle I am waiting to see is convincing use cases, or better yet, convincing demonstrations, of search interfaces across catalogues of heterogeneous materials (e.g. records for books, ‘eprints’ and learning objects all at once) that don’t just confuse the matter entirely.
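To be clearer about what I mean by that last bit: the naive approach is to map every record type down to a tiny common core and search across that, something like the sketch below (Python, with all field names and sample records invented for illustration). The keyword match is the trivial part; it is the mappings, and whether the merged results mean anything to a user, where I suspect the real trouble lies.

```python
# Map each record format down to a tiny common core (title/creator/type)
# and search that, keeping the original record around for display. All
# field names and sample records are invented for illustration.

COMMON_FIELDS = ("title", "creator", "type")


def from_book(rec):
    # e.g. derived from a MARC-like library catalogue record
    return {"title": rec["245a"], "creator": rec["100a"],
            "type": "book", "_raw": rec}


def from_eprint(rec):
    # e.g. derived from a Dublin Core record harvested over OAI-PMH
    return {"title": rec["dc:title"], "creator": rec["dc:creator"],
            "type": "eprint", "_raw": rec}


def from_learning_object(rec):
    # e.g. derived from an IEEE LOM record in a learning object repository
    return {"title": rec["general.title"], "creator": rec["lifecycle.author"],
            "type": "learning object", "_raw": rec}


catalogue = [
    from_book({"245a": "Introduction to Genetics", "100a": "Smith, J."}),
    from_eprint({"dc:title": "The Genetics of Maize", "dc:creator": "Jones, K."}),
    from_learning_object({"general.title": "Genetics Self-Quiz",
                          "lifecycle.author": "Lee, P."}),
]


def search(query):
    q = query.lower()
    return [r for r in catalogue
            if any(q in str(r[f]).lower() for f in COMMON_FIELDS)]


for hit in search("genetics"):
    print(f"[{hit['type']}] {hit['title']} ({hit['creator']})")
```

– SWL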