OLNet Fellowship – Week 2 Reflections

So I’m a little behind on this (since I’m now in Week 3) but still wanted to jot a few notes down, as I had some fantastic discussions last week.

Meeting with JORUM – Using DSpace as a Learning Content Repository

One of the highlights last week was a trip to Manchester to meet with Gareth Waller and Laura Shaw of the JORUM project. Back when we started our own repository work in BC, I liaised with folks from JORUM, setting up a few conference calls to share details on how we were tackling similar problems, but we’d fallen out of touch. Meeting Jackie Carter last January at ELI helped facilitate this visit, which was a chance to renew the connection.

One reason I wanted to meet was that JORUM’s model is very similar to our own, so I wanted to see if my ideas on how to track OERs after they’ve been downloaded from a repository resonated with them, and whether they were already employing some other technique to do so. It turns out the ideas were of interest, and (as I had suspected) these are numbers they were not currently collecting but were eager to have, so that was a useful vote of confidence.

But the other major reason for my visit was to learn more about the work they had done on JORUM Open to turn DSpace into a platform for sharing learning resources. It had been almost 4 years since I last looked at DSpace and concluded that while you could try to jimmy a LOR into it, it wasn’t an ideal fit – DSpace “out of the box” really caters to the deposit and archiving of documents but isn’t optimized to deal with the specialized (read “arcane”) formats of learning content.

Which is why I wanted to see how the JORUM folks were doing it; sure enough, Gareth Waller has coded many new features into the product that make it a much better fit for “learning” content. While I’m not yet certain it provides a simple exit strategy out of our existing commercial platform, the work Gareth has done represents a big step towards that, and I would highly recommend that any other institutions using DSpace specifically for learning content contact him.

Planning for Succession – How to enable what comes after the LMS

The rest of the week was spent with my nose to the grindstone, trying to code up the hooks to incorporate Piwik tracking codes into resources uploaded to SOL*R. As a treat that weekend, I travelled to Cardiff, Wales, my old stomping grounds from my graduate degree days, to spend 3 nights with Martin Weller and his family.

We spent most of the weekend biking around the city and a good deal of time in Llandaff Fields, near Martin’s home. On Sunday afternoon we did a large circuit of the park while Martin’s daughter was at riding lessons, and it was one of those settings and strolls that beg for epic conversation. And this did not disappoint. Two ideas in particular resonated with me.

The first was the notion of “succession” of technology, to borrow a metaphor from ecology. Martin has written on this a number of times before, both in articles and in his book on VLEs. But we were discussing it in the context of the recent acquisition of Wimba and Elluminate by Blackboard (as well as in light of my recent reading of Lanier’s “You Are Not a Gadget,” in which he discusses the ideas of “technological lock-in” and “sedimentation”), so we put a slightly new spin on it, I think.

Now metaphors can both enable and obscure, but to follow this one for a bit: one can look at the current institutional ed tech landscape as a maturing ecosystem in which variety is diminishing and certain species are becoming dominant. But far from reaching an ultimate stable climax, there are disruptors, the latest and possibly largest being the financial crisis. These disturbances open the opportunity for new species to flourish. But… unless we’re suggesting the disturbances are so large as to restart the entire succession process (which some indeed do suggest), we’re likely instead to see adaptations to this specific force, often in the form of seeking cheaper options.

So far, a pretty conventional story – mature open source alternatives scoop up some existing customers when the price point gets too high. Except this is where I see a real opportunity for the next generation approach to creep in (I’m pretty much going to abandon the metaphor here, as I’m no ecologist, that’s for sure). Some of us have been enthused by the prospect of Loosely Coupled Gradebooks as a technology that can unseat the dominant, monolithic LMS. But to date there have been only a few convincing examples, and it seems like a bit of a “can’t get there from here” problem (made worse by Blackboard’s predatory acquisition strategy).

Which is where the bridging strategy comes in – we need to take Moodle (and I guess Sakai, though I am a lot less keen on that prospect) and focus on isolating and improving its gradebook function. As it is, Moodle already represents a very viable alternative (as the increasing defections to it show), but it doesn’t represent a Next Step, nor will adopting it “as-is” move online learning in formal contexts further. Adopting it while developing its gradebook functionality, so that it ultimately becomes the hub for a loosely coupled set of tools, just might. Maybe this isn’t that revelatory, but it became clear to me that a path forward for schools looking to leave not just Blackboard, but LMS/VLEs in general, goes through Moodle as it is transformed into something else. At least that seems doable to me, and something I hope to discuss with folks in BC as a strategy.

A new Network Literacy – Sharing Well

Throughout our walk, the second recurring theme was how, for scholars and students, bloggers and wiki creators, open source software developers and crowdsourcers of every ilk, there is a real talent to sharing in such a way that it catalyzes further action, be it comments, remixes or code contributions.

Howard Rheingold uses the term “Collaboration literacy” for one of the 5 new network literacies he proposes, and I guess, barring any other contender, it’s not a bad term. But it does strike me that there is a real (and teachable) skill here, one that many of us have experienced: either in the “lazyweb” tweet so ill-conceived that it generates no responses at all, or in marvelling with envy at bloggers who manage to generate deep discussion on what seems like the barest of posts, yet one which clearly strikes the right note. “Shareability”? Ugh, right, maybe leave it alone – do we really need another neologism? Still, it does seem worthy of note as a discrete skill that people can increasingly cultivate in our networked, mash-up world.

OLNet Fellowship Week 2 – Initial Thoughts on Tracking Downloaded OERs

As I mentioned when I first posted that I was coming to the UK for this fellowship, my main focus is how to generate some data on OER usage after a resource has been downloaded from a repository. In looking at the issue, it became clear that the primary mechanism is actually the same one used to track content use on websites themselves: a “web bug,” employed much the way many web analytics apps do, except that instead of the tracking code being inserted into the repository software/site itself, it needs to be inserted into each piece of content. The trick then becomes:

  • how do we get authors to insert these as part of their regular workflow?
  • how do we make sure they are all unique, and at what level do they need to be unique?
  • how do we easily give the tracking data back to the authors?

My goal was to do all this without really altering the current workflow in SOL*R or requiring any additional user accounts.
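To make the “web bug” mechanism concrete before getting into specifics, here is a minimal illustration: a tiny remote resource embedded in the content, so that every view fires a request to an analytics server, which logs the hit. The server URL below is a made-up placeholder, though piwik.php with an idsite parameter is the general shape of Piwik’s image-based (non-JavaScript) tracker.

    <!-- A classic 1x1 "web bug": each page view requests this invisible
         image from the analytics server, which records the hit.
         The analytics server URL is hypothetical. -->
    <img src="http://analytics.example.org/piwik.php?idsite=1&amp;rec=1"
         width="1" height="1" alt="" />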

The solution I’ve struck upon (in conversation with folks here at the OU) is to use Piwik, an open source analytics package with an extensive API, to do the majority of the work, and to then work out how to insert this into the existing SOL*R workflow. So the scenario looks like this:

1a. Content owners are encouraged (as we do now) to use the BC Commons license generator to insert a license tag into their content. As part of the revised license generator, we insert an additional question – “Do you wish to enable tracking for this resource?”

1b. If they answer yes, the license code is amended with a small HTML comment –

<!--insert tracking code here-->

1c. The content owner then pastes the license code and tracking placeholder into their content as they normally would. We let them know that the more places they paste it into their content, the more detailed the tracking data will be. We can also note that this is *only* for web-based (e.g. HTML) content.
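To picture what step 1 produces, the pasted block in a resource page might end up looking something like this (the license markup is purely illustrative – the actual BC Commons generator output differs):

    <!-- Illustrative only; not the real BC Commons generator markup -->
    <div class="bc-commons-license">
      This resource is shared under a BC Commons license.
    </div>
    <!--insert tracking code here-->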

2. The content owner then uploads the finished product as they normally would.

3. Each night a script (that I am writing now) runs on the server. It goes through the filesystem, and every time it finds the tracking placeholder:

  • based on the file’s location in the filesystem, derives the UUID assigned to the resource in SOL*R
  • uses the UUID to get the resource name from SOL*R through the Equella web services
  • reconstructs the resource’s home URL from the UUID
  • sends both of these to the Piwik web service, which in return creates a new tracking site as well as the JavaScript to insert in the resource
  • finally, writes this JavaScript where the tracking placeholder was.
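For the technically curious, here is a minimal sketch of roughly what that nightly script might look like. The configuration values, filesystem layout, and the Equella lookup (stubbed out below) are all hypothetical placeholders; SitesManager.addSite and SitesManager.getJavascriptTag are real methods in Piwik’s HTTP API, though the exact response handling may vary by Piwik version.

    import json
    import os
    import re
    import urllib.parse
    import urllib.request

    # --- All of these values are hypothetical placeholders ---
    CONTENT_ROOT = "/var/equella/filestore"            # where uploaded resources land
    PIWIK_URL = "http://analytics.example.org/"        # our Piwik install
    PIWIK_TOKEN = "changeme"                           # Piwik API token_auth
    RESOURCE_BASE = "http://solr.bccampus.ca/items/"   # resource home pages
    PLACEHOLDER = "<!--insert tracking code here-->"

    def piwik(method, **params):
        """Call a method on the Piwik HTTP API and return the decoded result."""
        params.update(module="API", method=method, format="json",
                      token_auth=PIWIK_TOKEN)
        url = PIWIK_URL + "index.php?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        # Piwik wraps scalar results as {"value": ...}
        return data.get("value", data) if isinstance(data, dict) else data

    def uuid_from_path(path):
        """Derive the SOL*R UUID from the file's location (layout assumed)."""
        m = re.search(r"[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}", path)
        return m.group(0) if m else None

    def resource_name(uuid):
        """Stub for the Equella web-services call that returns the title."""
        return "SOL*R resource " + uuid

    tags = {}  # cache: one Piwik site per resource, even if many files match

    for dirpath, _, filenames in os.walk(CONTENT_ROOT):
        for filename in filenames:
            if not filename.endswith((".htm", ".html")):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, encoding="utf-8") as fh:
                html = fh.read()
            if PLACEHOLDER not in html:
                continue
            uuid = uuid_from_path(path)
            if not uuid:
                continue
            if uuid not in tags:
                # Register a tracking site keyed to the resource's home URL,
                # then ask Piwik for the JavaScript tag it generates for it.
                site_id = piwik("SitesManager.addSite",
                                siteName=resource_name(uuid),
                                urls=RESOURCE_BASE + uuid)
                tags[uuid] = piwik("SitesManager.getJavascriptTag",
                                   idSite=site_id, piwikUrl=PIWIK_URL)
            with open(path, "w", encoding="utf-8") as fh:
                fh.write(html.replace(PLACEHOLDER, tags[uuid]))

Because the placeholder is consumed when the JavaScript is written in, files processed on a previous night are simply skipped, so the script should be safe to re-run nightly.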

4a. Finally, in modifying the SOL*R records, we also include a link to the new tracking results for each record that has it enabled.

4b. For tracking data, the main things we will get are:

  • which new servers this content lives on
  • how many times each page of content in the resource has been viewed (depending on how extensively they have pasted the tracking code), both total and unique views
  • other details about the end users of the content, for instance their location and client details

I ran a test last week. This resource has a tracking code in it. The “stock” reports for this resource are at http://u.nu/3q66d. It should be noted that we are fully able to customize a dashboard that shows only *useful* reports (without all the cruft), as well as potentially incorporate the data from inside Equella on resource views / license acceptances. One of the HUGE benefits of using the SOL*R UUID in the tracking is that it is consistent both inside and outside of SOL*R.
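As a rough illustration of what such a customized dashboard would sit on top of: the same Piwik HTTP API can pull the report data back out for each tracking site. A minimal sketch (the URL and token are again placeholders; VisitsSummary.get and UserCountry.getCountry are standard Piwik report methods):

    import json
    import urllib.parse
    import urllib.request

    PIWIK_URL = "http://analytics.example.org/"   # hypothetical Piwik install
    PIWIK_TOKEN = "changeme"                      # hypothetical API token

    def report(method, idsite, period="month", date="today"):
        """Fetch one Piwik report for the tracking site of a given resource."""
        query = urllib.parse.urlencode({
            "module": "API", "method": method, "idSite": idsite,
            "period": period, "date": date, "format": "json",
            "token_auth": PIWIK_TOKEN,
        })
        with urllib.request.urlopen(PIWIK_URL + "index.php?" + query) as resp:
            return json.load(resp)

    visits = report("VisitsSummary.get", idsite=1)           # total & unique visits
    countries = report("UserCountry.getCountry", idsite=1)   # where end users are

A dashboard would simply shape a handful of these calls into the few reports that actually matter, keyed by each resource’s SOL*R UUID.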

I am pretty happy with how this is working so far. While I have expressed numerous times that I think the repository model is flawed for a host of reasons, to the extent it can be improved, this starts to provide content owners (and funders) details on how often resources are being used after they are downloaded and, much like links and trackbacks in blogs, offers them a way to follow up with re-users, starting conversations that are currently absent.

But… I can hear the objections already. Some are easy to deal with: we plan to implement this in such a way that it will not be totally dependent on JavaScript. Others are much stickier – does this infringe on the idea of “openness”? What level of disclosure is required? (This last especially given that 2nd and 3rd generation re-users will potentially be sending data back to the original server if the license code remains intact.)

I do want to respect these concerns, but at the same time, I wonder how valid they are. You are reading this content right now, and it has a number of “web bugs” inserted in it to track usage, yet it is shared under a license that permits reuse. Even if it is seen as a “cost,” it seems like a small one to pay, with a large potential benefit in terms of reinforcing the motivations of people who have shared. But what do you think – setting aside for a second arguments about “what is OER?” and “the content’s not important,” does this seem like a problem to you? Would you be less likely to use content like this if you knew it sent usage data back? Would anonymizing the data (something Piwik can easily do) ease your mind about this?

OLNet Fellowship – Week 1 Highlights

At the rate it seems to be going, my month here in Milton Keynes will be over in the blink of an eye, but my first week is coming to a close and I wanted to reflect on some of the things I’ve learned and experienced so far.

Community and Open Education

Two examples I came across on my second day here really spoke to me about new ways of thinking about OER/Open Education in relationship to people and communities. The first is the iSpot project managed by Doug Clow, one of my colleagues here in the Institute of Educational Technology, where the OLNet team from the OU is housed.

As Doug explained, the site allows people to post photos they’ve taken of local species and crowdsources their identification. The site has a sophisticated reputation system that rewards participants and also identifies those with formal expertise in different fields, weighing their input accordingly. The OU have partnered with a number of BBC Television nature shows and radio programmes to popularize the site, so they are attracting an audience who then participate out of an existing passion and interest. The genius is to *then* weave OU courses into/around this community site and content, using it both as potential course content and as a conduit for interested informal learners to find formal learning opportunities if they choose, and to interact with and be supported by discipline experts in their informal learning community.

When Doug described this to me my jaw dropped; it is so obvious yet really a brilliant turn. Too often in formal higher ed we have had the “build it and they will come” belief about our OER efforts, and when that hasn’t happened we’ve shifted our focus to “building communities” around our content. But that is so wrongheaded. Communities exist already, and where they don’t, it’s not simply a matter of them forming around content, per se. By leading with a site that helped users scratch an itch they already had, however small (“I keep spotting this bird in my back yard but I don’t know what it is”), and then building tools to support peer engagement and discussion, as well as personal identity and reputation, they’ve set the stage for community to form and share knowledge, and only THEN woven formal offerings in and around this. It’s probably not perfect, but I think it offers strong suggestions as to how institutions can engage civil society in a way that leads to a permeable boundary between existing informal learning communities and formal learning institutions/scholars.

The second example was a bit different yet still inspiring. Another researcher on the OLNet project, Andreia Santos, gave a short talk on an initiative at the Brazilian university Unisul to experiment with ways to attract new learners through a mixture of Open Education, peer support and social networking. If I understood correctly (and I’m not sure I completely did, so I hope Andreia will see this and chime in with a correction or a pointer to a longer write-up), the university has begun offering access to a block of 10 courses, a mixture of open resources from the OU and themselves, within their own learning environment (so not just ‘content’ but a full VLE experience…). The part that tickled my fancy was that they do so during one of their “breaks” (in their case the Winter break that happens in June/July) and in part market it to friends and families of existing students. This seems like a smart idea: not only does the message come from people with stronger ties, making it much more convincing, but the existing students end up taking some of these courses too and, because of their familiarity with the environment, become a form of peer support. I understand that this year they have introduced a nominal fee, but students can take as many of the courses as they want and get a form of certificate at the end. Like I said, different from iSpot, but still I think a strong example of interacting with community and existing ‘social networks.’

Repositories – some mothers do ‘ave ’em

Another part of my experience so far has been listening to talks on a few different repository projects that shall remain nameless. The learning here wasn’t particularly new for me, but it did continue to confirm beliefs I’ve long held about the weak points of this approach: that repositories typically do not tap into or reinforce individual motivations for sharing; that their model of ripping content out of its original context for download goes against the grain of the web (more on this soon, as part of my Fellowship work on “OER Tracking”); and that they are a solution begged by the problems of VLE/LMS silos and a “publishing” model of sharing that is only half-heartedly committed to it. But… the one good thing, I guess, is that it made me feel slightly better about my own work – I’m not the only one who’s hit these problems or had to learn the hard way that content doesn’t build networks that share, people do.

On being at the OU

If I haven’t already made it clear, it is a HUGE honour for me to be a visiting academic with the OU through the OLNet Fellowship program. This institution has been (and still is) a global leader in the field of distance learning and open education, and there is a tangible passion here for the belief that education can radically improve people’s lives. The opportunity to be physically here for a month is even more special to me because on a day-to-day basis I work from my home office, and while I am surrounded by a global network of peers whom I talk with daily, the chance to be surrounded by so many smart people passionate about open learning, and to have access to some fantastic services on this lovely campus, is one I will never forget. I’d be remiss if I did not extend a special thanks to Karen Cropper and Janet Dyson for helping me find my way in the first few days and making me feel really at home, and a special thanks to “Liam and the librarians” for broadening my social horizons.

There’s lots more to tell, especially around my specific project of tracking OERs outside the bounds of the repository (for which I think we’ve now got a plausible model), but I’ll leave that for another post. For now I’ll just say that it is good to be back in the land of great cheese and delicious warm beer, with so many rich opportunities to learn ahead of me.

Look out Milton Keynes, here I come! – My OLNet Fellowship on tracking OER Reuse

http://olnet.org/

I’m still not 100% clear on whether I can tell anybody about this, but… too late now. Earlier this year I took a bit of a flyer and submitted an application for an OLNet Fellowship, which offered the chance to work with the folks at the renowned Open University in the UK on issues around Open Education. I am not a full-time academic and don’t have an enormous publication record, but I’d like to think I’ve paid some dues in the trenches working on, and thinking and writing about, Open Education. Apparently they thought so too, because much to my pleasant surprise I was awarded an “Expert Fellowship,” a category seemingly designed to suit odd-balls like myself who work in the lofty heights of Academia but ain’t got no papers 😉

But there’s a point to this post apart from saying “woohoo Scott” (woohoo!). Actually, 2 points. The first is a shout out to colleagues in the UK: I will be in Milton Keynes from June 23rd until July 24th. I am not yet clear what the extent of my mobility will be, but I’m certainly hoping the month offers some opportunities to visit and learn with colleagues in the UK. If you are interested, please do let me know and we’ll try to make it happen.

The second point of the post is to share a bit of what I am going to be working on. As many of you know, I run an “open educational resource” repository (cue loud groan.) In our model, and it seems far from unique, teaching resources aimed primarily at instructors are typically downloaded and reused in some other context. While it is possible to ‘point’ to content hosted in our system, in most cases this is not how it is used.

One of the problems with this model (and sheesh, don’t I wish there were only one) is that the content owners don’t get a good sense of the popularity of their resources and where else they are being used. As a blogger and long time creator of web content that has been reused, I know that getting feedback on how often your stuff is viewed and from where, whether it be in the form of Trackbacks, or services like Google Analytics, can be a big shot in the arm. Sure, it is hopefully not the only thing that motivates you, but it doesn’t hurt.

So my proposal is to research the myriad ways this kind of usage tracking can be implemented specifically in the context of OER (with a high sensitivity to finding approaches conducive to freedom and not any sense of ‘restriction’), select one, and implement it in my real-world repository. It is a big fish to fry, and I do not think the problem is exclusive to OERs – it applies to digital media in general. While I do hope to report on general approaches, I also know that having a specific context to work in will be helpful. So expect to hear more (and get more pleas of “help!”) in the coming months.

Anyways, hope I do end up getting to meet some of you conspirators who ’til now have been just URLs or avatars. And I hear the English countryside is lovely that time of year… – SWL