Finding Learning Objects – Walking the Talk

Can our intrepid search find a learning object in time to figure out how to calculate ‘Z-scores’? Tune in and find out!

Today (like many days) I was faced with a task I was not 100% sure how to do. I had a set of ratings for different evaluators, and had been told by someone who knew better than I that I should be trying to calculate their ‘z-scores’ in order to standardize the numbers.

As I was about to enter a handy-dandy Google search, I thought – “no wait! Why don’t you see if there are any ‘learning objects’ out there that could teach you what a Z-score is, and how to calculate it.” So I set out in search of my closest learning object repository to see what I could find.

Off I went, partly with a real goal of learning about Z-scores, and partly as a bit of an experiment to see what existing repositories had in store for me.

My first stop was the granddaddy (grandma?) of them all – MERLOT. Maybe not a ‘repository’ per se, but it’s been around for a while and would certainly have some good pointers. And sure enough, a search for the term ‘z-score’ brought back one result which seemed promising. The resource it pointed to covered much more than simple Z-scores, but it did have a page dedicated to that topic. So while there was a bit of deft navigation involved after the fact to locate the actual material concerning Z-scores, at least something relevant came back, which was encouraging. Still, it’s a good thing I didn’t search MERLOT for ‘z-scores’ (zero results) or ‘z score’ (without the dash; 26 results, only one of which appears relevant). And better yet that I didn’t try the federated search – which wouldn’t have found the one relevant MERLOT entry, but would have returned 10 entries from SMETE seemingly all related to astrophysics!

But in the spirit of furthering this line of enquiry, I thought I’d try out a few other repositories. Next was one closer to home – CAREO. CAREO’s also been around a while, and has learnt a lot about how repositories should work.

Sure enough, a search on ‘z-score’ does get a hit, in fact 5 hits. Problem being that they are all the same resource!

So that it’s clear, I do not mean to pick on these particular repositories. Direct searches on almost all of the repositories listed here fared about as well (one exception was this example, which provided 2 Flash applets with far too much info on the topic).

As a final thought (I still needed to actually calculate some z-scores) I did the Google search I had originally meant to do. There in the first result was the specific page in the online course that MERLOT had pointed to, along with 10 other pertinent resources (some relating to Z-scores, some to ‘Z score’ and others to ‘Z scores’). My favourite turned out to be the last one on the first page of Google results – all I really wanted to know was that I needed to subtract the mean of the values from my value and divide it by the standard deviation, and that z-scores “measure the distance from the mean of a distribution normalized by the standard deviation.”
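For anyone who lands here with the same question, that one-sentence answer boils down to a few lines of Python (a minimal sketch of my own, not taken from any of the resources above; the example ratings are invented):

```python
# z-score: distance of a value from the mean, normalized by the
# standard deviation, i.e. z = (x - mean) / standard deviation.
from statistics import mean, pstdev

def z_scores(values):
    """Standardize values so each becomes a distance from the mean in SD units."""
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation
    return [(x - mu) / sigma for x in values]

ratings = [3, 4, 4, 5, 2, 4]  # made-up evaluator ratings
print(z_scores(ratings))
```

By construction the standardized values have a mean of 0 and a standard deviation of 1, which is exactly what makes ratings from different evaluators comparable.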

What’s to be learnt from all of this (besides the fact that I’m a lousy statistician and verbose to boot)? I’ll let you draw your own conclusions. Was my ‘study’ a fair and accurate one? Not at all. Did it mimic how real people might set about finding a learning resource? It doesn’t seem like an unreasonable scenario. Should I have expected more resources, or an easier time locating what I actually wanted? Well, if your standard is Google, which may not be entirely fair, I’d say that we have a fair ways to go. – SWL

12 thoughts on “Finding Learning Objects – Walking the Talk”

  1. And, yeah… Your verbose description does point out the need for more flexible searching 😉

    Stemming (looking for “score”, “scores”, “scored”, “scoring”, … simultaneously) would be a Good Thing. As would real substring searches (search for “stat” and get all items on statistics…)
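    A toy sketch of what stemming buys a search (crude suffix-stripping, purely illustrative – a real repository would use a proper stemming algorithm):

    ```python
    # Crude illustration of stemming: strip common suffixes so that
    # "score", "scores", "scored", and "scoring" all match one stem.
    def crude_stem(word):
        for suffix in ("ing", "ed", "s"):
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                word = word[: -len(suffix)]
                break
        return word[:-1] if word.endswith("e") else word

    terms = ["score", "scores", "scored", "scoring"]
    print({crude_stem(t) for t in terms})  # all four reduce to a single stem
    ```

    A search index built on stems would then treat all four query variants as hits for the same documents.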

  2. Hey D’Arcy, thanks for providing the correct link. I’ve seen how Apollo handles duplicate records and think it looks great. Hope it didn’t seem like I was picking on CAREO, I really wasn’t meaning to, it was just the next one that came to mind as I went through my mental list of places to hunt for learning objects. Cheers, Scott.

  3. Hey, did you notice where the last resource on the first page of Google hits really came from? If this was your last entry http://www.sysurvey.com/tips/statistics/zscore.htm then it looks very much like the first one pointing to MERLOT, as both are from the same source, Dr Howard Hoffman of the Animated Software Company.

    Ah, Learning objects and Walking the talk in circles…

    😉

  4. Yah, I just tried some similar z-score related searches using the Edusource federated search page and came up with a significant list of items, most of which are off topic. Simple searches, which most people tend to use, don’t often return useful results. I have taken the time to learn how to use the advanced features of Google to refine my search queries. My understanding is that the EduSource federated search, as is, uses the search functionality of each of the linked repositories and is not a direct pipe to information stored within the metadata records. For example, a federated search for Law will not show any results from CAREO since CAREO restricts search terms to four characters or more. I believe that the ECL will overcome this shortcoming.

    There are some demonstrable examples of the power of repositories at this point. These are related to building dynamic resource lists based on RSS feeds or predefined search queries a la the SciQ Zones and some CLN theme pages. However, to make these effective across repositories, we will have to start building communities of practice using common taxonomies and vocabularies.

  5. I think Gerry makes a very good point. I am still working at operationalizing the ADLIB repository (http://adlib.athabascau.ca/) but it is becoming very obvious as we test it that our metadata still isn’t precise enough to satisfy most users or perhaps the search techniques aren’t fuzzy enough to find the object based on the very imprecise language used by searchers. A defined, controlled vocabulary might be an answer. It is very telling when I can’t find objects I put in myself without a good deal of manual searching through a number of records.

  6. I would nearly always make the big G my first reach (it is now my primary quick spelling checker), but loved reading of your efforts. These are the sorts of exercises that the LOR builders ought to be doing.

    Searching = just as much art as science. Maybe more. I would suggest it is a myth that we can create and utilize these precise search tools that people can rely on without human (self) filtering or tinkering. Searching is as much knowing *where* to search as well as *how*, and I would never be surprised if a well-honed Google search will always out “find” others.

    Repositories ought to be about more, much more, than seek and find (and tag and tag and tag). No matter how refined or controlled the search logic is, we need to promote and build in the support for critical thinking and analysis, so that our future users do not think the search results are the final answer.

    So what is a z-score? 😉

  7. I wonder if a learning object repository is meant to function in the way you attempted – that is, as a reference tool. It seems to me you were looking for some specific information about a topic you already knew enough about. Is learning about reminding, or performance support? You were not looking to “learn” from the results of your search.

    I like your observations as I think they speak to the need to communicate what a LOR might be good for. And that we need to do some more work to figure out if they actually can support learning vs. lets say reference tools or performance support.

  8. I beg to differ: the author wanted to learn how to calculate a Z-score (or “about Z-scoring”), which seems like a perfectly reasonable learning exercise to me. There was a clear objective, context, and desired performance outcome.

    It seems that many (perhaps most) searches are fruitless, in my experience, as the LO hype has far surpassed the available content, and out of every 10 items how many are actually useful? Hopefully things will start getting better soon!

  9. My point about learning is subtle. I will contend that if somebody would like to know how to define a z-score, or to get a formula for calculating one, they are looking for reference information – performance support. One might argue that this is a form of learning, but I will argue that it is not what most intend when they create a learning object.

    Scott states all he wanted to know was “that I needed to subtract the mean of the values from my value and divide it by the standard deviation, and that z-scores measure the distance from the mean of a distribution normalized by the standard deviation.”

    He did not seem to need pre-requisite information, examples, non-examples, activities, simulations, or other supports such as expert advice, collaboration, practice or even explanations towards the context of use. These are the types of things I would expect most people are putting in learning object repositories – right or wrong. I do see performance support, reference, and other information sources as different and important. I will re-state that I don’t believe the author was looking to learn or even re-learn about z-scores based on his description.

    To the point, there may have been a task identified, and one consistent with Bloom’s taxonomy, but conditions and degree are absent, and I suggest that this is something to consider when using learning objects as reference tools. However, that does not mean that a learning object repository search shouldn’t help provide this type of “information” as well – or Google, whichever you prefer.

Comments are closed.