By rule, I usually avoid using the "R-word" (repository, too close to the "S-word"), but I wanted to launch, here just a few notches into a new calendar, my pessimism about the aspirations of those creating these magical collections of "learning objects." The folly is the assumption that educators will give up some time to share information about resources they have created or used. They pay lip service to the concept, but the action is not there. A bigger folly is that they would have the gumption to complete a "meta-data" form on top of that.
What I am more convinced of is that the loop is far from closed, as we lack anything that can easily build meaningful things from these R-places. We have piles of meta-data on top of objects... and that is about all.
But following the pessimism is maybe a small ray of sunshine (next post).
This is fueled largely by the lack of response (or the glacial speed thereof) in contributions to our Maricopa Learning eXchange (MLX). Back in October I outlined a rather long list of the various efforts and strategies we have put in place to convince our folks to help build the MLX.
- Making the system friendly and easy to use. I have been able to convince some audiences that creating an MLX entry takes no more time or technical skill than composing a 3-paragraph email and maybe pasting in a URL or attaching a document. In my demos, I ask someone who has never been in the MLX to create an account and then walk them through creating their first item.
- Remind and remind often, and take it to the people. Just in the last 5 months, our MLX "PR" department has been active, with at least 4 conference presentations/posters, 4 college-based workshops, and 3 group meetings; we have respectfully pummeled the system with light-hearted system-wide email messages, and we mention the MLX in just about every meeting. On more than one occasion I have been in meetings where one party talks about a desire for a better forum to share resources. At least a small improvement is that it is usually some of my colleagues who are quick to mention, "We already have that, it is the MLX."
- Bribery and Competition. We are in the middle of our third "Great MLX Package Race," where we track the contributions between October 1 and March 31, and will award to the colleges that contribute the most items to the MLX a choice of 25 copies of Adobe Acrobat or Premiere, or 5 copies of Macromedia Studio MX (we are able to purchase these at a discount through our New Media Consortium membership). Now, we had thought technology committees, deans, and centers for teaching and learning would jump at the possibility of getting software worth $3-10K. On top of that, with some begging and pleading to vendors, we have prizes for individual efforts: two copies of Macromedia Studio MX 2004 and 2 of AnyStream Apreso. All someone has to do to scoop these up is rummage around their computer, find those PowerPoints, the lesson activities, the URLs for their class web sites, etc., and post them to the MLX. They already have the items!
- Build Value for Groups into the System. We got into the realm of RSS and syndication first to use that technology to "franchise" out specific slices of MLX information, so that colleges, individuals, and departments could have a dynamic information feed into their web sites; we have a decent variety of different feeds currently available. We set up permanent URLs (and RSS) that display an individual's contributions, their own personal MLX collection, suitable for use as a link from a home page or in an email to their department chair. We set up special collections so projects could develop a specific tagged group of MLX items that would be accessible, and always up to date, from one URL (or RSS).
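The "dynamic information feed into their web sites" idea above can be sketched in a few lines: consume a per-contributor RSS 2.0 feed and render it as a link list suitable for dropping into a home page. This is a minimal illustration, not the actual MLX code; the feed contents and item fields here are invented for the example.

```python
# Sketch: turn a contributor's RSS 2.0 feed into an HTML link list.
# The sample feed below is hypothetical, not a real MLX feed.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>MLX: Example Contributor's Packages</title>
    <item>
      <title>Intro Biology Lab Activity</title>
      <link>http://example.edu/mlx/item?id=101</link>
    </item>
    <item>
      <title>Algebra Review Slides</title>
      <link>http://example.edu/mlx/item?id=102</link>
    </item>
  </channel>
</rss>"""

def feed_to_html(feed_xml):
    """Render each feed item as an <li> with a link, for embedding in a page."""
    channel = ET.fromstring(feed_xml).find("channel")
    links = [
        '<li><a href="%s">%s</a></li>'
        % (item.findtext("link"), item.findtext("title"))
        for item in channel.findall("item")
    ]
    return "<ul>\n%s\n</ul>" % "\n".join(links)

print(feed_to_html(SAMPLE_FEED))
```

In practice a site would fetch the permanent feed URL on each page load (or on a cache schedule) rather than embed a static string, which is what keeps the displayed collection "always up to date."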
So we are still far from that tipping point for the MLX, and the huge pile of resources at MERLOT is looking like that distant crater rim as viewed by a piece of dust on Mars.
I know that it just takes more time, but in that waiting, I continue to see prolific wheel re-invention and squandering of our collective intellectual capital. So you can take that repository and.... well I will stop.
blogged January 8, 2004 02:04 PM
I think there are two main problems here. The first, as D'Arcy Norman points out, is the need for quality content: why does a site like http://www.codeproject.com have hundreds of submissions and users? Is it because it is seen as a status symbol to have something published on CodeProject? Or is it because many people use it, as the code snippets in it are invaluable on a daily basis? Content, as always, is the main reason why I would use something, and the more people use a resource, the more likely I am to contribute to it. Obvious? Probably.
My second point is that typing in metadata has to be the most tedious thing in the world. What is needed is software adapted to the process of uploading chunks of content as metadata. E.g., if CodeProject or other sites which publish either tutorials or discussions had a button whereby I could upload the page as a LO and it would extract the metadata out of the actual page, with perhaps only one or two fields that had to be filled in by hand, I'd be more inclined to add data to the project. Maybe something working along the lines of http://www.educate.za.net/archives/00000025.htm or maybe even a toolbar that would allow me to do that.
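The "extract the metadata out of the actual page" idea above could work roughly like this: scrape the obvious fields (title, description, keywords) from a page's HTML and leave the rest for a human. A minimal sketch, assuming an HTML page as input; the field names are illustrative and do not follow any particular LO metadata standard, and the sample page is invented.

```python
# Sketch: harvest candidate metadata fields from an HTML page,
# pre-filling a form so a human only completes what is missing.
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collects <title> text and description/keywords <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") in ("description", "keywords"):
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] = data.strip()

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

# Hypothetical page being submitted as a learning object.
page = """<html><head>
<title>Turbidity Currents Tutorial</title>
<meta name="description" content="Video demos of turbidity currents">
<meta name="keywords" content="geology, sedimentology">
</head><body>...</body></html>"""

scraper = MetaScraper()
scraper.feed(page)
print(scraper.meta)
```

This only gets you the fields an author already embedded in the page, which is exactly why the follow-up comment below the original post is skeptical: non-page content (video, Flash, applets) offers nothing to scrape.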
I would not argue against the need for a collection to have quality content to make it more likely to have people contribute- it is the very crux behind the naming of ours as a "learning exchange" meaning you take things out, you put things in (a "repository" connotes a one way movement). But what is the point at which it gets there? It took MERLOT years to get to that point.
I would not argue that there is any lack of examples of thriving communities that contribute to and use stuff from "repository"-like places such as the CodeProject site you sent. However, there is a critical difference between collections of re-usable computer code and collections of re-usable learning content: the people who use and contribute computer code are a technically inclined audience, ones you would expect to deal with things not working exactly as advertised, ones inclined to find end-around solutions, ones rather self-reliant. Learning content is for a non-technical crowd, and the motivations and drives are vastly different.
Finally, as far as meta-data goes, I could not agree more that it is a futile and negative inducement to sharing. From my viewpoint, there is no payback for me to enter meta-data: exactly how does that information help me find AND use more content? If it does play a role, it is transparent, but there is a disturbing lack of tools for non-technical people to use to create content plucked from "repositories." Is there a push-and-click thing that makes interesting content from a pile of SCORM marked-up data? Can it do so for something that involves learning activities more complex than assembling airplane widgets?
I would be rather skeptical of something that supposedly determines meta-data by an automated examination of your content. Your mention of a "button whereby I could upload the page as a LO and it would extract the metadata out of the actual page" assumes that the content is a "page" (what about video, Flash, a Java applet?). What might help is an optional "wizard" (I shudder at the thought of a Learning Object Mr. Clippy) that could ask the meta-data questions in a more human format, e.g. rather than an empty field to type in Discipline-Level, you would answer a question such as "For what subject areas might XXXXXX be used?" It still is tedious.
Until there is a good reason to add meta-data, I would guess very few would be inclined to do so.
Alan, You are probably right that in the end you have to ask 'why would anyone spend time sharing this information with someone else?' We are all too busy to take time out of our days to produce this sort of documentation, and as you point out, there aren't good reward structures in place, at least not for faculty. That said, we are, perhaps naively, building our own little Information Literacy Module repository (called LOLA, for learning objects, learning activities) at http://lola.wesleyan.edu . Our strategy, and I would be curious to hear what you think of such a strategy, is to have our librarians and academic technologists interview the faculty to create the documentation (aka metadata).
I very much appreciate the approach of LOLA (even if I cannot get the Kinks song out of my head), as it provides a rich amount of information and context around the "objects." It would seem to take a bit of effort and time to get all this built (and it looks like hand-spun web pages, though nicely done), so it is a balance of perhaps a smaller (in number) collection of very well-documented objects?
I would guess the question comes in finding out what the LOLA information provides to others that may want to re-use the objects. I see there are comments features, but wonder how else you get more sense of how the objects are deployed elsewhere.
I spent some time last fall doing some video interviews with faculty that had used MLX items contributed by other faculty and at least got some interesting anecdotal information (people finding resources first looking only in their discipline, but later finding useful ideas in other subject areas).
I enjoyed the collection, especially the turbidity current videos. I actually worked on some data collection for them back in the 80s as an undergrad and never had a concrete visual impression of what these phenomena were.