Blog Pile

RSS of Newest Stuff: Syndicating With Blinders

Robin Good’s recent “Why RSS Search Feeds Based On Web Searches Are Important” was an interesting read for what it says about syndicating the results of web searches, but I had thought it would go down a different track. The quote that leaped off the page, from Steven M. Cohen, was:

The point of RSS is to get updated with anything new to appear on a page (and more importantly, customized feeds created by the user).

Is the point of RSS solely “new stuff”? And once content is no longer new, does it roll away from ever being syndicated again? I think that is limited thinking. It certainly makes sense where it works well: tapping quickly into the latest writings from blogs and news sources worldwide. But there are places for other modes of syndication.

I went down this path when we first experimented with syndicating the content added to our Maricopa Learning eXchange (MLX), a warehouse of teaching ideas and materials from our college system. My original reason for syndicating was to provide a filtered service to our 10 colleges, so that each might display in its own web pages a dynamically updated window into the things contributed just from its site. Sure, the first feeds out of the gate were the newest items, MLX-wide and per college (for example, newest from Phoenix College). But if not much was being contributed over a few weeks/months/(years!), the “newest” feeds could go stale, or run only 3 items deep.

So it was a natural evolution to set up feeds that were randomized: instead of the 5 newest items from Phoenix College, 5 random ones from everything that had come from Phoenix College. See more on the MLX feeds and examples of where they are used (note: a future update will provide all feeds as newest or random).
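The MLX itself handles this server-side; as a rough illustration of the idea, here is a minimal Python sketch (not the MLX's actual code; the item titles and URLs are made up) that builds an RSS 2.0 feed from a random sample of a collection instead of the newest entries:

```python
import random
import xml.sax.saxutils as su

def random_feed(items, n=5, title="MLX: Random Picks", seed=None):
    """Build a minimal RSS 2.0 document from n randomly chosen items.

    items: list of (title, link) tuples. The seed parameter is only here
    to make the sample reproducible for testing.
    """
    rng = random.Random(seed)
    picks = rng.sample(items, min(n, len(items)))
    entries = "\n".join(
        "    <item><title>%s</title><link>%s</link></item>"
        % (su.escape(t), su.escape(l))
        for t, l in picks
    )
    return (
        '<?xml version="1.0"?>\n<rss version="2.0">\n  <channel>\n'
        "    <title>%s</title>\n%s\n  </channel>\n</rss>"
        % (su.escape(title), entries)
    )

# Hypothetical collection items standing in for MLX packages
items = [("Ice Breakers", "http://example.org/1"),
         ("Rubrics 101", "http://example.org/2"),
         ("Peer Review Tips", "http://example.org/3")]
print(random_feed(items, n=2, seed=42))
```

The only difference from a "newest" feed is the selection step; everything else about the RSS output is identical, which is the point: nothing in the format cares how the items were chosen.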

But (and this is where I thought Robin Good was going), all web searches can add value by providing re-usable links and RSS feeds of search results. Google provides this, as any query yields a URL that can reproduce the search at any time. It was trivial to add this as a search feature to the MLX, so that any search result, say all packages from Chandler Gilbert related to “ice breakers”, can be re-run at any time via a link provided on the search results page, and also via an RSS feed generated dynamically.

It is trivial to add this functionality; it took me but a few hours to add it to the MLX, and I am a sloppy programmer. Ask for it from other search sites: syndicate, and provide URLs that reproduce the results. The links alone can be added as hypertext links to any web page, so you can guide visitors to targeted results rather than telling them to go to site X and search on Y. Google goes farther than the MLX with http://www.google.com/search?q=ice+breaker+activity, but maybe in addition to “I’m Feeling Lucky” how about “I’m Feeling Random”?
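A re-runnable search link is nothing more than the query parameters encoded into the URL. A small Python sketch of building such links (the MLX-style URL and its parameter names below are invented for illustration; only the Google one matches a real pattern):

```python
from urllib.parse import urlencode

def search_url(base, **params):
    """Build a bookmarkable URL that re-runs a search at any time."""
    return base + "?" + urlencode(params)

# The Google query URL mentioned above:
print(search_url("http://www.google.com/search", q="ice breaker activity"))

# A hypothetical MLX-style filtered search (parameter names invented):
print(search_url("http://example.edu/mlx/search",
                 college="Chandler Gilbert", keywords="ice breakers"))
```

Because the URL fully describes the query, the same address can serve an HTML results page or, with a format switch on the server side, a dynamically generated RSS feed of those results.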

Yes, the MLX is a small collection on the scale of Google and MSN, and our results do not have the same dynamics of stability versus new content. But there are needs for syndicated content, fed to other sites, that is not always “new”.

For example, I have been using Flickr to drive all the images used on a web site I am doing for a non-profit organization (TBBL, to be blogged later). Flickr provides a JavaScript feed we can use to syndicate and display the newest or random images from all of our photos. However, I set up a series of tags for categories that could feed other pages, and the syndication provided for tags covers only the newest photos. For the kinds of things we want to highlight, it would be better to have an RSS feed that pulls a specified number of photos with a given tag, selected at random.
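The gap can also be bridged on the consuming side: fetch the tag feed yourself and sample from it. A Python sketch of that approach, using a hard-coded stand-in for a fetched feed (real feeds would come over HTTP, and the photo titles and URLs here are made up):

```python
import random
import xml.etree.ElementTree as ET

# Stand-in for a fetched per-tag photo feed (normally retrieved over HTTP)
FEED = """<rss version="2.0"><channel><title>photos tagged: events</title>
<item><title>Gala Night</title><link>http://example.org/p/1</link></item>
<item><title>Book Sale</title><link>http://example.org/p/2</link></item>
<item><title>Volunteers</title><link>http://example.org/p/3</link></item>
<item><title>Fundraiser</title><link>http://example.org/p/4</link></item>
</channel></rss>"""

def random_items(feed_xml, n, seed=None):
    """Return (title, link) pairs for n items chosen at random from an RSS feed."""
    root = ET.fromstring(feed_xml)
    items = [(i.findtext("title"), i.findtext("link"))
             for i in root.iter("item")]
    return random.Random(seed).sample(items, min(n, len(items)))

for title, link in random_items(FEED, 2, seed=1):
    print(title, link)
```

Of course, having the feed provider offer a random mode directly would be simpler still, which is the feature being wished for above.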

Might the same also be said of the new Technorati tags? Is it only the newest that matters, or might there be other modalities for organizing results?

I submit that there is absolutely nothing in the technology or standard of RSS that requires it to provide only feeds of information ranked by newness. It is only the limits of our thinking. RSS > just what's new

An early-90s builder of the web and blogging, Alan Levine barks at CogDogBlog.com on web storytelling (#ds106 #4life), photography, bending WordPress, and serendipity in the infinite internet river. He thinks it's weird to write about himself in the third person.

Comments

  1. Awesome, especially the final sentence. It’s about content syndication from one system to another.

    Yes, you could create web services and interfaces to retrieve content the way you want from a system, but the good thing about RSS is that it’s not far away from a typical user’s desktop. It’s easy to learn and use.

    I’m planning to implement random, newest, and more specific rules for content retrieval through RSS feeds in my software programs. You never know how the users want to use the feeds.

    I’m personally planning to use feeds along with wikis in a way that I could ask for exactly a certain wiki page and have it delivered to another information system (imagine using your wiki software of choice as a collaborative authoring tool for websites, and a bare-bones PHP script to draw the different pages together: content management in a very different realm compared to traditional CMS systems). E.g.:

    http://yoursite.com/wiki/feed/page+About_us.xml

  2. Pingback: XplanaZine
