cc licensed ( BY NC ND ) flickr photo shared by Andrew Barclay

Maybe y’all have shrugged and dropped back into Facebook timelines, Google Plus Circles, or Twitter chats. I am not letting go of being Pissed Off at Google for dropping the depth charges on Google Reader. I am so mad I cannot even spell annihilation.

Google’s claim is they “Do No Evil” – but what constitutes evil may just be in the eye of the beholder. They could have frozen Reader, or just made it impossible to add to it – but instead Google completely deleted it from the fabric of the web, leaving a gaping hole. To someone who loves data, destroying it might just be… Evil-ish.

I was fortunate to have taken the steps to do a complete download of my Google Reader archive, before Google dropped their Non Evil Bomb of Data Destruction, using the Reader is Dead toolset.

Just for comparison, the Takeout archive I ran from Google itself was about 180 MB of data. The one from Reader is Dead was 10 GB.
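If you want to check what your own grab actually weighs, a few lines of Python will total it up – a minimal sketch, assuming your archive is just sitting in a local directory (the path here is hypothetical; point it at wherever your archive run put its files):

```python
# Rough sketch: total up what a downloaded Reader archive weighs.
# The directory path is an assumption -- adjust for your own pile.
import os

ARCHIVE_DIR = os.path.expanduser("~/reader_archive")  # hypothetical path

total_bytes = 0
file_count = 0
for root, _dirs, files in os.walk(ARCHIVE_DIR):
    for name in files:
        total_bytes += os.path.getsize(os.path.join(root, name))
        file_count += 1

print("%d files, %.1f GB" % (file_count, total_bytes / 1e9))
```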

And you thought Nixon had a big gap with 18 minutes of missing tape.

What I have, essentially, is a copy of the data of every site I ever read in Google Reader since December 1, 2006 – in fact, I found my blog post from that first date, Google Reader I am in Love.

Google had tracked everything I had Read in Reader, Shared in Reader, and Favorited in Reader – and they just tossed that in the trash.

Dead Data.

If that is not Evil, then it is Mean, Childish, and at least Fucking Rude for Google to just delete it.

But now I can at least access it via that archive and the newer Zombie Reader tool – this is sheer brilliance, and gives us a bit of a way to say FUCK YOU GOOGLE.

Mihai Parparita, who built the tools (and who worked on Reader itself), explains the approach:

Having gotten all my data out of Google Reader, the next step was to do something with it. I wrote a simple tool to dump data given an item ID, which let me do spot checks that the archived data was complete. A more complete browsing UI was needed, but this proved to be slow going. It’s not a hard task per se, but the idea of re-implementing something that I worked on for 5 years didn’t seem that appealing.

It then occurred to me that Reader is a canonical single page application: once the initial HTML, JavaScript, CSS, etc. payload is delivered, all other data is loaded via relatively straightforward HTTP calls that return JSON (this made adding basic offline support relatively easy back in 2007). Therefore if I served the archived data in the same JSON format, then I should be able to browse it using Reader’s own JavaScript and CSS. Thankfully this all occurred to me the day before the Reader shutdown, thus I had a chance to save a copy of Reader’s JavaScript, CSS, images, and basic HTML scaffolding.
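To make that idea concrete, here is a minimal sketch in Python – not the actual Zombie Reader code, and the directory layout and URL prefix are my assumptions for illustration: a tiny local server that answers the frontend’s API calls with saved JSON from disk, and serves everything else from the static copy of Reader’s HTML, JavaScript, and CSS.

```python
# Minimal sketch of the Zombie Reader idea: serve saved JSON from disk
# so Reader's own JavaScript can browse it. Paths and layout here are
# assumptions for illustration, not the real tool's code.
import http.server
import os
import socketserver

ARCHIVE_DIR = os.path.expanduser("~/reader_archive")  # hypothetical
STATIC_DIR = os.path.expanduser("~/reader_static")    # saved HTML/JS/CSS

class ArchiveHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/reader/api/"):
            # An API call the frontend would have sent to Google:
            # answer it with a pre-saved JSON file instead.
            name = self.path[len("/reader/api/"):].split("?")[0]
            json_path = os.path.join(
                ARCHIVE_DIR, name.replace("/", "_") + ".json")
            if os.path.exists(json_path):
                with open(json_path, "rb") as f:
                    body = f.read()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)
            return
        # Everything else (HTML scaffolding, JS, CSS) comes from the
        # saved static copy of the Reader frontend.
        super().do_GET()

if __name__ == "__main__":
    os.chdir(STATIC_DIR)
    with socketserver.TCPServer(("", 8080), ArchiveHandler) as httpd:
        httpd.serve_forever()
```

In principle you would then point a browser at localhost:8080 and the saved Reader UI boots up against your own archived data instead of Google’s (now vaporized) servers.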

The Zombie Reader tool brings back my entire Google Reader history! Wow, Audrey Watters, I sure hope you grabbed your archive, because in much less time, you had read 4 times as many things as I had.

Here is my Zombie Reader that I was able to load into a web browser and use:

(click image for full size)

Google had axed the sharing features back in 2011, but this tool is still able to surface some 50,000 items people in my network had shared in their Reader use:

(click image for full size)

I can move through my feeds and folders, back in time, starting with the last bits of things I read in my edtech feeds at the end of June 2013:

(click image to see full size)

The tool lets me go to the beginning of time, the first feeds I read in November 2006, from none other than Gardner Campbell – the source is something from a regional NMC conference I had set up as an early feed while exploring Reader:

first read

or a classic Kottke post, in a dated way, doing something recursive with another old school tool:

delicious kottke

And an older Abject post from Brian Lamb; he was at the races…

reading abject

The Zombie Archive has the records of things I read from blogs that do not exist anymore.

Maybe one limit I can see now is that I cannot search the archive (that would take a lot of muscle to do in JavaScript), but I am reassured at least knowing that a giant store of data is there (it is all sitting there as .json data files).
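In the meantime, nothing stops a few lines of Python from grepping the pile directly – a crude sketch (the archive path is again an assumption), just a dumb substring scan across the .json files:

```python
# Crude stand-in for the missing search feature: a substring scan over
# every .json file in the archive. The path is an assumption.
import os

ARCHIVE_DIR = os.path.expanduser("~/reader_archive")  # hypothetical path
QUERY = "edtech"  # whatever you are hunting for

for root, _dirs, files in os.walk(ARCHIVE_DIR):
    for name in files:
        if not name.endswith(".json"):
            continue
        path = os.path.join(root, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            if QUERY.lower() in f.read().lower():
                print(path)
```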

If anything, it is sobering to realize how much data Google was storing just from Reader activity – 10 GB for one person’s habits over 7 years. All of this was possible to extract using public APIs and tools brilliantly written by Mihai Parparita – and keep in mind, this is much more data than Google itself was releasing in its own Takeout tool (of course holding back my data is not Evil) (nor is throwing it away).

An even larger question is: did Google really delete all this data, or did they just take it offline?


cc licensed ( BY NC ND ) flickr photo shared by Pete Morawski

Thanks again, Mihai! At least this Dog is a Walking Zombie, able to walk around knowing I have my pile of data.

