My iPhone excitement is nothing new here, given how long it took me to get one, but there is wave after wave of discovering new things; I keep forgetting I have only had it for two months.
But last night… I found an app that is, to me, explosive in terms of opening potential for what a portable, networked, web-connected media acquisition device can do. I am projecting that the children of so-called “digital natives”, the ones who will make those natives seem foreign, will look back at our use of keyboard-driven computers the same way I might look at a Victrola or a telegraph machine.
So, first a tip of the blog hat, a linktribution to David Warlick for sharing, in his post, an iPhone app called SnapTell. It is a visual parallel of one of the other most amazing iApps, Shazam, which lets you hold the phone to an audio source and uses audio recognition to sample and identify the song, returning information about it to your handheld.
SnapTell allows you to use the camera in the iPhone to take a picture of a book, CD, or DVD cover, and it uses image recognition to match it to a database and return all of the relevant information from Amazon.com. SnapTell Explorer for the iPhone is described on their blog.
This free download, powered by SnapTell’s Snap.Send.Get image recognition technology, gives camera phone users instant information on virtually any book, movie DVD, video game or music CD.
We all know you can’t judge a book by its cover, and the same goes for films, music, and games. But fret not, because SnapTell’s new Mobile Movie Explorer can help make sure that you never have buyer’s remorse again — just snap the picture, send it to SnapTell, and you’ll get comprehensive information and reviews for the product you’re interested in right on your iPhone.
I’ve been playing a little with QR code readers like NeoReader on the iPhone, especially in Japan, where there were QR codes in the newspaper, on stores, on public signs, but it’s a challenge because you have to get a squarely framed shot, and it’s hard to get a good photo of the small ones. It takes too much effort.
So I gave SnapTell a quick test last night, taking a picture of a paperback that has been traveling with me, so its front cover is curled and torn. I took this photo under room light, what I would call non-optimal photo conditions; the photo I took with the app looked like:
So I took the image in the SnapTell app, clicked “use photo” and within 5 seconds it correctly identified it:
100% correct as Anything for Billy by Larry McMurtry. More than linking to Amazon, it ferrets out a relevant Wikipedia link to the author, plus a link to a preset Google search (and Yahoo) for more information.
Now the SnapTell site reads with a lot of commercial-speak:
Founded in 2006, SnapTell is revolutionizing the way consumers and marketers connect. Using an everyday camera phone and SnapTell’s innovative image recognition technology, users can easily and instantly access requested information. Marketers can effortlessly create high-impact campaigns using existing collateral and can alter their messaging on the fly in response to SnapTell-provided actionable metrics.
SnapTell provides a highly customizable and integrated mobile marketing solution. With this Snap.Send.Get™ solution, marketers can deploy mobile marketing campaigns quickly and effectively. The SnapTell solution enables consumers to easily access marketing content and information on the go, driving brand awareness, conversion, loyalty and revenues. It is an end-to-end solution that gives marketers the ability to reach consumers and create a brand relationship with them, not just impressions.
blah blah. But under the Technology section it gets more interesting:
One of SnapTell’s patent pending proprietary innovations is a highly accurate and robust algorithm for image matching that we call ASG. Image matching is the problem of efficiently matching a query camera phone image against a database of images. Our technology offers unprecedented scale in detecting a matching image in a large database of images. Scaling of image matching is achieved using patent pending indexing techniques to organize all the features in any of a database of images for the purpose of efficient lookup. Our system also makes innovative use of distributed computing to achieve enormous scale.
Our technology works effectively on photos taken with almost all camera phones in the world wide market, including phones on the lower end of the market that have VGA cameras or relatively low resolution (640×480) cameras. Also, our matching server can handle photos taken in real life conditions that have a lot of issues including lighting artifacts, focus blur, motion blur, perspective distortion and incomplete overlap with the database image. Our technology works in a wide variety of real life scenarios including those of consumers taking photos of magazine print ads, outdoor billboards, posters, product packaging, branded cans, bottles and logos.
Another novel aspect of our technology is a patent pending innovation to automatically extract text embedded in camera phone images with unprecedented accuracy and use the extracted text to drive search. Text extraction is useful in scenarios in which the target image is not already registered in the database.
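SnapTell’s ASG algorithm is patent-pending and proprietary, so the details above are all we get. But the general idea they describe, indexing local image features so a query photo can be matched against a large database efficiently, can be sketched in miniature. This toy version fakes feature extraction with hand-made 2-D descriptors (a real system would use keypoint descriptors computed from the image) and uses an inverted index of quantized “visual words” with simple vote counting:

```python
# Toy sketch of database-scale image matching via an inverted index,
# in the spirit of (but nothing like the scale of) SnapTell's system.
# Descriptors here are hand-made stand-ins for real local features.
from collections import defaultdict

def quantize(descriptor, bins=8):
    """Map a raw feature descriptor to a discrete 'visual word'."""
    return tuple(int(v * bins) for v in descriptor)

def build_index(database):
    """database: {image_id: [descriptor, ...]} -> inverted index."""
    index = defaultdict(set)
    for image_id, descriptors in database.items():
        for d in descriptors:
            index[quantize(d)].add(image_id)
    return index

def match(index, query_descriptors):
    """Vote for database images sharing visual words with the query."""
    votes = defaultdict(int)
    for d in query_descriptors:
        for image_id in index.get(quantize(d), ()):
            votes[image_id] += 1
    return max(votes, key=votes.get) if votes else None

# Two "cover images" with fake 2-D descriptors.
covers = {
    "anything_for_billy": [(0.1, 0.9), (0.4, 0.2), (0.7, 0.7)],
    "lonesome_dove":      [(0.9, 0.1), (0.3, 0.8), (0.5, 0.5)],
}
index = build_index(covers)
# A noisy query photo: two descriptors survive, one is junk.
query = [(0.1, 0.9), (0.4, 0.2), (0.9, 0.9)]
print(match(index, query))  # -> anything_for_billy
```

The inverted index is what makes this scale: lookup cost depends on the query’s features, not on the total number of database images, which is presumably why a curled, torn paperback cover can still come back identified in seconds.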
This is highly relevant for a new project we are now involved with at NMC, to develop the next iteration of the software already in place at the Steve Project, for allowing museum visitors to tag artwork. The way it is done now, tagging happens via images of art in a web browser. Part of the new project is developing mobile apps where people could tag art while at the museum. In fact, at a meeting last week, we sky-dreamed of an app where we could use the camera in a mobile device to snap a photo of the artwork, which could then be identified via image recognition in a database, so the web app could return to the mobile device a tagging interface for the now-identified artwork.
This is no longer a dream: SnapTell does exactly this.
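The snap-identify-tag loop we dreamed up is simple enough to sketch. Everything here is hypothetical, not the Steve Project’s actual software or any real recognition API: the recognition step is faked with a fingerprint lookup standing in for a service like SnapTell’s, and the tag store is an in-memory dict:

```python
# Hypothetical sketch of the museum tagging flow described above:
# snap a photo -> identify the artwork -> record the visitor's tag.
from collections import defaultdict

# Stand-in for an image-recognition service: photo fingerprint -> artwork id.
RECOGNITION_DB = {
    "fp-starry-night": "van_gogh_starry_night",
    "fp-water-lilies": "monet_water_lilies",
}
tags = defaultdict(set)  # artwork id -> set of visitor tags

def identify(photo_fingerprint):
    """Pretend recognition call; returns None if no match."""
    return RECOGNITION_DB.get(photo_fingerprint)

def tag_from_photo(photo_fingerprint, tag):
    """The whole imagined loop: identify the work, then attach the tag."""
    artwork = identify(photo_fingerprint)
    if artwork is None:
        return None  # unrecognized; fall back to browsing for the work
    tags[artwork].add(tag)
    return artwork

print(tag_from_photo("fp-starry-night", "swirly"))  # -> van_gogh_starry_night
print(tags["van_gogh_starry_night"])                # -> {'swirly'}
```

The point of the sketch is the handoff: once recognition resolves the photo to a known artwork, the tagging interface already built for the browser can be reused as-is, keyed by the identified work.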
I am sure there are more things one can do, but consider the power of a mobile device’s camera and network connection to identify or collect information from the field, be it museums, fossils, plants, exotic frogs, and identify it visually via a photo (like SnapTell) or even by audio (like Shazam), e.g. bird identification by call? And this is not even including the capabilities you get by connecting new media, information, and geolocation afforded by auto mapping via GPS.
This is finally getting close to the long promise of mobile learning technologies. I am again thinking of a conversation I had in Japan with a group of undergraduate students at Osaka Gakuin University, when I asked them about the kinds of technology they used most often for play, social use, or anything.
None of them mentioned a computer or a laptop. They don’t have/own/use one because of the capabilities of phones (and networks; 3G is old news) in Japan. The computer is a foreign device to this group of young people, leapfrogging us, the keyboard generation, as they are technologically connected people without a computer (this is ignoring what they will do when they get to a workplace that has those old machines).
I am excited about the magic of mobile technology because, as I peek through the foggy curtain of the future, portable networked devices make connectivity, information seeking, and gathering more transparent. Snap a photo of something and identify it: magic. Put a mobile device near an audio source and identify it: magic.
Magic, as if magic, as if magic. The tag line on my iPhone email is “Emailed as if Magic from my iPhone”.
The post "Mobile, Media Recognition, Magic" was originally rescued from the bottom of a stangant pond at CogDogBlog (http://cogdogblog.com/2008/10/mobile-magic/) on October 19, 2008.