Tracking.js – a library of different computer vision algorithms and techniques for the browser.
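To give a flavor of what such a library does under the hood, here is a minimal sketch (my own illustration, not Tracking.js code) of the idea behind a color tracker: scan a frame's RGBA pixel data for pixels close to a target color and report the bounding box of the matches. The function name and parameters are hypothetical.

```javascript
// Illustrative sketch of color tracking: find the bounding box of all
// pixels within `tolerance` (Euclidean RGB distance) of `target`.
function findColorRegion(pixels, width, target, tolerance) {
  // pixels: flat RGBA array (4 bytes per pixel), row-major.
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (let i = 0; i < pixels.length; i += 4) {
    const dr = pixels[i] - target[0];
    const dg = pixels[i + 1] - target[1];
    const db = pixels[i + 2] - target[2];
    if (dr * dr + dg * dg + db * db <= tolerance * tolerance) {
      const p = i / 4;
      const x = p % width, y = Math.floor(p / width);
      if (x < minX) minX = x;
      if (x > maxX) maxX = x;
      if (y < minY) minY = y;
      if (y > maxY) maxY = y;
    }
  }
  if (maxX < minX) return null; // no matching pixels found
  return { x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1 };
}

// 2x2 test image: one magenta pixel at (1, 0), the rest black.
const img = new Uint8ClampedArray([
  0, 0, 0, 255,   255, 0, 255, 255,
  0, 0, 0, 255,   0, 0, 0, 255,
]);
console.log(findColorRegion(img, 2, [255, 0, 255], 30));
// → { x: 1, y: 0, width: 1, height: 1 }
```

In a browser you would feed this the output of `CanvasRenderingContext2D.getImageData()` per video frame; the library versions add smoothing and multi-region grouping on top of the same basic scan.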
Google Online Security Blog shares the news on the innovation in image recognition technology used in Google Street View:
Translating a street address to an exact location on a map is harder than it seems. To take on this challenge and make Google Maps even more useful, we’ve been working on a new system to help locate addresses even more accurately, using some of the technology from the Street View and reCAPTCHA teams.
This technology finds and reads street numbers in Street View, and correlates those numbers with existing addresses to pinpoint their exact location on Google Maps. We’ve described these findings in a scientific paper at the International Conference on Learning Representations (ICLR). In this paper, we show that this system is able to accurately detect and read difficult numbers in Street View with 90% accuracy.
Here are some examples of correctly identified street numbers – quite impressive!
What’s even more interesting is that pushing this technology toward good uses also empowers the evil side of things:
Turns out that this new algorithm can also be used to read CAPTCHA puzzles—we found that it can decipher the hardest distorted text puzzles from reCAPTCHA with over 99% accuracy.
Thanks to modern technology you can connect with your loved ones by sending a quick note, a photo of your cat, even a smile :) around the world in seconds. But one of humanity’s most iconic forms of communication—the kiss—has been left out in the cold. Now, though, you can send a kiss to anyone, anywhere in the world, through Burberry Kisses, a new campaign from Burberry and Google. And not just any kiss, but your kiss.
On the one hand, this is sweet and romantic. Yet, on the other, Google is so well known for its crowd-sourcing experiments that it makes me wonder – what’s behind this one? After all, when Google wanted to fix all those bad scans in the Google Books project, they started the reCAPTCHA service that put everyone on the web to work. When Google wanted to teach its voice recognition all the accents (at least in the States), they opened up a directory service. And they’ve done more of the same for images, artificial intelligence, and even maps.
So, what is a possible usage for a huge collection of lip images?
The darkest version I have is somewhere around fingerprinting. Lip prints are probably as unique as fingerprints. And when you mix that up with, say, the face recognition they already have, who knows where that can lead. Oh, by the way, now that I think of face recognition, Android’s face recognition lock sounds suspicious as well. Oh, crap. I think I’m getting paranoid!
It looks like humans aren’t all that useless when it comes to technology. There are still a few areas where we do better than machines. Image recognition is one of them. TechCrunch runs a story about one company that seems to be using humans in the image recognition process. Comments on that story mention Google doing the same.
To me it feels like a problem of timing. There is a need to tag and search a whole lot of images, but there is no good automated solution available. So we are falling back on humans. It’s easy to come up with a few other areas where there is a need today for solutions that won’t be here until tomorrow. Technology needs help, I guess.