Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning

I know, I know, this blog is turning into an Amazon marketing bullhorn, but what can I do? The Amazon re:Invent 2016 conference turned into an exciting stream of news for the regular Joe, like yours truly.

This time, Amazon Rekognition was announced – an image detection and recognition service powered by deep learning.  This is yet another area that has traditionally been difficult for computers.

As with the other Amazon AWS services, I was eager to try it out.  So I grabbed a few images from my Instagram stream and uploaded them into the Rekognition Console.  I don’t think Rekognition actually uses Instagram to learn about tags and such (but it is possible).  Just to make it a bit more difficult, I used generic image names like q1.jpg, q2.jpg, etc.
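For those who would rather skip the console, here is a minimal sketch of the same experiment done through the API with boto3.  The region, file names, and the MaxLabels/MinConfidence values are my own assumptions for illustration, not anything Rekognition requires:

```python
import boto3

# Minimal sketch: send local images to Rekognition and print the labels.
# Region, file names, and thresholds are assumptions for illustration.
client = boto3.client("rekognition", region_name="us-east-1")

for name in ["q1.jpg", "q2.jpg", "q3.jpg", "q4.jpg"]:
    with open(name, "rb") as f:
        image_bytes = f.read()

    # DetectLabels returns labels with confidence scores, capped by MaxLabels
    # and filtered by MinConfidence.
    response = client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=10,
        MinConfidence=50,
    )

    print(name)
    for label in response["Labels"]:
        print("  {}: {:.1f}%".format(label["Name"], label["Confidence"]))
```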

Here are the results.  Firstly, the burger.

[Image: rekognition-burger]

This was spot on, with burger, food, and seasoning identified as labels.  The confidence for burger and food was almost 99%, which is correct.

Then, the beer can with a laptop in the background.

[Image: rekognition-beer]

Can and tin labels are at 98% confidence. Beverage, drink, computer and electronics are at 69%, which is not bad at all.

Then I decided to try something with people.  Here goes my son Maxim, in a very grainy, low-light picture.

[Image: rekognition-maxim]

People, person, human at 99%, which is correct.  Portrait and selfie at 58%, which is accurate enough.  And then female at 53%, which is not exactly the case.  But with him still being a kid, that’s not too terrible.

Let’s see what it thinks of me then.

[Image: rekognition-leonid]

Human, people, person at 99% – yup.  98% for beard and hair is not bad.  But it completely missed the duck! :)  I guess it returns a limited number of labels, and while the duck is pretty obvious, its size is insignificant compared to how much of the picture is occupied by my ugly mug.

Overall, these are quite good results.  This blog post covers a few other cases, like figuring out the breed of a dog and the emotional state of people in the picture, which is even cooler than my tests.
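I haven’t tried the emotion detection myself, but it goes through a separate DetectFaces call rather than DetectLabels.  A hedged sketch, with the file name and region being my own placeholders:

```python
import boto3

# Sketch of the emotion-detection case; parameters are assumptions.
# Attributes=["ALL"] asks Rekognition to include emotions, age range,
# and other face details in the response.
client = boto3.client("rekognition", region_name="us-east-1")

with open("q3.jpg", "rb") as f:  # hypothetical file name
    image_bytes = f.read()

response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    # Emotions come back as a list of {Type, Confidence} entries.
    for emotion in face["Emotions"]:
        print("{}: {:.1f}%".format(emotion["Type"], emotion["Confidence"]))
```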

Pricing-wise, I think the service is quite affordable as well:

[Image: rekognition-pricing]

$1 USD per 1,000 images is very reasonable.  The traditional Free Tier allows for 5,000 images per month.  And API calls that support more than one image per call are still counted as a single image.
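As a back-of-the-envelope check, assuming the flat $1 per 1,000 rate and the 5,000-image monthly Free Tier quoted above (the real price list has volume tiers, so this is only a sketch):

```python
def monthly_cost(images_processed, free_tier=5000, price_per_1000=1.00):
    """Rough estimate assuming a flat $1 per 1,000 images after the
    5,000-image Free Tier; the actual price list has volume tiers."""
    billable = max(0, images_processed - free_tier)
    return billable / 1000.0 * price_per_1000

# 100,000 images a month -> 95,000 billable -> $95
print(monthly_cost(100000))
```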

All I need now is a project where I can apply this awesomeness…

Tracking.js – a library of different computer vision algorithms and techniques for the browser

Tracking.js – a library of different computer vision algorithms and techniques for the browser.

Google Street View vs. captcha

The Google Online Security Blog shares the news of an innovation in the image recognition technology used in Google Street View:

Translating a street address to an exact location on a map is harder than it seems. To take on this challenge and make Google Maps even more useful, we’ve been working on a new system to help locate addresses even more accurately, using some of the technology from the Street View and reCAPTCHA teams.

This technology finds and reads street numbers in Street View, and correlates those numbers with existing addresses to pinpoint their exact location on Google Maps. We’ve described these findings in a scientific paper at the International Conference on Learning Representations (ICLR). In this paper, we show that this system is able to accurately detect and read difficult numbers in Street View with 90% accuracy.

Here are some examples of correctly identified street numbers – quite impressive!

[Image: examples of correctly identified street numbers]

What’s even more interesting is that pushing this technology for good uses also empowers the evil side of things:

Turns out that this new algorithm can also be used to read CAPTCHA puzzles—we found that it can decipher the hardest distorted text puzzles from reCAPTCHA with over 99% accuracy.

Oops!

Burberry Kisses, good or evil?

Here is something I have mixed feelings about:

Thanks to modern technology you can connect with your loved ones by sending a quick note, a photo of your cat, even a smile :) around the world in seconds. But one of humanity’s most iconic forms of communication—the kiss—has been left out in the cold. Now, though, you can send a kiss to anyone, anywhere in the world, through Burberry Kisses, a new campaign from Burberry and Google. And not just any kiss, but your kiss.

On one hand, this is sweet and romantic.  Yet, on the other, Google is so well known for its crowd-sourcing experiments that it makes me wonder – what’s behind this one?  After all, when Google wanted to fix all those bad scans in the Google Books project, they started the reCAPTCHA service that used everyone on the web.  When Google wanted to teach its voice recognition all the accents (at least in the States), they opened up a directory service (GOOG-411).  And they’ve done more of the same for images, artificial intelligence, and even maps.

So, what is a possible usage for a huge collection of lip images?

The darkest version I have is somewhere around fingerprinting.  Lip prints are probably as unique as fingerprints.  And when you mix that with, say, the face recognition they already have, who knows where it can lead.  Oh, by the way, now that I think of face recognition, Android’s face recognition lock sounds suspicious as well.  Oh, crap.  I think I’m getting paranoid!

Humans in image recognition

It looks like humans aren’t all that useless when it comes to technology.  There are still a few areas where we do better than machines.  Image recognition is one of them.  TechCrunch runs a story about one company that seems to be using humans in its image recognition process.  Comments on that story also mention Google doing the same.

To me it feels like a problem of timing.  There is a need to tag and search a whole lot of images, but there is no good automated solution available, so we are falling back on humans.  It’s easy to come up with a few other areas where there is a need today for solutions that won’t even be around tomorrow.  Technology needs help, I guess.