Amazon Rekognition enhanced capabilities

The Amazon Machine Learning Blog announces some very significant improvements to Amazon Rekognition, their service for image and video analysis.  These particular changes improve the face recognition technology.

“Face detection” tries to answer the question: Is there a face in this picture? In real-world images, various aspects can have an impact on a system’s ability to detect faces with high accuracy. These aspects might include pose variations caused by head movement and/or camera movements, occlusion due to foreground or background objects (such as faces covered by hats, hair, or hands of another person in the foreground), illumination variations (such as low contrast and shadows), bright lighting that leads to washed-out faces, low quality and resolution that lead to noisy and blurry faces, and distortion from cameras and lenses themselves. These issues manifest as missed detections (a face not detected) or false detections (an image region detected as a face even when there is no face). For example, on social media, different poses, camera filters, lighting, and occlusions (such as a photo bomb) are common. For financial services customers, verification of customer identity as a part of multi-factor authentication and fraud prevention workflows involves matching a high resolution selfie (a face image) with a lower resolution, small, and often blurry image of a face on a photo identity document (such as a passport or driving license). Also, many customers have to detect and recognize faces of low contrast from images where the camera is pointing at a bright light.

With the latest updates, Amazon Rekognition can now detect 40 percent more faces that would previously have been missed, in images with some of the most challenging conditions described earlier. At the same time, the rate of false detections is reduced by 50 percent. This means that customers such as social media apps can get consistent and reliable detections (fewer misses, fewer false detections) with higher confidence, allowing them to deliver better customer experiences in use cases like automated profile photo review. In addition, face recognition now returns 30 percent more correct ‘best’ matches (the most similar face) compared to the previous model when searching against a large collection of faces. This enables customers to obtain better search results in applications like fraud prevention. Face matches now also have more consistent similarity scores across varying lighting, pose, and appearance, allowing customers to use higher confidence thresholds, avoid false matches, and reduce human review in applications such as identity verification. As always, for use cases involving civil liberties or customer sentiments, where the veracity of the match is critical, Amazon recommends that customers use best practices, a higher confidence level (at least 99%), and always include human review.
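Out of curiosity, here is a minimal sketch of what applying that 99% threshold could look like with the AWS SDK for Python (boto3) and the SearchFacesByImage operation. The collection ID and file name are placeholders of mine, not anything from the announcement:

```python
import boto3

# A minimal sketch: search an existing face collection for the best match,
# keeping only matches at or above the recommended 99% similarity threshold.
# "my-collection" and "selfie.jpg" are hypothetical placeholders.
rekognition = boto3.client("rekognition")

with open("selfie.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="my-collection",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=99,  # the high threshold recommended above
    MaxFaces=1,             # we only care about the single best match
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```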

Have a look at their blog post for some examples of what the machine can recognize as a face.  Some of these would be difficult enough for many humans, I think.

Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning

I know, I know, this blog is turning into an Amazon marketing bullhorn, but what can I do? The Amazon re:Invent 2016 conference turned into an exciting stream of news for the regular Joe, like yours truly.

This time, Amazon announced Amazon Rekognition, an image detection and recognition service powered by deep learning.  This is yet another area that has traditionally been difficult for computers.

As with the other Amazon AWS services, I was eager to try it out.  So I grabbed a few images from my Instagram stream and uploaded them into the Rekognition Console.  I don’t think Rekognition actually uses Instagram to learn about the tags and such (but it is possible).  Just to make it a bit more difficult, I used generic image names like q1.jpg, q2.jpg, etc.
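By the way, the console is just a front-end; the same labeling can be done through the API. Here is a rough boto3 sketch of the equivalent call, assuming the images are local files (q1.jpg is one of my generic names from above):

```python
import boto3

# Roughly what the Rekognition Console does with an uploaded image:
# ask for up to 10 labels with at least 50% confidence.
rekognition = boto3.client("rekognition")

with open("q1.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=10,      # cap on how many labels come back
    MinConfidence=50,  # drop anything the model is less than 50% sure about
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```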

Here are the results.  Firstly, the burger.

[Image: rekognition-burger]

This was spot on, with burger, food, and seasoning identified as labels.  The confidence for burger and food was almost 99%, which is correct.

Then, the beer can with a laptop in the background.

[Image: rekognition-beer]

Can and tin labels are at 98% confidence. Beverage, drink, computer and electronics are at 69%, which is not bad at all.

Then I decided to try something with people.  Here goes my son Maxim, in a very grainy, low-light picture.

[Image: rekognition-maxim]

People, person, and human at 99%, which is correct.  Portrait and selfie at 58%, which is accurate enough.  And then female at 53%, which is not exactly the case.  But with him still being a kid, that’s not too terrible.

Let’s see what it thinks of me then.

[Image: rekognition-leonid]

Human, people, person at 99% – yup. 98% for beard and hair is not bad.  But it completely missed the duck! :)  I guess it returns a limited number of labels, and while the duck is pretty obvious, it is tiny compared to how much of the picture is occupied by my ugly mug.

Overall, these are quite good results.  The announcement blog post covers a few other cases, like figuring out the breed of a dog and the emotional state of people in a picture, which is even cooler than my tests.
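The emotional state bit, by the way, comes out of the face detection call when you ask for all the face attributes. A quick sketch with boto3 (the file name is, again, just a placeholder of mine):

```python
import boto3

# Sketch: detect faces and print the estimated emotions for each.
# Attributes=["ALL"] requests the full set of face attributes,
# which includes the Emotions list.
rekognition = boto3.client("rekognition")

with open("q5.jpg", "rb") as f:  # hypothetical file name
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```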

Pricing-wise, I think the service is quite affordable as well:

[Image: rekognition-pricing]

$1 USD per 1,000 images is very reasonable.  The traditional Free Tier allows for 5,000 images per month.  And API calls that support more than one image per call are still counted as a single image.
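For a back-of-the-envelope feel for the cost, using only the numbers above (and ignoring any volume tiers the pricing page might have):

```python
# Rough monthly cost estimate, based only on the numbers above:
# first 5,000 images per month are free, then $1 USD per 1,000 images.
FREE_IMAGES_PER_MONTH = 5_000
USD_PER_1000_IMAGES = 1.00

def monthly_cost(images: int) -> float:
    billable = max(0, images - FREE_IMAGES_PER_MONTH)
    return billable / 1000 * USD_PER_1000_IMAGES

# Example: 100,000 images a month comes out to $95.
print(monthly_cost(100_000))  # 95.0
```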

All I need now is a project where I can apply this awesomeness…