How we experimented with Amazon Rekognition’s custom labels

Amazon Rekognition is an out-of-the-box machine learning platform for image and video analysis. We recently published a blog post that explored its potential use cases, from retail and hospitality to transport and logistics.

We also delved into the depths of its many capabilities, including custom labels. Here’s an overview from AWS:

With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can find your logo in social media posts, identify your products on store shelves, classify machine parts in an assembly line, distinguish healthy and infected plants, or detect animated characters in videos.

In essence, you can use custom labels to train a detection and recognition model on almost any dataset, for almost any application.

To test the capabilities of custom labels further, the DiUS team decided to run some experiments with Amazon Rekognition. Here’s what we discovered…

Experiment #1 – Using custom labels to detect and count animals on a farm

Our first experiment was carried out by Machine Learning Consultant Shahin Namin. The purpose of his experiment was to find out:

  1. Whether custom labels can accurately detect the presence of cows in 12 bird's-eye images, captured by drone, from two different farms. 
  2. Whether an enriched dataset featuring an additional 290 publicly available images of different farms would improve the outcome.

We used this dataset because the detection and counting of objects using computer vision is becoming increasingly popular across multiple applications. The use of images taken by drone was also a key decision, as we anticipate more applications making use of this data source in the future.

Observations and outcomes

Labelling the dataset

To annotate the data, we used Rekognition's built-in labelling tool to draw a bounding box around each object. However, this wasn't the best user experience and took some time to complete: each image takes a while to load, and you can only label nine images at a time. If you need to label the dataset yourself, we recommend using a purpose-built tool such as Amazon SageMaker Ground Truth first, then importing that data into Rekognition.
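As a rough sketch of that workflow, a Ground Truth output manifest can be imported into a Custom Labels project with a single API call. The project ARN, bucket and manifest key below are placeholders for illustration, not values from our experiment:

```python
import boto3

rekognition = boto3.client("rekognition")

# Import a SageMaker Ground Truth output manifest as the training dataset
# of an existing Custom Labels project. All identifiers are placeholders.
response = rekognition.create_dataset(
    ProjectArn="arn:aws:rekognition:ap-southeast-2:123456789012:project/cattle-counter/1234567890123",
    DatasetType="TRAIN",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {
                "Bucket": "my-labelling-bucket",
                "Name": "ground-truth/output.manifest",
            }
        }
    },
)
print(response["DatasetArn"])
```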

Quality and quantity of images make a big difference

As expected, custom labels is not as accurate with a low number of images. Thankfully, this can be mitigated by enriching the dataset. However, we also observed that the clarity and quality of images can confuse the model: shadows, contrast, brightness and backgrounds all have an impact on performance. 

The size and detail of each image also make a difference

If there is a problem with an image or its labels, custom labels will silently discard it. In our case, images with more than 50 bounding boxes were dropped from the training dataset. Overlapping bounding boxes are not handled properly either. There are also restrictions on the file size (15 MB for object recognition, 4 MB for object detection) and dimensions of training and test images.
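If you want to catch these rejections before uploading, a simple pre-flight check helps. This is a minimal sketch assuming the limits quoted above plus the 4,096-pixel dimension cap we hit later; check the current AWS quotas for your own use case:

```python
import os
from PIL import Image  # pip install Pillow

MAX_BYTES = 15 * 1024 * 1024  # file-size limit noted above (assumed here)
MAX_DIMENSION = 4096          # dimension limit we ran into in our experiments


def check_image(path: str) -> list[str]:
    """Return a list of reasons an image may be rejected by Custom Labels."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append("file is larger than 15 MB")
    with Image.open(path) as img:
        if max(img.size) > MAX_DIMENSION:
            problems.append(f"dimensions {img.size} exceed {MAX_DIMENSION}px")
    return problems
```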

Custom labels is its own ecosystem

For example, the trained model cannot be downloaded and used elsewhere, so you're committed to using custom labels. The service is also limited to image object recognition and detection – there's no support for semantic segmentation, video and so on.

Experiment #2 – Using custom labels for TimTams brand recognition

Another feature of custom labels is that it can be used for brand recognition, whether that's in social media posts or on supermarket shelves. So another of our DiUS consultants, William Infante, thought: why not give it a go with an Australian favourite – TimTams?

Detecting TimTams on social media

We tested the limits of custom labels using a small dataset (12 images: nine training images, three test images) and, even with this constraint, it was still able to detect the presence of TimTams in almost all of the Twitter posts we used.

Images of TimTams from Twitter

The custom labels service also managed to filter out posts that didn't directly relate to the TimTam brand – the image below is one example that appeared in a Twitter search because the post contained the text “TimTams”.

Imitation TimTams

The quality of the labelling data and the choice of bounding boxes also mattered. For the TimTam use case, a bounding box containing only the TimTam logo performed better than one around the whole pack. Using custom labels to distinguish different TimTam flavours could also cause confusion, especially with a limited training dataset.

Different bounding boxes

There are also ways to improve logo detection for brand recognition. Where possible, detection improves with a larger training dataset captured in different lighting conditions, or by otherwise enriching the data. But if we're limited to the same trained Rekognition model, we can still tune the results by adjusting the confidence threshold at inference time. However, we need to be careful about how low or high we set that threshold across all the images.

For some photos, a confidence threshold that is too high can lead to genuine logos being discarded.

For some photos, a threshold that is too low can lead to more false positives.
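One way to experiment with this trade-off is the MinConfidence parameter of the DetectCustomLabels API (you can also filter the returned confidences yourself). The sketch below shows the idea; the bucket, key and model ARN are placeholders, and the model version must already be running:

```python
import boto3

rekognition = boto3.client("rekognition")


def detect_logos(bucket: str, key: str, model_arn: str, min_confidence: float):
    """Run a trained Custom Labels model over one image and return its labels.

    min_confidence is the knob discussed above: raise it to cut false
    positives, lower it to catch faint or partially obscured logos.
    """
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return response["CustomLabels"]


# Example: compare a strict and a lenient threshold on the same image.
# strict = detect_logos("my-bucket", "tweet.jpg", MODEL_ARN, 90)
# lenient = detect_logos("my-bucket", "tweet.jpg", MODEL_ARN, 50)
```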

Detecting TimTams on supermarket shelves

Once again, we had a small dataset (12 images) to work with for this experiment, using the bounding boxes option. Here are our observations:

  • Custom labels can be used to detect the presence of a brand/logo in an image, but it cannot reliably count them with a dataset this small (see the sketch after this list).
  • Labels were detected more easily when the logos fed to the model were not at an angle (see image below).
  • We encountered similar issues to Shahin's animal experiment – labelling took time, and uploading images larger than 4,096 pixels caused issues. 
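For illustration, naive counting amounts to tallying the bounding boxes the model returns for a given label. The label name, model ARN and S3 locations below are hypothetical; with a training set this small, the resulting count wasn't dependable in our experiment:

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical shelf-counting sketch: count bounding boxes returned for an
# assumed "timtam_logo" label in one shelf photo. All identifiers are
# placeholders, not values from our experiment.
response = rekognition.detect_custom_labels(
    ProjectVersionArn="arn:aws:rekognition:ap-southeast-2:123456789012:project/timtam-shelf/version/1/1234567890123",
    Image={"S3Object": {"Bucket": "my-shelf-images", "Name": "aisle-photo.jpg"}},
    MinConfidence=70,
)

# Only detection results carry a Geometry (bounding box); count those.
logo_count = sum(
    1
    for label in response["CustomLabels"]
    if label["Name"] == "timtam_logo" and "Geometry" in label
)
print(f"Detected {logo_count} TimTam logos on the shelf")
```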

Our thoughts on custom labels

Custom labels is a quick, codeless way to train an object detection and recognition model for your application. Not only does it let you experiment with data easily, it also gives you a sense of whether your data is of sufficient quantity and quality. 

As a result, it doesn’t take long to implement your first benchmark, which is where we see a lot of value moving forward. For instances where custom labels doesn’t meet accuracy requirements, you can start exploring custom solutions that better serve business needs and objectives. 

For example, DiUS recently worked alongside insurtech bolttech to build a next-gen machine learning and computer vision experience. Using pioneering remote diagnostics technology, ‘Click-to-Protect’ quickly and easily onboards customers onto device protection plans. Customers simply hold their smartphone in front of a mirror and move through a sequence of tests, replacing a process that typically required a physical inspection.

The result is a zero-touch risk mitigation tool for bolttech and a best-in-class experience for customers. Click-to-Protect is currently performing at accuracy levels well in excess of targeted benchmarks. The model is continuously re-trained with libraries of images to improve the performance. 

To discover more about how Amazon Rekognition or custom ML models could meet your business needs, get in touch with DiUS.
