Improving data annotation with Superpixels

To understand the need for superpixels in segmentation, we must first understand what image segmentation is. Image segmentation consists of detecting specific regions in an image. In concrete terms, this means detecting the shapes of objects of different categories in an image. When segmenting an image, we therefore assign a class to each pixel. As an example, we can consider the task of segmenting images into the categories Circle and Square.
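
The per-pixel labeling idea can be sketched with a toy example (the tiny 4x4 "image", the class IDs and the helper below are invented purely for illustration):

```python
# Toy semantic segmentation mask: one class ID per pixel.
# 0 = background, 1 = circle, 2 = square (invented class IDs).
CLASSES = {0: "background", 1: "circle", 2: "square"}

# A 4x4 "image" already segmented: each entry is the class of that pixel.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]

def class_pixel_counts(mask):
    """Count how many pixels were assigned to each class."""
    counts = {}
    for row in mask:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    return counts

print(class_pixel_counts(mask))  # {0: 8, 1: 4, 2: 4}
```

A real mask has the same shape as the image itself; the point is simply that segmentation produces a class label for every single pixel, not a box or a point.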

In contrast to drawing bounding boxes or points, the goal of image segmentation is not only to detect the presence of given objects, but also to identify their position, dimensions and precise shape. In the example below, we segment a car using Kili’s interface:

As you might imagine, a machine learning model benefits more from pixel-level segmentation as input than from bounding boxes or points. However, as you might also imagine, it’s much harder to draw freeform polygons around objects. Kili’s platform lets you annotate with any of these methods.

When annotation is harder and takes more time, its price goes up. As a consequence, at industrial scale, semantic segmentation is only used when you need to annotate objects with very complex shapes. In most cases, bounding boxes offer the best compromise between annotation complexity and time spent, which is why they usually give you the best results for your money.

To lower the cost of segmentation, we’ve integrated smart tools into our platform. These tools allow you to generate the same or even better segmentation masks in a fraction of the time. We currently offer two such tools: interactive segmentation and superpixels. We’ve already published a whole article on interactive segmentation, a deep-learning-based annotation aid, which you can access at this link. In this article, we’ll focus on superpixels.

What are superpixels?

First, we have to understand what a pixel is. A pixel is the basic building block of an image: each pixel is a small square with a single color, and an image is a rectangular grid of these squares. A superpixel is simply a group of pixels, and there are many ways to form such groups: by proximity, by color, and so on.
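
As a minimal sketch of one such grouping, the toy snippet below merges adjacent pixels with similar gray levels into crude "superpixels" (the image values and tolerance are invented for illustration; real superpixel algorithms are far more sophisticated):

```python
from collections import deque

# Toy single-channel "image": each entry is a gray level (0-255).
image = [
    [10, 10, 200],
    [10, 200, 200],
    [50, 50, 200],
]

def group_by_color(image, tol=0):
    """Group adjacent pixels whose gray levels differ by at most `tol`.
    Returns a grid of group IDs: a very crude superpixel map."""
    h, w = len(image), len(image[0])
    groups = [[None] * w for _ in range(h)]
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if groups[sy][sx] is not None:
                continue
            groups[sy][sx] = next_id
            queue = deque([(sy, sx)])
            while queue:  # flood-fill the similar-colored region
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and groups[ny][nx] is None
                            and abs(image[ny][nx] - image[y][x]) <= tol):
                        groups[ny][nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return groups

print(group_by_color(image))  # [[0, 0, 1], [0, 1, 1], [2, 2, 1]]
```

Here the three regions of constant gray level each become one group, which is the intuition behind grouping by proximity and color.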

If you group the pixels in a smart way, you can use those groups to speed up the annotation process. Take the task of annotating the backlight of a car: the image below shows a superpixel that covers exactly the backlight, so the light can be annotated in a single click.

What’s our interest in superpixels?

As we’ve seen, there is a myriad of ways to group pixels, but only some groupings actually help the annotation process. Thus, if we can group pixels into semantically meaningful superpixels, we may be able to accelerate annotation.

At Kili, we’ve created a brush tool that lets you select superpixels very quickly. Our superpixels are semantically meaningful, and their interface is integrated with semantic segmentation, so if you want to make a correction by hand, you can.
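
The core of this interaction can be illustrated with a toy sketch (the assignment map and helper below are invented for illustration, not Kili’s implementation): clicking any pixel selects the entire superpixel it belongs to.

```python
# Toy superpixel assignment map: one superpixel ID per pixel, like the
# maps produced by SLIC-style algorithms (values invented).
assignment = [
    [0, 0, 1],
    [0, 1, 1],
    [2, 2, 1],
]

def select_superpixel(assignment, click_y, click_x):
    """Return a binary mask covering every pixel of the clicked superpixel."""
    target = assignment[click_y][click_x]
    return [[1 if label == target else 0 for label in row]
            for row in assignment]

# Clicking the top-right pixel selects all of superpixel 1 in one action.
print(select_superpixel(assignment, 0, 2))
# [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

A brush works the same way, repeating this lookup for every superpixel the cursor passes over while dragging.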

What are we interested in when using superpixels?

We’ve already talked a little about what we want from our superpixels, but it’s important to make these points clear, as they define what our superpixels look like.

  • Respect semantics: our superpixels should carry meaning, for example by respecting shape continuity, color continuity and so on. This way, you can easily define boundaries by selecting superpixels, ideally with no need for corrections by hand.
  • Shape conformity (compactness): superpixels should have similar shapes. This doesn’t mean they should all have the same pixel count, but that they should not be excessively small or have very complex shapes with thin areas [1].
  • Respect color gradients: superpixels should be separated along sharp color transitions, so that they define boundaries that machine learning models can best exploit. This aspect is especially interesting because humans are not as good as computers at locating these gradients, so using superpixels can give you more accurate and efficient boundaries.
  • Speed: computing superpixels can be very computationally intensive. Their computation must finish in a reasonable time, otherwise no time is saved compared to purely human annotation.
  • Different resolutions: it should be possible to change the size of superpixels. Size is a little tricky to define, but you can think of it as the average superpixel size or the total number of superpixels. The idea is that by changing the superpixel resolution you can make finer annotations; in the end, you should be able to annotate shapes of arbitrary size.
  • They should nest into each other: when increasing the number of superpixels by changing the resolution, boundaries should only ever be added, never deleted. Otherwise, you lose work when switching resolutions. This is especially a problem because not all algorithms can guarantee this property.
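
The nesting property from the last point can be checked programmatically. The sketch below (an invented helper with toy label maps, not Kili’s code) tests whether a finer partition only adds boundaries to a coarser one:

```python
def partitions_nest(fine, coarse):
    """True if every fine superpixel lies entirely inside a single coarse
    superpixel, i.e. increasing the resolution only adds boundaries."""
    parent = {}
    for f_row, c_row in zip(fine, coarse):
        for f, c in zip(f_row, c_row):
            if f in parent and parent[f] != c:
                return False  # a fine superpixel straddles a coarse boundary
            parent[f] = c
    return True

coarse = [[0, 0, 1],
          [0, 0, 1]]
fine_ok = [[0, 1, 2],   # splits coarse superpixel 0: only adds a boundary
           [0, 1, 2]]
fine_bad = [[0, 1, 1],  # fine superpixel 1 crosses the coarse 0/1 boundary
            [0, 1, 1]]

print(partitions_nest(fine_ok, coarse), partitions_nest(fine_bad, coarse))
# True False
```

When this check fails, switching to a finer resolution would redraw boundaries under an annotation already in progress, which is exactly the lost work the bullet describes.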

How do we calculate them?

As it’s not the objective of this article, we won’t go into the details of the computation; we’ll just outline the main methods.

There are two main classes of superpixel algorithms: graph-based and clustering-based. Graph-based methods interpret each pixel as a node in a graph whose edges represent affinities; merging nodes then produces superpixels. Clustering-based techniques use clustering methods to progressively refine clusters of pixels until some criterion is met.
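
To make the clustering-based family concrete, here is a heavily simplified, SLIC-like sketch in pure Python on a single-channel image (all names and parameters below are illustrative, not any library’s API; real implementations add a compactness weight and restrict each center’s search to a local window):

```python
def simple_slic(image, k, iters=10, spatial_weight=1.0):
    """Toy clustering-based superpixels: k-means in (y, x, intensity) space."""
    h, w = len(image), len(image[0])
    # Initialize cluster centers on a regular grid.
    step = max(1, int((h * w / k) ** 0.5))
    centers = [[y, x, image[y][x]]
               for y in range(step // 2, h, step)
               for x in range(step // 2, w, step)]
    assignment = [[0] * w for _ in range(h)]
    for _ in range(iters):
        # Assign each pixel to the nearest center in (y, x, intensity) space.
        for y in range(h):
            for x in range(w):
                assignment[y][x] = min(
                    range(len(centers)),
                    key=lambda i: spatial_weight * ((y - centers[i][0]) ** 2
                                                    + (x - centers[i][1]) ** 2)
                    + (image[y][x] - centers[i][2]) ** 2)
        # Move each center to the mean of its assigned pixels.
        for i in range(len(centers)):
            members = [(y, x) for y in range(h) for x in range(w)
                       if assignment[y][x] == i]
            if members:
                centers[i] = [
                    sum(y for y, _ in members) / len(members),
                    sum(x for _, x in members) / len(members),
                    sum(image[y][x] for y, x in members) / len(members)]
    return assignment

# Toy image: dark left half, bright right half.
image = [[0, 0, 255, 255] for _ in range(4)]
labels = simple_slic(image, k=2, iters=5)
```

On this toy image the large intensity gap dominates the spatial term, so no resulting cluster mixes dark and bright pixels, which is the "respect color gradients" behavior described above.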

Currently, we use state-of-the-art graph-based methods, and we’re continuously improving our algorithms and the speed of our superpixels. We also adapt our algorithms to make sure the most suitable one is used for each image.

A good starting point if you want to experiment with superpixels is the fast Python implementation of the SLIC algorithm available in [2].

Our superpixels are especially good at tasks where the objects are proportionally small compared to the rest of the image and have well-defined colors, as shown below. For most cases where superpixels are not ideal, our interactive segmentation tool should work well.

How to use it in Kili’s interface

It’s very simple to use superpixels on Kili’s platform. First, create an image project (of any kind, via the interface or via the API). Then go to your project’s settings and add an image semantic segmentation job with superpixels enabled, as shown in the image below:

Then use the start labeling button to go to the labeling interface. Now you just need to click on the superpixels button to generate the superpixels.

You can then select a category and select superpixels by clicking or dragging. You can also change the superpixel size to control the level of granularity and get more precision on some parts of the image.

Conclusion

At Kili, we strongly believe that the quality of the data is at the heart of a revolution in machine learning. With good data we can make the most of our models, and that’s the mission of our platform. Using smart tools such as superpixels lowers the cost of pixel-perfect segmentation and therefore lets you train your models with the best data. We invite you to try out our platform and our intelligent tools. For more information about our tools, see the docs: https://cloud.kili-technology.com/docs/image-interfaces/segmentation/#superpixel.

References

[1] A. Schick, M. Fischer and R. Stiefelhagen, “Measuring and evaluating the compactness of superpixels,” Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012, pp. 930-934.

[2] https://github.com/Algy/fast-slic

[3] L. Cai, X. Xu, J. Liew and C. S. Foo, “Revisiting Superpixels for Active Learning in Semantic Segmentation With Realistic Annotation Costs,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 10988-10997.
