
How to automate QA in image annotation to build high-quality image datasets faster


Advances in Image Annotation for Machine Learning

Annotated images now power a wide range of applications:

  1. Satellite and Aerial Imagery: Satellite and aerial imagery supports applications such as precision agriculture, disaster management, land cover classification, and environmental monitoring. Annotated images are immensely valuable here because they show the model which parts of a scene are relevant and of interest.

  2. Medical Imaging: Annotations on medical images help in diagnosing and predicting various illnesses. Applications include tumor detection, cancer detection, disease classification, and tracking treatment progress.

  3. Autonomous Vehicles: Autonomous vehicles rely on machine-learning-based perception systems that identify objects in their surroundings. Annotated images support tasks such as pedestrian detection, vehicle detection, lane detection, and recognition of traffic signs and other obstacles.

  4. E-commerce and Retail: Online e-commerce and retail platforms rely on visual search algorithms. Image annotations enhance visual search by tagging products within images, making it easy to recommend relevant items to users.

  5. Security and Surveillance: In the security and surveillance industry, image annotations help monitor environments and prevent theft or accidents. They support tasks such as crowd detection, person/activity-of-interest detection, vehicle tracking, night vision, and face identification.

Over the years, image annotation has evolved considerably. Initially, annotation could only be done manually, which was labor intensive and depended entirely on expert human annotators to provide accurate labels or regions of interest. This approach had very limited scalability and required enormous amounts of time to label even a few images. More recently, advances in machine learning and computer vision have made it possible to assist the process with algorithms, turning it into a fully automatic or semi-automatic workflow. Automating parts of the annotation workflow makes the process faster and more efficient.

The expansion of AI applications across industries, the rise of high-quality image-capturing devices, and extensive research in deep learning have made high-quality labeled data more important than ever. Because machine learning models are data dependent, high-quality input data is necessary to obtain good-quality output. Moreover, since AI is deployed in fields such as healthcare and autonomous driving, where a small failure can cause major risks, it is critical to ensure that models are trained on high-quality labeled data.

If you want to learn more about image annotation pricing, we have an article here.

The Impact of SAM on Image Annotation

Segment Anything Model (SAM) is a foundation model for computer vision built by Meta that generates segmentation masks for almost any kind of object. For image annotation, the model can be used in a zero-shot or one-shot setting to generate accurate annotations in real time, even for objects it has never seen before. It speeds up labeling by automatically generating a wide range of masks from a single click, which can later be fine-tuned as needed. Using SAM brings improved accuracy, faster annotation, easier scalability, and simpler segmentation of complex shapes.
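For instance, SAM can be run in a fully automatic mode that proposes masks for everything it detects in an image. Below is a minimal sketch using Meta's open-source segment_anything package; the checkpoint file and the placeholder image are assumptions you would replace with your own:

import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM checkpoint (downloaded separately from Meta's repository).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder: use a real RGB image
masks = mask_generator.generate(image)

# Each entry contains a binary mask plus metadata such as area and a stability score.
print(f"{len(masks)} candidate masks generated")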

Kili Technology embeds SAM in its data labeling platform to let data labelers speed up their annotation tasks. Kili's Interactive Semantic Segmentation capability runs SAM in the background to generate high-quality masks, saving annotators the time of drawing each target's boundary manually. This SAM-powered labeling allows all-purpose auto-labeling with a single click, labeling objects composed of multiple parts at different granularities, and annotating partially occluded objects. The input prompt for SAM can be either a click or a bounding box, so created masks are easy to adjust. To add a specific region to the mask, click at the center of that region and it is added automatically. Likewise, to remove a region, hold the Alt/Option key and click at the center of the region to be removed; pressing the Escape key removes the mask entirely. An alternative, faster way to annotate objects with Kili's SAM implementation is the bounding-box tool, which targets the object you want to annotate more precisely.
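These click and box interactions map naturally onto the prompt interface of Meta's open-source segment_anything package. The sketch below illustrates that interface rather than Kili's internal code: a positive click adds a region, a negative click (the Alt/Option-click equivalent) removes one, and a box prompt targets a whole object. Coordinates and the checkpoint file are placeholders:

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder: use a real RGB image
predictor.set_image(image)

# Point prompts: label 1 marks a foreground click, label 0 a background click.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375], [620, 400]]),
    point_labels=np.array([1, 0]),
    multimask_output=True,  # returns several candidate masks with confidence scores
)

# Box prompt (x0, y0, x1, y1), often the fastest way to target a single object.
masks, scores, _ = predictor.predict(
    box=np.array([425, 300, 600, 475]),
    multimask_output=False,
)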

Challenges in Automating Image Annotation

Despite the advances brought by SAM, several challenges persist in automating image annotation.

  • SAM struggles with some domain-specific images, which leads to inaccurate annotations.

  • SAM works well on sharp foreground objects but can struggle with blurry or unclear objects in the background.

  • When objects are camouflaged, transparent, or otherwise blend into their surroundings, SAM fails to segment them accurately.

  • In tasks such as background segmentation, SAM tends to favor foreground masks, which hurts its performance on problems such as shadow detection.

  • SAM performs poorly on low-quality images, and it struggles to detect low-resolution (tiny) objects in scenarios such as aerial imagery.

Hence, we need an approach that makes the best use of SAM while overcoming these challenges with minimal human intervention. This is where programmatic QA comes into the picture. Programmatic QA implements automatic checks and balances that help ensure annotation quality and build high-quality datasets faster.


Programmatic QA: The key to building high-quality datasets faster

Programmatic QA is a method for automating quality checks in the annotation process. With Kili Technology, you can use plugins to automate your quality checks: write your business rules in a Python script and upload it to Kili, and each new label will be checked automatically. To better understand Kili plugins for programmatic QA, consider a simple example project for processing invoices. One of the project's jobs is to verify the payment information by checking that the IBAN is valid (it must start with FR) and that the currency is either EUR or USD. These rules are checked when the annotator clicks Submit. If any rule is violated, issues are added to the asset and it is sent back to the labeling queue.
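As a sketch of what such business rules could look like in plain Python, here is a hypothetical check for this invoice project. The field names (iban, currency) and the payment-info structure are illustrative assumptions, not Kili's actual schema:

from typing import Dict, List

def check_invoice_rules(payment_info: Dict) -> List[str]:
    """Return one issue message per violated rule (hypothetical invoice schema)."""
    issues = []
    if not payment_info.get("iban", "").startswith("FR"):
        issues.append("IBAN must start with FR")
    if payment_info.get("currency") not in ("EUR", "USD"):
        issues.append("Currency must be EUR or USD")
    return issues

# Example: an invoice with a non-French IBAN and an unsupported currency.
print(check_invoice_rules({"iban": "DE89370400440532013000", "currency": "GBP"}))
# ['IBAN must start with FR', 'Currency must be EUR or USD']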

Implementing Programmatic QA with Kili Technology

Let’s walk through creating a plugin for programmatic QA step by step with Kili Technology, on the following use case: making sure that there is at most one “Object A” bounding box per image. A similar process can be applied to count the polygons annotated by SAM.

Step 1: Instantiate Kili

We start by installing the Kili Python SDK, which provides all the classes and functions used below. The Kili() client authenticates with your API key, read from the KILI_API_KEY environment variable or passed explicitly via Kili(api_key=...).

%pip install kili

from kili.client import Kili

kili = Kili()

Step 2: Create the project

First, we create a new project of type ‘IMAGE’. The JSON interface below defines a bounding-box job with two categories, Object A and Object B.

json_interface = {
    "jobs": {
        "JOB_0": {
            "content": {
                "categories": {
                    "OBJECT_A": {
                        "children": [],
                        "name": "Object A",
                        "color": "#733AFB",
                        "id": "category1",
                    },
                    "OBJECT_B": {
                        "children": [],
                        "name": "Object B",
                        "color": "#3CD876",
                        "id": "category2",
                    },
                },
                "input": "radio",
            },
            "instruction": "Categories",
            "isChild": False,
            "tools": ["rectangle"],
            "mlTask": "OBJECT_DETECTION",
            "models": {},
            "isVisible": True,
            "required": 1,
            "isNew": False,
        }
    }
}

Next, we define the title, description, and input type of the project, call create_project() to create it, and extract the project ID for later use.

title = "Plugins test project"
description = "My first project with a plugin"
input_type = "IMAGE"

project = kili.create_project(
    title=title, description=description, input_type=input_type, json_interface=json_interface
)
project_id = project["id"]

print(f"Created project {project_id}")

Now we upload an asset and keep track of its asset ID.

content_array = ["https://storage.googleapis.com/label-public-staging/car/car_1.jpg"]
names_array = ["landscape"]

kili.append_many_to_dataset(
    project_id=project_id, content_array=content_array, external_id_array=names_array
)

asset_id = kili.assets(project_id=project_id, fields=["id"], disable_tqdm=True)[0]["id"]

The project contains a single bounding-box creation job with two categories.

Step 3: Write the plugin

Here, we define the function check_rules_on_label() to ensure that there is at most one bounding box of category Object A; it prevents labelers from creating multiple bounding boxes for that object. Next, we define a PluginHandler class with an on_submit() method that runs when the labeler presses the Submit button. It applies our custom check to the submitted label; if any rule is violated, an issue is attached to the asset and the asset is sent back to the labeling queue.

from typing import Dict, List, Optional

from kili.plugins import PluginCore


def check_rules_on_label(label: Dict) -> List[Optional[str]]:
    """Custom rule: at most one bounding box of category Object A."""
    print("Custom method - checking number of bboxes")

    counter = 0
    for annotation in label["jsonResponse"]["JOB_0"]["annotations"]:
        if annotation["categories"][0]["name"] == "OBJECT_A":
            counter += 1

    if counter <= 1:
        return []
    return [f"There are too many BBox ({counter}) - Only 1 BBox of Object A accepted"]


class PluginHandler(PluginCore):
    """Custom plugin instance"""

    def on_submit(self, label: Dict, asset_id: str) -> None:
        """Dedicated handler for the Submit action"""
        self.logger.info("On submit called")

        issues_array = check_rules_on_label(label)

        project_id = self.project_id

        if len(issues_array) > 0:
            print("Creating an issue...")

            self.kili.create_issues(
                project_id=project_id,
                label_id_array=[label["id"]] * len(issues_array),
                text_array=issues_array,
            )

            print("Issue created!")

            self.kili.send_back_to_queue(asset_ids=[asset_id])

The code below downloads the example plugin file into a local folder.

import urllib.request
from pathlib import Path

plugin_folder = "plugin_folder"

Path(plugin_folder).mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/kili-technology/kili-python-sdk/main/recipes/plugins_library/plugin_image.py",
    "plugin_folder/main.py",
)

Step 4 (a): Upload the plugin from a folder

With the plugin defined in a separate Python file, you can create a folder containing:

  • A main.py file, the entrypoint of the plugin, which must contain a PluginHandler class that implements PluginCore.

  • (Optionally) a requirements.txt, if your plugin needs specific PyPI packages.

folder/
    main.py
    requirements.txt

  • The upload will create the necessary builds to execute the plugin (this might take a few minutes).

  • After activation, you can start using your plugin right away.

Here is an example of a requirements.txt file:

numpy
scikit-learn
pandas==1.5.1
git+https://github.com/yzhao062/pyod.git

The code below writes this file into the plugin folder, then uploads and activates the plugin:

requirements_path = Path(plugin_folder) / "requirements.txt"

packages_list = [
    "numpy\n",
    "scikit-learn\n",
    "pandas==1.5.1\n",
    "git+https://github.com/yzhao062/pyod.git\n",
]

with requirements_path.open("w") as f:
    f.writelines(packages_list)

plugin_name = "Plugin bbox count"

from kili.exceptions import GraphQLError

try:
    kili.upload_plugin(plugin_folder, plugin_name)
except GraphQLError as error:
    print(str(error))

kili.activate_plugin_on_project(plugin_name=plugin_name, project_id=project_id)

Step 4 (b): Upload the plugin from a ‘.py’ file

One can also create plugins directly from a .py file.

  • The upload will create the necessary builds to execute the plugin (this might take a few minutes).

  • After activation, you can start using your plugin right away.

path_to_plugin = Path(plugin_folder) / "main.py"
plugin_name_file = "Plugin bbox count - file"

try:
    kili.upload_plugin(str(path_to_plugin), plugin_name_file)
except GraphQLError as error:
    print(str(error))

kili.activate_plugin_on_project(plugin_name=plugin_name_file, project_id=project_id)

Step 5: Plugin in action

Once the plugin is successfully deployed, you can test it by labeling in the Kili interface or by uploading the label below. Since this label violates the rule, a new issue will automatically be created in the Kili app.

json_response = {
    "JOB_0": {
        "annotations": [
            {
                "boundingPoly": [
                    {
                        "normalizedVertices": [
                            {"x": 0.15, "y": 0.84},
                            {"x": 0.15, "y": 0.31},
                            {"x": 0.82, "y": 0.31},
                            {"x": 0.82, "y": 0.84},
                        ]
                    }
                ],
                "categories": [{"name": "OBJECT_A"}],
                "children": {},
                "mid": "20221124161451411-13314",
                "type": "rectangle",
            },
            {
                "boundingPoly": [
                    {
                        "normalizedVertices": [
                            {"x": 0.79, "y": 0.20},
                            {"x": 0.79, "y": 0.13},
                            {"x": 0.91, "y": 0.13},
                            {"x": 0.91, "y": 0.20},
                        ]
                    }
                ],
                "categories": [{"name": "OBJECT_A"}],
                "children": {},
                "mid": "20221124161456406-47055",
                "type": "rectangle",
            },
            {
                "boundingPoly": [
                    {
                        "normalizedVertices": [
                            {"x": 0.87, "y": 0.36},
                            {"x": 0.87, "y": 0.27},
                            {"x": 0.99, "y": 0.27},
                            {"x": 0.99, "y": 0.36},
                        ]
                    }
                ],
                "categories": [{"name": "OBJECT_A"}],
                "children": {},
                "mid": "20221124161459298-45160",
                "type": "rectangle",
            },
        ]
    }
}

kili.append_labels(
    json_response_array=[json_response], asset_id_array=[asset_id], label_type="DEFAULT"
)

If you use the base plugin provided, the plugin should:

  • Create an issue stating that three bounding boxes were found instead of one.

  • Send the asset back to the labeling queue (status ONGOING).

print(
    kili.assets(project_id=project_id, asset_id=asset_id, fields=["status", "issues.comments.text"])
)

print(
    f"Go to my project: {kili.api_endpoint.split('/api')[0]}/label/projects/{project_id}/menu/queue"
)

Let's now post a proper label without errors:

json_response = {
    "JOB_0": {
        "annotations": [
            {
                "boundingPoly": [
                    {
                        "normalizedVertices": [
                            {"x": 0.15, "y": 0.84},
                            {"x": 0.15, "y": 0.31},
                            {"x": 0.82, "y": 0.31},
                            {"x": 0.82, "y": 0.84},
                        ]
                    }
                ],
                "categories": [{"name": "OBJECT_A"}],
                "children": {},
                "mid": "20221124161451411-13314",
                "type": "rectangle",
            }
        ]
    }
}

kili.append_labels(
    json_response_array=[json_response], asset_id_array=[asset_id], label_type="DEFAULT"
)

print(kili.assets(project_id=project_id, asset_id=asset_id, fields=["status"]))

print(
    f"Go to my project: {kili.api_endpoint.split('/api')[0]}/label/projects/{project_id}/menu/queue"
)

The status of your asset should now have changed to LABELED. With this plugin, previously created issues remain open, but they can also be resolved through the API.
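As a sketch, you can inspect the project's issues through the SDK; the fields listed here (id, status) are an assumption about the issue schema, so check the SDK reference for the exact names:

# List the project's issues (field names are an assumption; see the SDK docs).
issues = kili.issues(project_id=project_id, fields=["id", "status"])
print(issues)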

Well done! You can now iterate on the script. To learn how to avoid latency when building and deploying your plugin, refer to the plugins development tutorial.

Step 6: Monitor the plugin

The following code can be used to monitor the logs of a certain plugin.

import json
from datetime import date, datetime

dt = date.today()  # change this date if needed, or omit it to default to the plugin creation date
start_date = datetime.combine(dt, datetime.min.time())

logs = kili.get_plugin_logs(project_id=project_id, plugin_name=plugin_name, start_date=start_date)

logs_json = json.loads(logs)
print(json.dumps(logs_json, indent=4))

Step 7: Manage the plugin

To get the list of all uploaded plugins in your organization, refer to the code below.

plugins = kili.list_plugins()

The following code can be used to update an existing plugin.

# Insert the path to the updated plugin
new_path_to_plugin = Path(plugin_folder) / "main.py"

# Change to True if you want to update the plugin
should_update = False

if should_update:
    kili.update_plugin(plugin_name=plugin_name, plugin_path=str(new_path_to_plugin))

Deactivate the plugin on a certain project (the plugin can still be active for other projects):

kili.deactivate_plugin_on_project(plugin_name=plugin_name, project_id=project_id)

Delete the plugin completely (deactivates the plugin from all projects):

delete_plugin_from_org = False  # set to True to delete the plugin for the whole organization

if delete_plugin_from_org:
    kili.delete_plugin(plugin_name=plugin_name)

Additional tips on ensuring high quality data with Kili Technology

Consensus Overview:

Consensus refers to having more than one labeler annotate the same asset. Once labeling of the asset is complete, a consensus score is calculated, measuring the level of agreement between the different annotations of the same asset. This is an important measure for maintaining label production quality.

For example, in image segmentation, the multiple annotators participating in consensus each annotate segmentation masks for the same image. These annotations are then combined into a single segmentation mask, with the consensus score indicating how similar the individual annotations are. Masks drawn by different people capture the range of plausible annotations and, through aggregation, help achieve a better final result.
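As a simplified illustration of how agreement between two segmentation masks can be measured (this is not Kili's exact consensus formula, which the platform computes for you), consider intersection over union (IoU):

import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two boolean masks of the same shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection / union) if union > 0 else 1.0

# Two toy 4x4 masks from two annotators.
annotator_1 = np.zeros((4, 4), dtype=bool)
annotator_1[1:3, 1:3] = True
annotator_2 = np.zeros((4, 4), dtype=bool)
annotator_2[1:3, 1:4] = True

print(f"Agreement (IoU): {mask_iou(annotator_1, annotator_2):.2f}")  # 0.67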

The consensus process usually works as follows. The assets selected by the algorithm to participate in consensus are placed at the top of the queue with an ONGOING status. Once they have been annotated by the defined number of labelers, their status is updated to LABELED.

Example of a project with consensus 100% for 2 people:

  1. If an asset ‘A’ is chosen for consensus and has two labels, then the status of ‘A’ is LABELED.

  2. If the project is updated to consensus 100% for three people, the status of ‘A’ becomes ONGOING again.

Example of a project with consensus 100% for 3 people:

  1. If an asset ‘B’ is chosen for consensus and has two labels, then the status of ‘B’ is ONGOING.

  2. If the project is updated to consensus 100% for two people, the status of ‘B’ becomes LABELED.
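You can also retrieve consensus scores programmatically. In this sketch, the consensusMark field name is an assumption based on the API's naming conventions; verify it against the SDK reference:

# List labels with their consensus scores (consensusMark is an assumed field name).
labels = kili.labels(project_id=project_id, fields=["id", "consensusMark"])
print(labels)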

For information on label statuses, refer to Asset lifecycle.

Honeypot Overview:

Honeypot (a.k.a. gold standard) is a tool for measuring annotation accuracy by auditing the work of labelers. It calculates the level of agreement between the ground truth and the labels created by annotators, by interspersing ground-truth assets within the annotation queue. Assets used as honeypots are distributed intelligently and sent for annotation to all project labelers.

For example, in image segmentation, a honeypot could be an image with a known, correct ground-truth segmentation. This image is mixed into the queue of images to be labeled. Since the honeypot image is indistinguishable from the others, the labeler annotates it like any other image; those annotations are then compared with the ground truth to produce a honeypot score, which measures the labeler's accuracy.

For details on how to set assets as Honeypot, refer to How to use honeypot in your project.

Honeypot computation rules

The computation of Honeypot takes place under two conditions:

  • The labeled asset is marked as a honeypot (programmatically, the isHoneypot property is set to True).

  • A label of type Review exists for this asset, which is used as the ground truth. If more than one label of type Review exists, the last one is used as the ground truth.

Once these two conditions are fulfilled, the asset appears randomly in the annotation queues of the project's members.

Lastly, when a labeler submits a label, the honeypot metrics are updated.
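As a sketch, an asset can be marked as a honeypot through the SDK. The is_honeypot_array parameter of update_properties_in_assets is our assumption from the SDK's naming conventions, so verify it against the SDK reference:

# Mark an asset as a honeypot programmatically (parameter name is an assumption).
kili.update_properties_in_assets(asset_ids=[asset_id], is_honeypot_array=[True])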

Conclusion: The future of image annotation

To conclude, the future of image annotation will be shaped by advanced automated techniques such as SAM, which generate accurate labels quickly in a zero-shot setting, without training on a specialized dataset. Such methods save a great deal of time compared with manual annotation. However, because these methods come with their own challenges and considerations, it is important to keep a check on the quality of the annotations they generate. Kili Technology's plugins for programmatic QA perform automatic checks and balances, helping ensure that label quality is maintained and that good-quality datasets can be built faster.
