
From Manually Reviewing Assets to Programmatic Error Spotting

Ensuring data quality can be the most difficult part of developing a reliable model. Based on feedback from our data scientist community, we decided to add an extra layer to our labeling tool by enabling plugins in the labeling flow. Discover how.


Labeling high-quality data is the biggest challenge

If you have been labeling data, you know that even the best-performing labelers make mistakes: labeling items in the background when the instructions say not to, labeling 35 vertebrae when humans only have 33, and the list goes on. When building a training dataset, reducing or eliminating labeling errors is essential.

Ensuring data quality can be the most difficult part of developing a reliable model. This is because it requires coordinating human intelligence, modeling expertise, project management, and the technology that binds them all together.

Imagine having to label a dataset of 10,000 documents with no way to automate reviews or to enforce standard labeling guidelines automatically with immediate feedback. This is the reality for the majority of labeling teams today.

So, to help labelers detect errors faster and reduce the error rate, we asked our free users and customers for their input, and one idea came up frequently.

“I think having the ability to create self-checker based on specific binary tasks would take some of the mental load away from the annotators.”

Machine Learning Engineer

“I’ve custom coded a flow to spot basic errors automatically, like a wrong format for a date or currency. It has helped our annotators.”

Machine Learning Engineer

“What would help us is the ability to expand our QA to programmatic and not just do it manually.”

Machine Learning Engineer

Mixing custom code & labeling tools to label better

Many of our users had custom code running in parallel to improve the efficiency and quality of their labeling. We decided to provide an additional layer to our labeling tool by enabling the addition of plugins to the labeling flow.

Now, you can add your own custom plugins to the labeling tool. After uploading your newly-developed plugin, all you need to do is set it up to be triggered on a specific set of events.
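
As a rough illustration, here is what such a plugin could look like in Python. The class name, the `on_submit` event hook, and the label payload are hypothetical placeholders rather than the tool's actual plugin API; they only sketch the idea of custom code reacting to a labeling event.

```python
# Hypothetical plugin sketch: class name, hook name, and payload shape
# are illustrative assumptions, not the labeling tool's actual plugin API.

class MyQualityPlugin:
    """Custom checks that run every time a labeler submits an asset."""

    def on_submit(self, asset_id: str, label: dict) -> list[str]:
        """Triggered on the 'label submitted' event; returns issues to flag."""
        issues = []

        # Example rule: the project instructions forbid labeling background items.
        for annotation in label.get("annotations", []):
            if annotation.get("category") == "BACKGROUND":
                issues.append(f"Asset {asset_id}: background item labeled.")

        return issues
```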

This new level of modularity will make your labeling process more efficient and robust.

With a plugin, each time an asset is labeled, errors are flagged immediately. Several use cases have already been put into practice. In the example below, a plugin automatically spots invoices where the VAT number does not start with “FR” or the currency is neither EUR nor USD.
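
A minimal sketch of that rule, assuming the plugin receives the extracted invoice fields as a plain dictionary (the field names `vat_number` and `currency` are assumptions made for illustration):

```python
import re

ALLOWED_CURRENCIES = {"EUR", "USD"}

def check_invoice(fields: dict) -> list[str]:
    """Flag invoices whose VAT number or currency breaks the labeling rules."""
    issues = []

    # Project rule: the VAT number must start with "FR".
    vat = fields.get("vat_number", "")
    if not re.match(r"^FR", vat):
        issues.append(f"VAT number '{vat}' does not start with 'FR'.")

    # Project rule: only EUR and USD are accepted currencies.
    currency = fields.get("currency", "")
    if currency not in ALLOWED_CURRENCIES:
        issues.append(f"Currency '{currency}' is not EUR or USD.")

    return issues

# Example: this invoice would be flagged twice.
print(check_invoice({"vat_number": "DE123456789", "currency": "GBP"}))
```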

Many other use cases can be thought of:

  • In document processing, a plugin performs regex checks, field validation, and pair-value checks on extracted entities

  • In video labeling, a plugin enforces a maximum bounding-box size to ensure individual cells are labeled rather than clumps of cells (a sketch follows below)

  • In medical imaging, a plugin checks that the number of vertebrae labeled stays within a minimum and maximum

And many more are possible!
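
As a rough sketch of the bounding-box rule from the video example above, assuming boxes arrive as normalized width/height values and a project-chosen size threshold (both are assumptions for illustration):

```python
# Illustrative sketch only: the bounding-box format and thresholds are assumptions.
MAX_RELATIVE_WIDTH = 0.10   # a single cell should not span more than 10% of the frame
MAX_RELATIVE_HEIGHT = 0.10

def check_bounding_boxes(boxes: list[dict]) -> list[str]:
    """Flag boxes that are too large to be a single cell (likely a clump)."""
    issues = []
    for i, box in enumerate(boxes):
        if box["width"] > MAX_RELATIVE_WIDTH or box["height"] > MAX_RELATIVE_HEIGHT:
            issues.append(f"Box {i} is larger than a single cell; is it a clump?")
    return issues
```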

If you want to give it a go, you can see how to get started here.

As part of our goal to make labeling high-quality datasets easier, we're looking at expanding programmatic QA in labeling and taking plugin usage beyond QA: workflow automation, pre-labeling, ML CI/CD orchestration, and more!


Get started! Build better data, now.