Date: 2022-11-03 21:00
Read: 5 min

Using Kili Technology to work with YOLO v7

In this tutorial, we will show how to work with Kili and YOLO v7 to produce a SOTA-grade object detection system


Working with YOLO v7

To train a model with YOLO v7, you have to create a data.yaml file that describes the image and label data layout and lists the classes that you want to detect. You also have to organize your data accordingly. The Kili CLI helps you bootstrap this step and does not require any project-specific setup.

The YOLO v7 data.yaml file

Here is an example of a YOLO v7 data.yaml file, taken from the repository (here, for the COCO 2017 dataset).

# COCO 2017 dataset http://cocodataset.org
 
# download command/URL (optional)
download: bash ./scripts/get_coco.sh
 
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ./coco/train2017.txt  # 118287 images
val: ./coco/val2017.txt  # 5000 images
test: ./coco/test-dev2017.txt  # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
 
# number of classes
nc: 80
 
# class names
names: [ 'person', 'bicycle', …]

Enter the Kili CLI

Let’s go through the steps of creating the data.yaml file so you can use your Kili data to train a YOLO v7 model.

You can integrate the following steps into a kili_to_yolov7.sh file, or play along with this Colab notebook.

Shape the Kili data

Let’s suppose that you have an object detection project in Kili. For example, we loaded these images from the cyclist Kaggle dataset.

Make sure that you set up:

  • The KILI_PROJECT_ID environment variable (the ID of a Kili project that contains your object detection job). You will use this job’s annotations to train a YOLO v7 object detector.

  • The KILI_API_KEY environment variable that contains the API key of your Kili account.

Once done, you are ready to go!
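For example, in a POSIX shell you can set both variables as follows (the values below are placeholders; substitute your own project ID and API key):

```shell
# Placeholder values: replace with your own Kili project ID and API key
export KILI_PROJECT_ID="cl9zyx0example0001"
export KILI_API_KEY="your-kili-api-key"
```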

First, export the annotations with the following command:

kili project export --project-id $KILI_PROJECT_ID --verbose --output-format yolo_v7 --output-file your_export.zip

Then unzip the files:

mkdir my_dataset/

unzip -q your_export.zip -d my_dataset/

This will create a dataset folder with the following structure:

my_dataset
├── README.kili.txt
├── data.yaml
├── images
└── labels

It is now time to create train, validation, and test datasets that are needed to make yolo_v7 happy, with a random strategy (50% train, 30% val, 20% test). The following command randomly dispatches each asset path into one file among train.txt, test.txt, and val.txt.

find `pwd`/my_dataset/images -name "*.jpg" | awk '{v=rand();if(v<0.5) {print >
"train.txt"} else if (v>0.8) {print > "test.txt"} else {print > "val.txt"}}'
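If you want to see how this split behaves before running it on your own export, here is a small self-contained demo on synthetic paths (the file names are made up; the awk logic is the same as above, with a fixed seed so the run is reproducible):

```shell
# Generate 1000 fake image paths, then split them ~50/30/20 like the command above
seq 1 1000 | sed 's|^|/tmp/fake_images/img_|; s|$|.jpg|' > all_images.txt

awk 'BEGIN { srand(42) }                          # fixed seed: reproducible split
     { v = rand()
       if      (v < 0.5) { print > "train.txt" }  # ~50% train
       else if (v > 0.8) { print > "test.txt"  }  # ~20% test
       else              { print > "val.txt"   }  # ~30% val
     }' all_images.txt

wc -l train.txt val.txt test.txt   # every path lands in exactly one file
```

The three counts always sum to the number of input paths, so no image is dropped or duplicated across splits.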

And finally, you can append the paths of the split files to the generated data.yaml:

echo "train: /path/to/my_dataset/train.txt" >> /path/to/my_dataset/data.yaml
echo "val: /path/to/my_dataset/val.txt" >> /path/to/my_dataset/data.yaml
echo "test: /path/to/my_dataset/test.txt" >> /path/to/my_dataset/data.yaml

In the end, your data.yaml should look like this:

names: ['BICYCLE']
train: /path/to/my_dataset/train.txt
val: /path/to/my_dataset/val.txt
test: /path/to/my_dataset/test.txt
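Before launching a training run, it is worth checking that every path listed in the split files actually resolves to a file on disk; broken paths are a common cause of cryptic YOLO errors. A minimal sketch of such a check, run here against a throwaway demo_dataset folder rather than your real export:

```shell
# Build a tiny demo dataset so the check can run standalone
mkdir -p demo_dataset/images
touch demo_dataset/images/a.jpg demo_dataset/images/b.jpg
printf '%s\n' "$PWD/demo_dataset/images/a.jpg" > demo_dataset/train.txt
printf '%s\n' "$PWD/demo_dataset/images/b.jpg" > demo_dataset/val.txt

# Count paths that do not resolve to an existing file (should be 0 here)
missing=0
for split in demo_dataset/train.txt demo_dataset/val.txt; do
  while IFS= read -r img; do
    [ -f "$img" ] || missing=$((missing + 1))
  done < "$split"
done
echo "missing files: $missing"
```

Point the loop at your real train.txt, val.txt, and test.txt to validate the actual dataset.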

Training a YOLO v7 model

To use YOLO v7, you first need to install the YOLO v7 repository following these instructions, and make sure to download the initial model weights with:

wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt

We are now ready to use YOLO v7! First, you can run the YOLO v7 training with:

python train.py --workers 8 --batch-size 16 --data /path/to/your/data.yaml --img 320 320 --cfg cfg/training/yolov7-tiny.yaml --weights yolov7.pt --name yolov7 --hyp data/hyp.scratch.tiny.yaml

Importing predictions back into Kili

Once done, you can import the predictions back into Kili. This can help you diagnose labeling or model errors, or serve as a source of pre-annotations.

First, run the YOLO v7 detection using the model you have generated in the previous step:

python detect.py --weights path/to/the/trained/model/best.pt --conf 0.25 --img-size 640 --source /content/my_dataset/images/ --save-txt --project path/where/to/store/labels
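The files that detect.py writes with --save-txt follow the standard YOLO label format: one line per detected box, with the class index followed by the box center coordinates and dimensions, all normalized to the image size (if you also pass --save-conf, a confidence score is appended). A quick illustration with made-up values:

```shell
# Illustrative label file (values are made up): class_id x_center y_center width height
cat > sample_label.txt <<'EOF'
0 0.512 0.430 0.120 0.260
0 0.810 0.655 0.095 0.210
EOF

# Pretty-print each detection
awk '{ printf "class %s: center=(%s, %s) size=%sx%s\n", $1, $2, $3, $4, $5 }' sample_label.txt
```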

Then import the results into Kili. You need to specify a model name of your choice to be able to identify the predictions, and the target job in your original project ontology (here, JOB_0).

kili project label path/where/to/store/labels/exp/labels/ --project-id $KILI_PROJECT_ID --prediction --model-name my-YOLO-v7 --metadata-file my_dataset/data.yaml --target-job JOB_0 --input-format yolo_v7 --api-key=$KILI_API_KEY

Now you can visualize the predictions in the Kili interface!

[Screenshot: visualizing the predictions in the Kili Technology interface]

Note that you can also specify the project ID of another project, provided that it has the same ontology as the original project. This way, you can use the model-generated annotations to bootstrap human annotation: a human annotator only has to validate or correct the model’s predictions. If these predictions are accurate enough, the validation-to-correction ratio will be high, saving you a great deal of annotation time.

Conclusion

In this tutorial, we showed you how to export the data of an object detection project created in Kili to the YOLO v7 format, train a model with YOLO v7, and then push the predictions back to a Kili project. Note that you can also do the same operations with YOLO v4- and v5-compatible models; simply change the label format in the Kili import and export!


An article by Pierre Leveau, Lead Machine Learning Engineer at Kili Technology
