Prompt Engineering
The GroundingDino model encodes text prompts into a learned latent space. Modifying a prompt yields different text features, which can change the detector's predictions. To improve prediction accuracy, it is worth experimenting with several prompts and selecting the one that gives the best results. While writing this article, several prompts were tried before finding the ideal one, sometimes with unexpected outcomes.
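Once the repository is set up (see "Getting Started" below), one simple way to compare prompts is to run the same image through the inference script with each candidate and inspect the outputs side by side. This is a minimal sketch; the prompt list and the per-prompt output folders are illustrative choices, not part of the original workflow:
```bash
# Compare a few candidate prompts on the same image (illustrative list).
for PROMPT in 'tomato' 'ripened tomato' 'red tomato'; do
    python3 demo/inference_on_a_image.py \
        --config_file 'groundingdino/config/GroundingDINO_SwinT_OGC.py' \
        --checkpoint_path 'groundingdino_swint_ogc.pth' \
        --image_path 'tomatoes_dataset/tomatoes1.jpg' \
        --text_prompt "$PROMPT" \
        --box_threshold 0.35 \
        --text_threshold 0.01 \
        --output_dir "outputs/${PROMPT// /_}"   # one output folder per prompt
done
```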
Getting Started
To begin, we will clone the GroundingDino repository from GitHub, set up the environment by installing the necessary dependencies, and download the pre-trained model weights.
Clone the repository:
```bash
!git clone https://github.com/IDEA-Research/GroundingDINO.git
```
Install the dependencies:
```bash
%cd GroundingDINO/
!pip install -r requirements.txt
!pip install -q -e .
```
Download the pre-trained model weights:
```bash
!wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
```
Inference on an image
We will now explore the object detection algorithm by applying it to a single image of tomatoes. Our initial objective is to detect all the tomatoes in the image using the text prompt “tomato”. If you want to use different category names, you can separate them with a dot (e.g., “tomato.red”).
```bash
python3 demo/inference_on_a_image.py \
--config_file 'groundingdino/config/GroundingDINO_SwinT_OGC.py' \
--checkpoint_path 'groundingdino_swint_ogc.pth' \
--image_path 'tomatoes_dataset/tomatoes1.jpg' \
--text_prompt 'tomato' \
--box_threshold 0.35 \
--text_threshold 0.01 \
--output_dir 'outputs'
```
Annotations with the ‘tomato’ prompt:
Image by Markus Spiske.
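As mentioned above, several categories can be packed into a single prompt by separating them with a dot. Here is a minimal sketch of such a multi-category call; the 'tomato . leaf' combination is only an illustration and was not part of the runs shown in this article:
```bash
python3 demo/inference_on_a_image.py \
--config_file 'groundingdino/config/GroundingDINO_SwinT_OGC.py' \
--checkpoint_path 'groundingdino_swint_ogc.pth' \
--image_path 'tomatoes_dataset/tomatoes1.jpg' \
--text_prompt 'tomato . leaf' \
--box_threshold 0.35 \
--text_threshold 0.01 \
--output_dir 'outputs'
```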
The GroundingDino model not only detects objects as categories, such as “tomato”, but also comprehends the input text, a task known as Referring Expression Comprehension (REC). Let’s change the text prompt from “tomato” to “ripened tomato” and observe the outcome:
```bash
python3 demo/inference_on_a_image.py \
--config_file 'groundingdino/config/GroundingDINO_SwinT_OGC.py' \
--checkpoint_path 'groundingdino_swint_ogc.pth' \
--image_path 'tomatoes_dataset/tomatoes1.jpg' \
--text_prompt 'ripened tomato' \
--box_threshold 0.35 \
--text_threshold 0.01 \
--output_dir 'outputs'
```
Annotations with the ‘ripened tomato’ prompt:
Image by Markus Spiske.
Remarkably, the model can understand the text and differentiate between a ‘tomato’ and a ‘ripened tomato’. It even tags partially ripened tomatoes that aren’t fully red. If our task requires tagging only fully ripened red tomatoes, we can adjust the box_threshold from the default 0.35 to 0.5:
```bash
python3 demo/inference_on_a_image.py \
--config_file 'groundingdino/config/GroundingDINO_SwinT_OGC.py' \
--checkpoint_path 'groundingdino_swint_ogc.pth' \
--image_path 'tomatoes_dataset/tomatoes1.jpg' \
--text_prompt 'ripened tomato' \
--box_threshold 0.5 \
--text_threshold 0.01 \
--output_dir 'outputs'
```
Annotations with the ‘ripened tomato’ prompt, with box_threshold = 0.5:
Image by Markus Spiske.
Generation of tagged dataset
While GroundingDino offers remarkable capabilities, it is a large and slow model. If real-time object detection is required, it is recommended to use a faster model like YOLO. However, training YOLO and similar models requires a significant amount of tagged data, which can be expensive and time-consuming to produce. Fortunately, if your data is not unique, you can use GroundingDino to tag it. For more information on efficient YOLO training, refer to my previous article [4].
The GroundingDino repository includes a script to annotate image datasets in the COCO format, which is suitable for YOLOx, among others.
```python
from demo.create_coco_dataset import main

main(image_directory='tomatoes_dataset',
     text_prompt='tomato',
     box_threshold=0.35,
     text_threshold=0.01,
     export_dataset=True,
     view_dataset=False,
     export_annotated_images=True,
     weights_path='groundingdino_swint_ogc.pth',
     config_path='groundingdino/config/GroundingDINO_SwinT_OGC.py',
     subsample=None)
```
Parameters:
– export_dataset: If set to True, the COCO format annotations will be saved in a directory named ‘coco_dataset’.
– view_dataset: If set to True, the annotated dataset will be displayed for visualization in the FiftyOne app.
– export_annotated_images: If set to True, the annotated images will be stored in a directory named ‘images_with_bounding_boxes’.
– subsample (int): If specified, only this number of images from the dataset will be annotated.
Different YOLO algorithms require different annotation formats. If you plan to train YOLOv5 or YOLOv8, you will need to export your dataset in the YOLOv5 format. Although the export type is hard-coded in the main script, you can easily change it by adjusting the dataset_type argument in create_coco_dataset.main, from fo.types.COCODetectionDataset to fo.types.YOLOv5Dataset (line 72). To maintain organization, we will also change the output directory name from ‘coco_dataset’ to ‘yolov5_dataset’. After making these changes, run create_coco_dataset.main again.
```python
if export_dataset:
    dataset.export('yolov5_dataset', dataset_type=fo.types.YOLOv5Dataset)
```
GroundingDino offers a significant advancement in object detection annotations through the use of text prompts. In this tutorial, we have explored how to utilize the model for automated labeling of images or entire datasets. However, it is crucial to manually review and verify these annotations before utilizing them in training subsequent models.
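A convenient way to perform that review is to load the exported annotations back into FiftyOne (the same library the script uses for view_dataset) and step through the predictions in its app. The following is a minimal sketch, assuming the default 'coco_dataset' export directory:
```python
import fiftyone as fo

# Load the COCO-format export produced by create_coco_dataset.
# The directory name assumes the default 'coco_dataset' output.
dataset = fo.Dataset.from_dir(
    dataset_dir="coco_dataset",
    dataset_type=fo.types.COCODetectionDataset,
)

# Open the FiftyOne app to inspect (and correct) the boxes before training.
session = fo.launch_app(dataset)
session.wait()
```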
A user-friendly Jupyter notebook containing the complete code is included for your convenience.
Want to learn more?
[1] Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection, 2023.
[2] DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection, 2022.
[3] An Open and Comprehensive Pipeline for Unified Object Grounding and Detection, 2023.
[4] The practical guide for Object Detection with YOLOv5 algorithm, by Dr. Lihi Gur Arie.