Our analysis centers on the outcomes of the WSDM Cup 2023 Toloka Visual Question Answering (VQA) Challenge. A year has passed since the competition took place, and, as anticipated, the winning machine-learning solution did not surpass the human baseline. The past year, however, has brought significant advances in generative AI, with a steady stream of articles highlighting both the limitations and the successes of OpenAI’s GPT models.
Since the fall of 2023, GPT-4 Turbo has been equipped with “vision” capabilities, allowing it to process images and take part in VQA challenges directly. We wanted to evaluate its performance against the human baseline from our Toloka challenge to see whether the gap had narrowed.
Visual Question Answering (VQA) is a cross-disciplinary field of artificial intelligence that focuses on enabling AI systems to comprehend images and respond to related questions in natural language. The applications of VQA are diverse, ranging from assisting visually impaired individuals to enhancing educational content and improving image and video search functionalities.
The development of VQA comes with a responsibility to ensure the reliability and safety of the technology’s applications. With AI systems gaining vision capabilities, there is an increased risk of misinformation, as images paired with false information can lend credibility to misleading statements.
One subfield of VQA, VQA grounding, involves not only answering visual questions but also linking those answers to specific elements within the image. This subfield has promising applications in areas such as Extended Reality (XR) headsets, educational tools, and e-commerce, where it can improve the user experience by directing attention to specific parts of an image. The Toloka VQA Challenge aimed to advance the development of VQA grounding.
In the Toloka VQA Challenge, participants were asked to identify a single object in an image and outline it with a bounding box, based on a question about the object’s function rather than its visual attributes. This setup mirrors human perception, where objects are often recognized by their utility rather than their appearance.
The challenge required integrating visual, textual, and common-sense knowledge. Our proposed baseline combined the YOLOR and CLIP models as separate visual and textual components, whereas the winning solution took a different route, adopting the Uni-Perceiver model with a ViT-Adapter for improved localization. Despite achieving a high final score, it still fell short of the crowdsourcing baseline.
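To make the baseline idea concrete, here is a minimal sketch of ranking detector proposals with CLIP against the question text. It assumes the candidate boxes come from some object detector (such as YOLOR) in (x1, y1, x2, y2) pixel format; the detector call is omitted, and the CLIP checkpoint name is an illustrative choice rather than the exact configuration we submitted.

```python
# Minimal sketch: rank detector proposals with CLIP against the question text.
# Assumes candidate boxes come from an object detector (e.g. YOLOR) in
# (x1, y1, x2, y2) format; the detector itself is not shown here.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_box(image: Image.Image, question: str,
             boxes: list[tuple[int, int, int, int]]) -> tuple[int, int, int, int]:
    """Return the candidate box whose crop best matches the question."""
    crops = [image.crop(box) for box in boxes]
    inputs = processor(text=[question], images=crops,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (num_crops, 1): similarity of each crop
    # to the single question text.
    scores = outputs.logits_per_image.squeeze(-1)
    return boxes[int(scores.argmax())]
```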
Given the significant disparity between human and AI solutions, we were eager to evaluate how GPT-4V would fare in the Toloka VQA Challenge. Since the challenge was based on the MS COCO dataset, familiar to GPT-4 from its training data, there was a possibility that GPT-4V could approach the human baseline.
Our study looked into GPT-4V’s ability to reason about basic object locations. While GPT-4V could identify the objects correctly, it struggled to provide accurate bounding box coordinates. This limitation prompted us to explore instead how well GPT-4V could discern high-level object locations in an image.
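For reference, the sketch below shows one way to query GPT-4 with vision for a bounding box through the OpenAI API. The prompt wording and the model name are assumptions for illustration, not the exact prompts used in our study.

```python
# Minimal sketch of asking GPT-4 with vision for a bounding box.
# The prompt text and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_box(image_url: str, question: str) -> str:
    """Ask the model to answer the question with a single bounding box."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"{question}\nAnswer with the bounding box of the single "
                         "relevant object as four integers: x1, y1, x2, y2."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        max_tokens=100,
    )
    return response.choices[0].message.content
```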
For the evaluation, we randomly selected 500 image-question pairs from the competition’s private test dataset and labeled the ground-truth object locations in two ways: with automated heuristics and with crowdsourced labeling. These labels served as the reference for assessing how well GPT-4V could tell where objects are located within an image.
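As an illustration of what such an automated heuristic could look like (not necessarily the exact rule we applied), the sketch below maps the center of the ground-truth bounding box onto a 3x3 grid to produce a coarse location label.

```python
# One plausible heuristic for coarse location labels (an illustrative
# assumption, not necessarily the rule used in the study): place the box
# center on a 3x3 grid and name the resulting cell.
def coarse_location(box: tuple[int, int, int, int],
                    width: int, height: int) -> str:
    """Return a label such as 'top left', 'bottom right', or 'center'."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    col = ["left", "center", "right"][min(int(3 * cx / width), 2)]
    row = ["top", "middle", "bottom"][min(int(3 * cy / height), 2)]
    return "center" if (row, col) == ("middle", "center") else f"{row} {col}"
```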