Vision-Language Models (VLMs) Evolution
Vision-Language Models (VLMs) have made significant advances recently, exemplified by the success of OpenAI’s GPT-4V. These models have shown strong performance across a wide range of vision-language tasks, including captioning, object localization, multimodal world knowledge, commonsense reasoning, visual question answering (VQA), and vision-based coding.
Previous research has demonstrated that these state-of-the-art (SOTA) VLMs excel in a wide range of vision-based reasoning and understanding tasks. They can extract text from images, comprehend and reason with visual data like tables and charts, and solve basic visual mathematical problems.
In a recent study, a team of researchers from Apple focused on probing the limitations of VLMs, particularly on challenging tasks that demand advanced vision-based deductive skills. They used Raven’s Progressive Matrices (RPMs) to assess the VLMs’ capabilities in complex visual reasoning.
RPMs are widely used to assess multi-hop relational and deductive reasoning from visual cues alone. Employing techniques such as in-context learning, self-consistency, and chain-of-thought (CoT) prompting, the team rigorously evaluated several popular VLMs on datasets including the Mensa IQ exam, IntelligenceTest, and RAVEN.
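To make the evaluation setup concrete, here is a minimal sketch of how chain-of-thought prompting with self-consistency might be applied to a single RPM puzzle image; this is not the authors’ code, and the model name, prompt wording, and answer-extraction pattern are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): one RPM puzzle image queried with a
# chain-of-thought prompt, with self-consistency implemented as a majority
# vote over several sampled completions. Model name, prompt wording, and the
# answer-extraction regex are illustrative assumptions.
import base64
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COT_PROMPT = (
    "The image shows a Raven's Progressive Matrix with one missing cell and "
    "eight candidate answers labeled 1-8. Reason step by step about the row "
    "and column patterns, then finish with 'Answer: <number>'."
)


def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def ask_once(image_b64: str, model: str = "gpt-4o") -> str | None:
    """Run one chain-of-thought query and extract the final answer choice."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.7,  # nonzero temperature so the sampled chains can differ
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": COT_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    text = response.choices[0].message.content or ""
    match = re.search(r"Answer:\s*(\d)", text)
    return match.group(1) if match else None


def self_consistent_answer(image_path: str, num_samples: int = 5) -> str | None:
    """Self-consistency: sample several CoT answers and majority-vote over them."""
    image_b64 = encode_image(image_path)
    votes = [a for a in (ask_once(image_b64) for _ in range(num_samples)) if a]
    return Counter(votes).most_common(1)[0][0] if votes else None


if __name__ == "__main__":
    print(self_consistent_answer("rpm_puzzle.png"))  # hypothetical puzzle image
```

Self-consistency here simply means sampling several reasoning chains at nonzero temperature and taking a majority vote over the final answers, rather than trusting a single greedy decode.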
The team discovered a significant gap between the impressive performance of Large Language Models (LLMs) in text-based reasoning tasks and the VLMs’ proficiency in visual deductive reasoning. They highlighted that strategies effective for enhancing LLM performance do not necessarily translate well to visual reasoning problems. The study revealed that VLMs struggle primarily due to difficulties in identifying and understanding abstract patterns in RPM samples.
Key Contributions
- Systematic Evaluation Approach: The team developed a systematic approach to assess Vision-Language Models (VLMs) on Raven’s Progressive Matrices (RPM) problems, utilizing the Mensa IQ exam, IntelligenceTest, and RAVEN datasets for comprehensive evaluation.
- Inference-Time Techniques: By applying inference-time techniques popularized with LLMs, such as self-consistency and in-context learning, the team probed the potential of VLMs and found that these strategies do not transfer as effectively to visual deductive reasoning.
- Performance Analysis: A detailed analysis of VLM performance, broken down into perception, inference, and hypothesis testing, revealed perception as a significant bottleneck in current VLMs, with specific failure modes identified through a case study of GPT-4V.
- Identified Issues: The team identified and examined several operational issues in current VLMs, such as overconfidence, sensitivity to prompt design, and limitations in utilizing in-context examples effectively. They suggested structured prompts as a potential enhancement strategy; a minimal sketch of such a prompt follows this list.
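As an illustration of the structured-prompt idea mentioned above, the following is a minimal sketch, assuming a 3x3 matrix with eight candidate answers; it is not the paper’s actual prompt, and the step wording is an illustrative assumption.

```python
# Minimal sketch (an assumption, not the paper's exact prompt): a structured
# prompt that separates perception from inference and hypothesis testing, so
# the model must describe every cell before it proposes a rule. Wording and
# step names are illustrative.
STRUCTURED_RPM_PROMPT = """\
You are solving a 3x3 Raven's Progressive Matrix; the bottom-right cell is missing.

Step 1 - Perception:
For each of the 8 visible cells, list its shapes and their count, size, shading,
and orientation, one line per cell (e.g. "Row 2, Col 3: two small shaded triangles").

Step 2 - Pattern inference:
State the rule each row and each column follows, using only your Step 1 descriptions.

Step 3 - Hypothesis testing:
For each candidate answer 1-8, say whether it satisfies the rule, then give your
final choice as "Answer: <number>".
"""
```

Such a prompt could be supplied as the text portion of the image query in the earlier sketch, forcing the model to commit to explicit cell descriptions before it hypothesizes a rule.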
For more details on this research, please refer to the Paper. All credit for this research goes to the researchers involved in this project.
About the Author: Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a Data Science enthusiast with strong analytical and critical thinking skills, passionate about acquiring new skills, leading teams, and managing tasks efficiently.