Introducing ASPIRE for selective prediction in LLMs – Google Research Blog

January 18, 2024
in AI Technology

Posted by Jiefeng Chen, Student Researcher, and Jinsung Yoon, Research Scientist, Cloud AI Team

In the ever-changing world of artificial intelligence, large language models (LLMs) have revolutionized how we interact with machines, pushing natural language understanding and generation to new heights. Using LLMs in high-stakes decision-making applications remains challenging, however, because their predictions are uncertain: traditional LLMs lack a mechanism for assigning confidence scores to their responses, which makes it difficult to distinguish correct answers from incorrect ones.

Selective prediction aims to address this issue by enabling an LLM to output an answer together with a selection score that indicates the probability of the answer being correct; answers whose score falls below a chosen threshold can then be rejected rather than acted upon. This makes it easier to judge how much to rely on an LLM in a given application. Previous research has attempted to enable selective prediction in LLMs using heuristic prompts, but such approaches may not work well on challenging question answering tasks.
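
To make the idea concrete, here is a minimal sketch of the accept-or-abstain decision that a selection score enables. The class name, function name, and threshold are illustrative assumptions, not taken from the paper:

from dataclasses import dataclass

@dataclass
class Prediction:
    answer: str
    selection_score: float  # higher means the answer is more likely correct

def selective_predict(pred: Prediction, threshold: float = 0.8):
    """Return the answer only if its selection score clears the threshold;
    otherwise abstain (e.g., defer to a human or a fallback system)."""
    if pred.selection_score >= threshold:
        return pred.answer
    return None  # abstain

print(selective_predict(Prediction("Paris", 0.93)))  # accepted -> "Paris"
print(selective_predict(Prediction("Lyon", 0.41)))   # rejected -> None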

In our paper “Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs,” presented at Findings of EMNLP 2023, we introduce ASPIRE, a framework designed to enhance the selective prediction capabilities of LLMs. ASPIRE fine-tunes LLMs on question answering tasks and trains them to evaluate the correctness of their generated answers. This allows LLMs to output answers along with confidence scores. Our experimental results show that ASPIRE outperforms state-of-the-art selective prediction methods on various question answering datasets.

The ASPIRE framework involves three stages: task-specific tuning, answer sampling, and self-evaluation learning. Task-specific tuning fine-tunes the LLM to improve its prediction performance on a specific task. Answer sampling generates different answers for each training question, and self-evaluation learning trains the LLM to distinguish between correct and incorrect answers.
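
A rough outline of how the three stages fit together is sketched below. The helper functions are toy stand-ins rather than the authors' implementation; they only mirror the flow described above:

def task_specific_tuning(model, train_set):
    """Stage 1: adapt the model to the target QA task (placeholder; in
    practice this learns a small set of tunable prompt parameters)."""
    return model

def answer_sampling(model, question, k=3):
    """Stage 2: generate k candidate answers for a training question
    (toy strings stand in for real model generations)."""
    return [f"candidate answer {i}" for i in range(k)]

def self_evaluation_learning(model, question, candidates, reference):
    """Stage 3: label each candidate as correct or incorrect against the
    reference answer; these labels would then train the self-evaluator."""
    return [(c, c == reference) for c in candidates]

train_set = [("What is the capital of France?", "Paris")]
model = object()  # placeholder for an actual LLM
model = task_specific_tuning(model, train_set)
for question, reference in train_set:
    candidates = answer_sampling(model, question)
    labeled = self_evaluation_learning(model, question, candidates, reference)
    print(question, labeled)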

To implement the ASPIRE framework, we use soft prompt tuning, a mechanism for conditioning frozen language models with learnable (soft) prompts so that they perform specific tasks more effectively. We train two sets of adaptable parameters, θp and θs, via soft prompt tuning, with θp used for task-specific tuning and θs for self-evaluation learning, and we use beam search decoding to generate answers. The likelihood of a generated answer is then combined with the learned self-evaluation score to compute the selection score used for selective prediction.
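
One natural way to combine the two signals is a weighted sum of log-probabilities. The sketch below illustrates that idea; the weight alpha and the exact form of the combination are assumptions for illustration, not necessarily the formula used in the paper:

import math

def selection_score(answer_log_likelihood: float,
                    self_eval_prob: float,
                    alpha: float = 0.5) -> float:
    """Blend the answer's log-likelihood under the task-tuned model with the
    learned probability that the answer is correct (self-evaluation).
    alpha balances the two terms; its value here is an assumption."""
    return alpha * answer_log_likelihood + (1.0 - alpha) * math.log(self_eval_prob)

# A fluent answer the model also judges likely correct scores higher than an
# equally fluent answer it judges dubious.
print(selection_score(answer_log_likelihood=-1.2, self_eval_prob=0.9))
print(selection_score(answer_log_likelihood=-1.2, self_eval_prob=0.3))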

Our evaluation of ASPIRE on question answering datasets shows improved accuracy compared to baseline selective prediction methods. The results also suggest that smaller LLMs, when strategically adapted, can match or even surpass much larger models on selective prediction tasks, highlighting ASPIRE's effectiveness in improving LLM performance.

ASPIRE represents a shift in the landscape of LLMs, showing that a language model's capacity is not the sole determinant of its performance. By strategically adapting LLMs, we can improve their precision and confidence even in smaller models, paving the way for predictions that are more reliable and self-aware.

In conclusion, ASPIRE is a vision of a future where LLMs can be trusted partners in decision-making. By enhancing selective prediction performance, we move closer to realizing the full potential of AI in critical applications. We invite the community to build upon our research and join us in this exciting journey towards creating a more reliable and self-aware AI.

Acknowledgments: We would like to acknowledge the contributions of Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, and Somesh Jha.


