News PouroverAI
Zero-shot adaptive prompting of large language models – Google Research Blog

November 2, 2023
in AI Technology



Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team

Recent advances in large language models (LLMs) have shown great promise in solving general problems from only a few examples, or even without any task-specific training. This is particularly impressive in the few-shot setup, where an LLM is given just a handful of question-answer demonstrations before being tested. The zero-shot setup is more challenging still: the LLM receives the test question directly, with no prior examples. While the few-shot setup reduces the amount of data needed to adapt a model to a specific task, constructing sample prompts can still be difficult. For tasks like summarizing long articles or answering medical questions, writing good sample answers is hard. In such cases, models with strong zero-shot performance are valuable because they require no manual prompt generation. However, zero-shot performance is typically weaker: lacking guidance, the LLM is more likely to produce incorrect outputs.

In our paper “Better Zero-shot Reasoning with Self-Adaptive Prompting” at ACL 2023, we introduce Consistency-Based Self-Adaptive Prompting (COSP) to address this issue. COSP is an automatic zero-shot prompting method that selects and constructs pseudo-demonstrations for LLMs using only unlabeled samples and the model’s own predictions. With COSP, we bridge the performance gap between zero-shot and few-shot setups while maintaining the generality of zero-shot prompting. In our subsequent paper “Universal Self-Adaptive Prompting” (USP) accepted at EMNLP 2023, we extend this idea to a wide range of natural language understanding (NLU) and natural language generation (NLG) tasks and demonstrate its effectiveness.
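The core idea of prompting with pseudo-demonstrations can be sketched as assembling self-generated (question, rationale, answer) triples into a few-shot-style prompt. This is a minimal illustration, not the paper's actual implementation; the function name and prompt template are hypothetical:

```python
def build_prompt(pseudo_demos, test_question):
    """Prepend self-generated (question, rationale, answer) triples to the
    test question, mimicking a few-shot prompt without any labeled data."""
    parts = []
    for question, rationale, answer in pseudo_demos:
        parts.append(f"Q: {question}\nA: {rationale} The answer is {answer}.")
    # The test question itself still uses a zero-shot chain-of-thought trigger.
    parts.append(f"Q: {test_question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_prompt(
    [("What is 2 + 2?", "2 plus 2 equals 4.", "4")],
    "What is 3 + 5?",
)
```

The resulting string can be sent to the LLM in a second pass; the pseudo-demonstrations come from the model's own first-pass zero-shot outputs.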

Prompting LLMs with their own outputs rests on two observations: LLMs benefit from demonstrations, and they have at least some zero-shot ability. We must be cautious, however, because zero-shot solutions can be imperfect and may mislead the model. Our experiments show that adding correct demonstrations leads to correct solutions, while adding incorrect demonstrations leads to incorrect answers. The self-generated demonstrations therefore need to be selected carefully for reliability.

COSP leverages the observation that confident and consistent predictions are more likely to be correct. We propose using the model’s confidence in its output as a proxy for correctness. By considering high-confidence outputs and their inputs as pseudo-demonstrations, we can select robust self-generated demonstrations. We use zero-shot chain-of-thought (CoT) prompting to generate a range of possible rationales and answers, and we measure the uncertainty of the answers through entropy. Answers that have high self-consistency and certainty are more likely to be correct and are selected as pseudo-demonstrations. We use a scoring function that combines consistency, lack of repetition, and diversity to select the best self-generated demonstrations.
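The consistency-based selection step can be sketched as follows: sample several zero-shot answers per question, measure the entropy of the answer distribution, and keep the most consistent (lowest-entropy) questions, paired with their majority answer, as pseudo-demonstrations. This is a simplified sketch (the paper's full scoring function also accounts for repetition and diversity, which are omitted here); the function names are hypothetical:

```python
from collections import Counter
import math

def answer_entropy(answers):
    """Empirical entropy (in nats) of a list of sampled answers;
    low entropy means the model is self-consistent."""
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_pseudo_demos(questions, sampled_answers, k=3):
    """Rank questions by the entropy of their sampled answers and keep the
    k most consistent ones, paired with their majority answer."""
    scored = []
    for question, answers in zip(questions, sampled_answers):
        majority = Counter(answers).most_common(1)[0][0]
        scored.append((answer_entropy(answers), question, majority))
    scored.sort(key=lambda t: t[0])  # lowest entropy (most consistent) first
    return [(q, a) for _, q, a in scored[:k]]

# Toy example: three questions, five sampled answers each.
demos = select_pseudo_demos(
    ["Q1", "Q2", "Q3"],
    [["4", "4", "4", "4", "5"],   # mostly consistent
     ["7", "8", "9", "7", "6"],   # inconsistent
     ["2", "2", "2", "2", "2"]],  # perfectly consistent
    k=2,
)
# demos -> [("Q3", "2"), ("Q1", "4")]
```

In practice the sampled answers would come from multiple stochastic decodes of the same zero-shot CoT prompt, and the selected pairs would be prepended to the test question.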

COSP focuses on question-answering tasks with CoT prompting, where self-consistency can be easily measured. However, it may be challenging for other tasks like open-ended question-answering or generative tasks. To address this, we introduce USP, which generalizes our approach to other NLP tasks. For classification tasks, we compute the entropy of the logit distribution to measure uncertainty. For short-form generation tasks, we use the same procedure as COSP. For long-form generation tasks, we use an overlap metric based on the average pairwise ROUGE score between different outputs to the same query.
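For long-form generation, the overlap-based confidence signal can be sketched as the average pairwise ROUGE score across several sampled outputs for the same query. The snippet below uses a simple unigram-overlap F1 as a stand-in for ROUGE-1 (the real metric adds stemming and other normalization, and the paper does not specify this exact implementation):

```python
from collections import Counter
from itertools import combinations

def rouge1_f1(a, b):
    """Unigram-overlap F1 between two texts: a simplified ROUGE-1."""
    ca, cb = Counter(a.split()), Counter(b.split())
    overlap = sum((ca & cb).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cb.values())
    recall = overlap / sum(ca.values())
    return 2 * precision * recall / (precision + recall)

def avg_pairwise_overlap(outputs):
    """Average pairwise overlap across sampled outputs for one query;
    higher means the samples agree, i.e. the model is more confident."""
    pairs = list(combinations(outputs, 2))
    return sum(rouge1_f1(a, b) for a, b in pairs) / len(pairs)

consistent = ["the cat sat on the mat"] * 3
divergent = ["the cat sat on the mat",
             "a dog ran in the park",
             "stocks rallied on tuesday"]
```

Queries whose sampled outputs score high on this overlap metric are the ones whose outputs would be trusted as pseudo-demonstrations.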

In our experiments, COSP significantly outperforms the standard zero-shot baseline, and USP improves zero-shot performance across a wide range of tasks. We compare against baselines using self-consistency and show that USP is competitive with prompting using golden examples. Our results demonstrate the effectiveness of COSP and USP in improving zero-shot reasoning and general NLP tasks.


