Posted by Jiefeng Chen, Student Researcher, and Jinsung Yoon, Research Scientist, Cloud AI Team
In the fast-moving world of artificial intelligence, large language models (LLMs) have revolutionized the way we interact with machines, pushing the boundaries of natural language understanding and generation. However, using LLMs in high-stakes decision-making applications remains challenging because of the uncertainty in their predictions. Traditional LLMs lack a mechanism for assigning a confidence score to a response, making it difficult to distinguish correct answers from incorrect ones.
Selective prediction aims to address this issue by enabling LLMs to output an answer along with a selection score that indicates the probability of the answer being correct. This allows us to better understand the reliability of LLMs in various applications. Previous research has attempted to enable selective prediction in LLMs using heuristic prompts, but these approaches may not work well in challenging question answering tasks.
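To make this concrete, the snippet below shows a minimal selective prediction wrapper: the model answers only when its selection score clears a threshold, and abstains otherwise. The `generate_with_score` callable and the threshold value are illustrative assumptions, not part of ASPIRE itself.

```python
from typing import Callable, Optional, Tuple


def selective_predict(
    question: str,
    generate_with_score: Callable[[str], Tuple[str, float]],  # hypothetical: returns (answer, selection score)
    threshold: float = 0.8,  # illustrative abstention threshold
) -> Tuple[Optional[str], float]:
    """Answer only when the selection score clears the threshold; otherwise abstain."""
    answer, score = generate_with_score(question)
    if score >= threshold:
        return answer, score   # confident enough to surface the answer
    return None, score         # abstain and defer to a human or fallback system
```

Raising the threshold trades coverage (how often the model answers) for accuracy on the answers it does give, which is exactly the trade-off selective prediction is designed to control.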
In our paper “Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs,” presented at Findings of EMNLP 2023, we introduce ASPIRE, a framework designed to enhance the selective prediction capabilities of LLMs. ASPIRE fine-tunes LLMs on question answering tasks and trains them to evaluate the correctness of their generated answers. This allows LLMs to output answers along with confidence scores. Our experimental results show that ASPIRE outperforms state-of-the-art selective prediction methods on various question answering datasets.
The ASPIRE framework involves three stages: task-specific tuning, answer sampling, and self-evaluation learning. Task-specific tuning fine-tunes the LLM to improve its prediction performance on a specific task. Answer sampling generates different answers for each training question, and self-evaluation learning trains the LLM to distinguish between correct and incorrect answers.
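The following sketch outlines how these three stages could fit together, assuming hypothetical helpers `tune_soft_prompt`, `sample_answers`, and `is_correct` (for example, exact match against the reference answer); it mirrors the stage structure described above rather than the paper's actual implementation.

```python
from typing import Callable, List, Sequence, Tuple


def aspire_train(
    tune_soft_prompt: Callable[[Sequence, str], object],      # hypothetical: returns learned soft-prompt parameters
    sample_answers: Callable[[object, str, int], List[str]],  # hypothetical: k candidate answers from the tuned LLM
    is_correct: Callable[[str, str], bool],                   # e.g., exact match against the reference answer
    train_set: Sequence[Tuple[str, str]],                     # (question, reference answer) pairs
    k: int = 4,
):
    # Stage 1: task-specific tuning of a soft prompt (theta_p) for answer generation.
    theta_p = tune_soft_prompt(train_set, "answer_generation")

    # Stage 2: answer sampling -- generate candidates for each training question
    # and label each one as correct or incorrect.
    self_eval_examples = []
    for question, reference in train_set:
        for candidate in sample_answers(theta_p, question, k):
            self_eval_examples.append((question, candidate, is_correct(candidate, reference)))

    # Stage 3: self-evaluation learning of a second soft prompt (theta_s) that
    # teaches the LLM to predict the correctness labels collected in stage 2.
    theta_s = tune_soft_prompt(self_eval_examples, "self_evaluation")
    return theta_p, theta_s
```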
To implement the ASPIRE framework, we use soft prompt tuning, a mechanism for conditioning frozen language models to perform specific tasks more effectively. We train adaptable parameters (θp and θs) with soft prompt tuning and use beam search decoding to generate answers. We then combine the likelihood of the generated answer with the learned self-evaluation score to compute a selection score for selective prediction.
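One natural way to combine these two signals is a weighted sum in log space, sketched below; the mixing weight `alpha` and this particular form are illustrative assumptions, not necessarily the exact weighting used in the paper.

```python
import math


def selection_score(
    answer_log_likelihood: float,  # log-likelihood of the answer from beam search decoding
    self_eval_prob: float,         # learned probability that the answer is correct, in (0, 1]
    alpha: float = 0.5,            # mixing weight; an assumption chosen for illustration
) -> float:
    """Combine generation likelihood and self-evaluation into a single selection score."""
    return (1.0 - alpha) * answer_log_likelihood + alpha * math.log(self_eval_prob)
```

Answers whose selection score falls below a chosen threshold are then withheld, as in the selective prediction wrapper sketched earlier.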
Our evaluation of ASPIRE on question answering datasets shows improved selective prediction performance compared to baseline methods. The results also suggest that smaller LLMs with strategic adaptations can match or surpass the selective prediction performance of much larger models in some scenarios, highlighting the effectiveness of ASPIRE.
ASPIRE represents a shift in how we think about LLMs, showing that the capacity of a language model is not the sole determinant of its performance. By strategically adapting LLMs, we can improve both the quality of their answers and the reliability of their confidence estimates, even for smaller models. ASPIRE paves the way for LLMs that make more reliable and self-aware predictions, making them trustworthy partners in decision-making.
In conclusion, ASPIRE points toward a future in which LLMs can be trusted partners in decision-making. By enhancing selective prediction performance, we move closer to realizing the full potential of AI in critical applications. We invite the community to build upon our research and join us in this exciting journey toward more reliable and self-aware AI.
Acknowledgments: We would like to acknowledge the contributions of Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, and Somesh Jha.