Posted by Han Zhou, Student Researcher, and Subhrajit Roy, Senior Research Scientist, Google Research
Prompting large language models (LLMs) has become an efficient learning paradigm for adapting LLMs to a new task by conditioning on human-designed instructions. However, the predictions of LLMs are highly sensitive to, and biased by, the choice of templates, label spaces, and demonstration examples, which can result in unexpected performance degradation and create barriers to robust LLM applications. To address this problem, calibration methods have been developed to mitigate these biases and improve LLM performance.
In our study, “Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering,” we analyze existing calibration methods and propose a new method called Batch Calibration (BC). BC is a simple yet effective technique that mitigates bias, unifies prior approaches, and addresses the limitations of previous methods. It is zero-shot, self-adaptive, and incurs minimal additional costs. We validate the effectiveness of BC using PaLM 2 and CLIP models and demonstrate state-of-the-art performance across various natural language understanding and image classification tasks.
Motivated by the need for practical guidelines in calibration, we first analyze the limitations of current methods and frame calibration as an unsupervised decision boundary learning problem. We find that uncalibrated LLMs can be unfairly biased toward predicting certain classes given the context, and that linear decision boundaries tend to be more robust and generalizable than nonlinear ones. Moreover, relying on content-free inputs (e.g., "N/A") to estimate the contextual bias is not always optimal and can itself introduce additional bias.
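To make the content-free baseline concrete, below is a minimal sketch of contextual calibration, which estimates the contextual prior from the model's output on a placeholder input such as "N/A" rather than from real test samples. The function name, its inputs, and the example numbers are illustrative assumptions, not the published implementation.

```python
import numpy as np

def contextual_calibrate(test_probs: np.ndarray,
                         content_free_probs: np.ndarray) -> np.ndarray:
    """Rescale test probabilities by the prior measured on a content-free input.

    Args:
      test_probs: (num_classes,) probabilities p(y | x, context) for a test input.
      content_free_probs: (num_classes,) probabilities p(y | "N/A", context).

    Returns:
      Calibrated class scores; the argmax gives the calibrated prediction.
    """
    # Dividing by the content-free prior is equivalent to applying the
    # diagonal weight matrix W = diag(content_free_probs)^-1.
    return test_probs / (content_free_probs + 1e-12)

# Example: the context makes the content-free input land mostly on class 0.
cf = np.array([0.8, 0.2])
test = np.array([0.6, 0.4])
print(contextual_calibrate(test, cf))  # class 1 wins after calibration
```

Note the caveat discussed above: if the content-free input is itself scored unevenly by the model, that skew is folded into the estimated prior and propagates to every test prediction.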
Based on these findings, we design BC as a zero-shot, inference-only, and generalizable calibration technique with negligible computational cost. The key component of BC is accurately estimating the contextual bias: BC uses a linear decision boundary and estimates the bias for each class from a batch of inputs. The calibrated probability is then obtained by dividing the output probability by this contextual prior. BC requires no additional inputs to estimate the bias and can be computed either once all test samples have been seen or on the fly.
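The following is a minimal sketch of this idea, assuming the model's per-class probabilities for a batch of test inputs are already available; the array names and the optional re-normalization step are illustrative choices rather than the exact published implementation.

```python
import numpy as np

def batch_calibrate(probs: np.ndarray) -> np.ndarray:
    """Calibrate class probabilities from a batch of test inputs.

    Args:
      probs: array of shape (batch_size, num_classes) holding the model's
        output probabilities p(y | x_i, context) for each test input x_i.

    Returns:
      Calibrated scores of shape (batch_size, num_classes); the argmax per
      row gives the calibrated prediction.
    """
    # Estimate the contextual prior p(y | context) for each class as the
    # mean probability over the batch of unlabeled test inputs.
    contextual_prior = probs.mean(axis=0, keepdims=True)  # (1, num_classes)

    # Divide each prediction by the contextual prior to remove the bias.
    calibrated = probs / (contextual_prior + 1e-12)

    # Optional: re-normalize each row to sum to 1 (the argmax is unchanged).
    return calibrated / calibrated.sum(axis=-1, keepdims=True)

# Example: a context that pushes every prediction toward class 0.
probs = np.array([[0.70, 0.30],
                  [0.60, 0.40],
                  [0.55, 0.45]])
print(batch_calibrate(probs).argmax(axis=-1))  # borderline rows flip to class 1
```

Because the prior is a running mean over test inputs, the same computation can be maintained incrementally for the on-the-fly setting mentioned above.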
In our experiments, we evaluate BC on a range of natural language understanding and image classification tasks. BC consistently outperforms existing methods, including contextual calibration and prototypical calibration, across different LLMs and task variants. It improves performance across varying numbers of few-shot input-label pairs and remains stable as the number of shots increases. We also visualize the decision boundaries to illustrate the effectiveness of BC compared to other methods.
We further analyze the robustness of BC with respect to prompt engineering design choices, such as the order of in-context examples, the prompt template, and the label space. BC is more robust than prior methods and achieves comparable performance across different example choices, orderings, and prompt templates. It also remains effective even with unconventional label space designs.
Lastly, we study the impact of batch size on BC performance and find that BC is remarkably sample efficient compared to other methods: it achieves strong performance with only a small number of unlabeled samples, which makes prompt engineering easier.
In conclusion, we provide a unified analysis of calibration methods, propose BC as an effective technique, and demonstrate its superiority over existing methods. BC improves the robustness of LLMs and makes prompt engineering easier. It is applicable to both language-only and vision-language tasks and achieves state-of-the-art performance in various scenarios.