Alignment is a central concern in the development of text-based assistants: ensuring that the content they generate in response to user queries is accurate, coherent, and harmless, in keeping with human values. The alignment pipeline involves three stages: feedback acquisition, alignment algorithms, and model evaluation. While previous efforts have concentrated on alignment algorithms, this study examines the nuances of feedback acquisition, specifically comparing ratings and rankings protocols, and highlights a significant consistency challenge that existing literature has largely overlooked.
Alignment algorithms such as PPO, DPO, and PRO have been extensively explored, whereas work on feedback acquisition has concentrated on fine-grained, dense protocols, which are challenging and costly to collect. This study instead analyzes the impact of two sparse feedback protocols, ratings (absolute scores for individual responses) and rankings (relative preferences between pairs of responses), on alignment. It examines the problem of feedback inconsistency and converts ratings data into rankings data so the two protocols can be compared directly. The analysis reveals consistency issues in both human and AI feedback, showing that a substantial portion of the feedback yields contradictory preferences depending on the protocol employed.
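The conversion and consistency check can be illustrated with a minimal sketch (not the paper's code): a rating-implied preference is derived by comparing the absolute scores of two responses to the same prompt, then checked against the preference collected directly under the rankings protocol. The field names (`rating_a`, `rating_b`, `ranking`) are illustrative assumptions.

```python
# Minimal sketch: convert ratings feedback into rankings feedback and
# measure consistency between the two protocols. Field names are assumptions.

def rating_to_ranking(rating_a: float, rating_b: float) -> str:
    """Derive a pairwise preference from two absolute ratings."""
    if rating_a > rating_b:
        return "a"
    if rating_b > rating_a:
        return "b"
    return "tie"

def consistency_rate(examples: list[dict]) -> float:
    """Fraction of pairs where the rating-implied preference matches the
    preference collected directly under the rankings protocol."""
    agreements = 0
    for ex in examples:
        implied = rating_to_ranking(ex["rating_a"], ex["rating_b"])
        agreements += int(implied == ex["ranking"])
    return agreements / len(examples)

feedback = [
    {"rating_a": 6, "rating_b": 4, "ranking": "a"},  # consistent
    {"rating_a": 5, "rating_b": 5, "ranking": "b"},  # tie vs. clear preference
    {"rating_a": 3, "rating_b": 6, "ranking": "a"},  # contradictory
]
print(f"consistency: {consistency_rate(feedback):.2f}")
```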
To probe the feedback inconsistency problem, the study compares the converted ratings data with the rankings data and measures the agreement between them, showing that annotators' perceived response quality varies with the feedback acquisition protocol and that this variation propagates through the alignment pipeline. It then examines how the choice of protocol affects alignment and model evaluation: reward models are trained on ratings and on rankings feedback, and Best-of-n policies built on these reward models are evaluated. The results show that policies based on rankings feedback outperform the base language model, demonstrating improved alignment.
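A Best-of-n policy itself is simple to express; the sketch below assumes `generate` and `reward` callables standing in for the base language model and the trained reward model, and is not the paper's implementation.

```python
# Minimal sketch of a Best-of-n policy: sample n candidates from a base
# language model and keep the one preferred by a reward model trained on
# either ratings or rankings feedback. `generate` and `reward` are assumed
# callables, not a specific library API.

from typing import Callable

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 16) -> str:
    """Draw n samples and return the candidate with the highest reward score."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: reward(prompt, response))
```

Because the only difference between the compared policies is the feedback protocol used to train the reward model, any gap between them can be attributed to the protocol rather than to the alignment algorithm.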
The study also uncovers an evaluation inconsistency phenomenon: the feedback protocol used during evaluation tends to favor the policy that was aligned with that same protocol. This inconsistency extends to evaluations involving AI feedback, indicating that annotators perceive response quality differently under different feedback protocols. These findings underscore the significant implications of feedback acquisition protocols throughout the alignment pipeline.
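One way to surface this effect, sketched below under assumed data, is to compare the two aligned policies head to head under each evaluation protocol and check whether each policy's win rate peaks under the protocol it was trained with. The verdict lists are hypothetical, not results from the paper.

```python
# Minimal sketch of checking for evaluation inconsistency between a
# ratings-aligned and a rankings-aligned policy. Data is illustrative only.

def win_rate(verdicts: list[str], policy: str) -> float:
    """Fraction of head-to-head judgments won by `policy`."""
    return sum(v == policy for v in verdicts) / len(verdicts)

# Hypothetical verdicts on the same response pairs, judged under each protocol;
# each entry names the policy whose response the annotator preferred.
judged_with_ratings = ["ratings", "ratings", "rankings", "ratings"]
judged_with_rankings = ["rankings", "rankings", "ratings", "rankings"]

print("ratings-protocol eval :", win_rate(judged_with_ratings, "ratings"))
print("rankings-protocol eval:", win_rate(judged_with_rankings, "rankings"))
# If each policy wins mainly under the protocol it was aligned with,
# the evaluation outcome depends on the protocol, i.e. it is inconsistent.
```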
In conclusion, the study emphasizes the importance of meticulous data curation within sparse feedback protocols and the repercussions that the choice of feedback protocol can have on evaluation outcomes. It suggests exploring richer forms of feedback beyond absolute and relative preferences to further improve alignment, and it acknowledges limitations that future work should address to develop more robust and broadly applicable alignment methodologies.