In recent years, there has been enormous development in pre-trained large language models (LLMs). These LLMs are trained to predict the next token given the previous tokens and, when given a suitable prompt, can solve a variety of natural language processing (NLP) tasks. However, the next-token prediction objective deviates from the fundamental aim of producing outputs that humans prefer.
To address this gap, Reinforcement Learning from Human Feedback (RLHF) was introduced: a pipeline that collects pairwise human preferences, trains a reward model (RM) to capture those preferences, and then uses Reinforcement Learning (RL) to produce a model whose outputs humans prefer (a minimal sketch of the pairwise reward-model loss follows the list below). Reproducing OpenAI’s RLHF pipeline in the open-source community has proven challenging for several reasons:
- RL and RLHF have many subtle implementation details that can significantly impact training stability.
- The models are challenging to evaluate; for example, judging the quality of an 800-line generated code snippet for a coding task is difficult.
- They take a long time to train, which slows iteration.
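For context on the first stage of that pipeline, the reward model is typically trained with a pairwise (Bradley–Terry style) loss that pushes the score of the human-preferred response above the rejected one. The snippet below is a minimal, generic sketch of that loss; the function name and example values are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: maximize the margin between the reward
    assigned to the human-preferred response and the rejected one."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with hypothetical scalar rewards for a batch of 3 comparison pairs.
chosen = torch.tensor([1.2, 0.3, 0.8])
rejected = torch.tensor([0.4, 0.5, -0.1])
print(reward_model_loss(chosen, rejected).item())
```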
Researchers from Hugging Face, Mila, and Fuxi AI Lab have presented a high-precision reproduction of the RLHF scaling behaviors reported in OpenAI’s seminal TL;DR summarization work. They meticulously built an RLHF pipeline, documenting more than 20 key implementation details, and adopted a unified learning rate across SFT, RM, and PPO training to enhance reproducibility.
They used the transformers library’s implementation of the Pythia models together with DeepSpeed’s ZeRO Stage 2 to fit the models into GPU memory; for 6.9B PPO training, they also offloaded the reference policy and reward model to the CPU. Dropout layers were turned off during training to ensure reproducibility.
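As a rough illustration of this setup (not the authors’ exact code), one can load a Pythia checkpoint with transformers and zero out every dropout layer; the DeepSpeed ZeRO Stage 2 settings would be supplied separately via a deepspeed config. The checkpoint name below is a hypothetical choice among the 1B, 2.8B, and 6.9B sizes the paper trains.

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choice of checkpoint; the paper trains 1B, 2.8B, and 6.9B Pythia models.
model_name = "EleutherAI/pythia-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Disable every dropout layer so repeated runs are deterministic
# (one straightforward way to achieve what the paper describes).
for module in model.modules():
    if isinstance(module, nn.Dropout):
        module.p = 0.0
```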
The PPO implementation optimizes the RLHF objective, leading to a significant increase in the total score. Their best 6.9B model is preferred by GPT nearly 80% of the time. For the 1B model, the average preference consistency across multiple random runs is close to 0.4, indicating that the 1B model captures a different set of preferences. PPO models outperform SFT models across all summary lengths.
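The RLHF objective that PPO optimizes is commonly formulated as maximizing the reward-model score while penalizing KL divergence from the reference policy. The sketch below shows one common way to assemble per-token rewards for PPO under that objective; the coefficient, shapes, and values are illustrative assumptions, not the paper’s exact implementation.

```python
import torch

def kl_penalized_rewards(rm_score: torch.Tensor,
                         policy_logprobs: torch.Tensor,
                         ref_logprobs: torch.Tensor,
                         beta: float = 0.05) -> torch.Tensor:
    """Per-token PPO rewards: a KL penalty against the reference policy at
    every generated token, plus the reward-model score added on the last
    token. `beta` is a hypothetical coefficient, not the paper's value."""
    rewards = -beta * (policy_logprobs - ref_logprobs)
    rewards[..., -1] += rm_score
    return rewards

# Toy usage: one sequence of 4 generated tokens.
policy_lp = torch.tensor([[-1.0, -0.5, -2.0, -0.7]])
ref_lp = torch.tensor([[-1.1, -0.6, -1.5, -0.9]])
print(kl_penalized_rewards(torch.tensor([0.8]), policy_lp, ref_lp))
```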
In conclusion, researchers from Hugging Face, Mila, and Fuxi AI Lab have reproduced, with high precision, the RLHF scaling behaviors reported in OpenAI’s seminal TL;DR summarization work. Their RLHF-trained Pythia models show significant gains in response quality that scale with model size; notably, the 2.8B and 6.9B models outperform OpenAI’s released 1.3B checkpoint, underscoring the importance of model size in achieving strong results.
Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.