With the rapid advancements in Artificial Intelligence, Large Language Models (LLMs) are constantly improving through ongoing research. These models undergo self-supervised pre-training on large datasets, enabling them to excel in tasks such as question answering, content generation, text summarization, and code completion.
The development of open-source Large Language Models is progressing quickly. However, existing studies on scaling laws have yielded inconclusive results, creating uncertainty about how to scale LLMs efficiently. To address this challenge, researchers at DeepSeek AI have released a detailed study on scaling laws, providing insights into the scaling dynamics of large models, particularly in the popular open-source 7B and 67B configurations.
The team has introduced the DeepSeek LLM project, an initiative focused on advancing open-source language models based on established scaling rules. To support the pre-training stage, the team has curated a large dataset of 2 trillion tokens, which is continually growing to meet evolving needs. Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) have then been applied to the DeepSeek LLM Base models, resulting in the DeepSeek Chat models.
DeepSeek LLM is a sophisticated language model with 67 billion parameters. It has been trained from scratch using a substantial dataset of two trillion tokens in both Chinese and English. Upon evaluation, the team has found that DeepSeek LLM 67B is highly effective. DeepSeek LLM 67B Base outperforms Llama2 70B Base in tasks such as math, reasoning, coding, and Chinese understanding.
DeepSeek LLM 67B Chat has shown exceptional performance in math (GSM8K 0-shot: 84.1, MATH 0-shot: 32.6) and coding (HumanEval Pass@1: 73.78). Its score of 65 on the Hungarian National High School Exam demonstrates strong generalization beyond the benchmarks it was tuned on. In open-ended evaluations, DeepSeek LLM 67B Chat also outperforms GPT-3.5.
The team’s primary contributions can be summarized as follows:
1. Scaling Hyperparameters – Empirical scaling rules have been developed to systematically find near-optimal values for training hyperparameters such as the batch size and learning rate, given a compute budget (see the sketch after this list).
2. Model Scale Representation – Non-embedding FLOPs per token has been introduced as a more accurate representation of model scale than parameter counts, improving the accuracy of scaling-up approaches for large-scale models (also illustrated in the sketch below).
3. Impact of Data Quality – The quality of pre-training data heavily influences the optimal scaling strategy: with higher-quality data, a larger share of the compute budget should be allocated to scaling the model rather than the data, highlighting the importance of data quality in model development.
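The first two ideas can be illustrated with a minimal, self-contained Python sketch. It is not DeepSeek's released code: the FLOPs-per-token expression uses a commonly cited transformer approximation, and the power-law coefficients, function names, and the example 7B-like configuration are illustrative assumptions of this sketch rather than the fitted constants reported in the paper.

```python
# Minimal sketch of (1) non-embedding FLOPs/token as a model-scale metric and
# (2) picking hyperparameters from the compute budget via power laws.
# All constants below are illustrative assumptions, not the paper's fitted values.

def non_embedding_flops_per_token(n_layers: int, d_model: int, seq_len: int) -> float:
    """Approximate non-embedding compute per token for a decoder-only Transformer.

    Assumes ~72 * n_layers * d_model^2 FLOPs for the dense matrix multiplications
    plus ~12 * n_layers * d_model * seq_len FLOPs for attention over the context.
    """
    return 72 * n_layers * d_model**2 + 12 * n_layers * d_model * seq_len


def near_optimal_hyperparams(compute_budget_flops: float) -> tuple[float, float]:
    """Map a total compute budget C to (learning rate, batch size) via power laws.

    The fits have the generic form lr ~ a * C**(-alpha) and batch ~ b * C**beta;
    the coefficients here are hypothetical placeholders for illustration only.
    """
    a, alpha = 0.3, 0.125   # hypothetical learning-rate fit
    b, beta = 0.3, 0.33     # hypothetical batch-size fit
    learning_rate = a * compute_budget_flops ** (-alpha)
    batch_size = b * compute_budget_flops ** beta
    return learning_rate, batch_size


if __name__ == "__main__":
    # Illustrative 7B-like configuration: 30 layers, hidden size 4096, 4K context.
    m = non_embedding_flops_per_token(n_layers=30, d_model=4096, seq_len=4096)
    # Total training compute C is roughly FLOPs/token times training tokens (2T here).
    c = m * 2e12
    lr, bs = near_optimal_hyperparams(c)
    print(f"non-embedding FLOPs/token ~ {m:.3e}")
    print(f"total compute C ~ {c:.3e} FLOPs")
    print(f"suggested lr ~ {lr:.2e}, batch size ~ {bs:.3e} (units per the chosen fit)")
```

The appeal of FLOPs per token over a raw parameter count is that it also captures the attention cost, which grows with context length; the actual fitted coefficients and units are given in the paper.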
In conclusion, this study sheds light on the complexities of scaling laws in the context of Large Language Models. It addresses challenges raised by previous research findings, further advancing the development of open-source language models.
To learn more, please check out the Paper. All credit for this research goes to the researchers of this project.
Tanya Malhotra is a final year undergraduate student at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a Data Science enthusiast with strong analytical and critical thinking skills, and a keen interest in acquiring new skills, leading groups, and managing work in an organized manner.