Meet DeepSeek LLMs: A Series of Open-Source AI Models Trained from Scratch on a Vast Dataset of 2 Trillion Tokens in both English and Chinese

January 12, 2024 | AI Technology



With the rapid advancements in Artificial Intelligence, Large Language Models (LLMs) are constantly improving through ongoing research. These models undergo self-supervised pre-training on large datasets, enabling them to excel at tasks such as question answering, content generation, text summarization, and code completion.
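Concretely, "self-supervised pre-training" here means next-token prediction: the raw text supplies its own labels, and the model learns to predict each token from the tokens before it. A minimal PyTorch sketch of that objective (the function name and tensor shapes are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Causal language-modeling loss: predict token t+1 from tokens <= t.

    logits: (batch, seq_len, vocab_size) model outputs
    tokens: (batch, seq_len) input token ids
    """
    # Shift so each position's logits are scored against the *next* token.
    shifted_logits = logits[:, :-1, :].contiguous()
    targets = tokens[:, 1:].contiguous()
    return F.cross_entropy(
        shifted_logits.view(-1, shifted_logits.size(-1)),
        targets.view(-1),
    )
```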

The development of open-source Large Language Models is progressing quickly, but existing studies on scaling laws have reached conflicting conclusions, leaving it unclear how to scale LLMs efficiently. To address this, researchers at DeepSeek AI have released a detailed study of scaling laws that characterizes the scaling dynamics of large models, particularly in the widely used open-source 7B and 67B configurations.
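For readers unfamiliar with how such studies proceed: a scaling law is typically obtained by training a family of small models, measuring the loss reached at each compute budget, and fitting a power law that extrapolates to larger budgets. A minimal sketch of that fitting step (the data points below are invented purely for illustration and are not results from the paper):

```python
import numpy as np

# Hypothetical (training FLOPs, validation loss) pairs from small pilot runs.
# These numbers are made up for illustration only.
compute = np.array([1e17, 1e18, 1e19, 1e20])
loss = np.array([3.10, 2.72, 2.41, 2.16])

# A power law L(C) = a * C**b is a straight line in log-log space:
# log L = log a + b * log C, so an ordinary least-squares fit suffices.
b, log_a = np.polyfit(np.log(compute), np.log(loss), deg=1)
a = np.exp(log_a)

# Extrapolate to a 10x larger compute budget.
predicted = a * 1e21 ** b
print(f"L(C) ≈ {a:.2f} * C^{b:.3f}; predicted loss at 1e21 FLOPs ≈ {predicted:.2f}")
```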

The team has introduced the DeepSeek LLM project, an initiative focused on advancing open-source language models using these established scaling rules. To support pre-training, the team curated a dataset of 2 trillion tokens, which continues to grow to meet evolving needs. The DeepSeek LLM Base models were then aligned with Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), producing the DeepSeek Chat models.
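Of the two alignment stages, DPO is the less widely known: it optimizes the policy directly on human preference pairs, with no separate reward model, using the objective from Rafailov et al. (2023). A minimal sketch of that loss in PyTorch (a generic formulation of the published objective, not DeepSeek's actual training code):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss over a batch of preference pairs.

    Each argument holds the summed log-probability that the trainable policy
    (or the frozen reference model) assigns to the chosen/rejected response.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy to widen the margin between chosen and rejected
    # responses, relative to the reference model.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```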

DeepSeek LLM 67B is a 67-billion-parameter model trained from scratch on this two-trillion-token English and Chinese dataset. On evaluation, the team found it highly effective: DeepSeek LLM 67B Base outperforms Llama 2 70B Base on math, reasoning, coding, and Chinese-understanding tasks.
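To put those numbers in perspective, the widely used C ≈ 6ND rule of thumb (N parameters, D training tokens) gives a rough sense of the training compute involved; this is a back-of-the-envelope estimate, not a figure reported by the team:

```python
# Rough training-compute estimate via the common C ≈ 6 * N * D heuristic.
# This is an approximation, not a number from the DeepSeek paper.
N = 67e9   # 67 billion parameters
D = 2e12   # 2 trillion training tokens
C = 6 * N * D
print(f"Approximate training compute: {C:.2e} FLOPs")  # ~8.04e+23 FLOPs
```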

DeepSeek LLM 67B Chat shows exceptional performance in math (GSM8K 0-shot: 84.1, MATH 0-shot: 32.6) and coding (HumanEval Pass@1: 73.78). Its score of 65 on the Hungarian National High School Exam further demonstrates strong generalization across tasks and contexts. In open-ended evaluations, DeepSeek LLM 67B Chat also outperforms GPT-3.5.
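The HumanEval Pass@1 figure is worth unpacking: pass@k is the probability that at least one of k sampled completions passes a problem's unit tests, estimated without bias by the formula from the Codex paper (Chen et al., 2021). A short sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al., 2021.

    n: total completions sampled for a problem
    c: completions that pass the unit tests
    k: sampling budget allowed by the metric
    """
    if n - c < k:
        return 1.0  # too few failures remain for all k draws to fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# HumanEval has 164 problems; a Pass@1 of 73.78 corresponds to solving
# about 121 of them on the first attempt (121 / 164 ≈ 0.7378).
print(f"{121 / 164:.4f}")
```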

The team’s primary contributions can be summarized as follows:

1. Scaling Hyperparameters – Empirical scaling rules have been developed to systematically find near-optimal hyperparameters, such as batch size and learning rate, for a given training budget (a sketch of what such rules look like follows this list).

2. Model Scale Representation – Non-embedding FLOPs per token, rather than the parameter count, is introduced as a more accurate measure of model scale, improving the accuracy of scaling-up predictions for large models.

3. Impact of Data Quality – The quality of the pre-training data strongly influences the optimal scaling strategy: the higher the data quality, the larger the share of the compute budget that should go to scaling the model rather than the data. This highlights the importance of data quality in model development.
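The first contribution amounts to fitting power laws that map a compute budget directly to near-optimal training hyperparameters. The sketch below shows the general shape of such rules; the functional form (learning rate shrinking and batch size growing as powers of compute) follows the paper's description, but the coefficients are placeholders invented for illustration, not the paper's fitted values:

```python
def scaled_hyperparams(compute_flops: float) -> tuple[float, float]:
    """Power-law hyperparameter rules of the form lr ∝ C^-a, batch ∝ C^b.

    The exponents and prefactors below are ILLUSTRATIVE placeholders,
    not the values fitted in the DeepSeek LLM paper.
    """
    lr = 0.3 * compute_flops ** -0.125    # learning rate shrinks with scale
    batch = 0.3 * compute_flops ** 0.33   # batch size (tokens) grows with scale
    return lr, batch

for c in (1e19, 1e20, 1e21):
    lr, batch = scaled_hyperparams(c)
    print(f"C={c:.0e} FLOPs -> lr ≈ {lr:.2e}, batch ≈ {batch:,.0f} tokens")
```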

In conclusion, this study sheds light on the complexities of scaling laws for Large Language Models. It resolves ambiguities left by earlier, inconclusive research and further advances the development of open-source language models.

To learn more, please check out the Paper. All credit for this research goes to the researchers of this project.

Tanya Malhotra is a final year undergraduate student at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a Data Science enthusiast with strong analytical and critical thinking skills, and a keen interest in acquiring new skills, leading groups, and managing work in an organized manner.
