News PouroverAI
Outperforming larger language models with less training data and smaller model sizes – Google Research Blog

September 25, 2023
in AI Technology



Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have revolutionized the way we approach data-efficient learning. They can solve new tasks with zero-shot or few-shot prompting. However, deploying LLMs in real-world applications is difficult because of their sheer size. For example, serving a single 175-billion-parameter LLM requires at least 350GB of GPU memory and specialized infrastructure. Moreover, state-of-the-art LLMs consist of over 500 billion parameters, making them inaccessible to many research teams and to applications that require low-latency performance.
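As a quick sanity check of the 350GB figure, which assumes half-precision (2 bytes per parameter) storage of the weights alone:

```python
# Back-of-the-envelope GPU memory needed just to hold model weights,
# assuming half-precision (2 bytes per parameter).
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory in GB required to store the raw weights."""
    return num_params * bytes_per_param / 1e9

# A 175B-parameter model at fp16 needs ~350 GB for weights alone,
# before activations, KV caches, or any serving overhead.
print(weight_memory_gb(175e9))  # 350.0
```

The real footprint is larger still, since serving also needs memory for activations and request batching.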

To address these deployment challenges, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using either fine-tuning or distillation. Fine-tuning updates a pre-trained smaller model (e.g., BERT or T5) using manually annotated data. Distillation, on the other hand, trains smaller models using labels generated by a larger LLM. Unfortunately, fine-tuning requires expensive human-generated labels to reach performance comparable to LLMs, while distillation requires large amounts of unlabeled data, which can be difficult to collect.
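As a rough illustration (not taken from the paper), the first step of standard distillation is to pseudo-label unlabeled inputs with the teacher LLM; the `teacher` callable below is a hypothetical stand-in for a few-shot prompted LLM:

```python
from typing import Callable, List, Tuple

def distill_labels(
    teacher: Callable[[str], str],  # hypothetical LLM labeler (e.g. few-shot prompted)
    unlabeled_texts: List[str],
) -> List[Tuple[str, str]]:
    """Standard distillation, step 1: pseudo-label unlabeled data with the teacher.

    The resulting (input, pseudo-label) pairs are then used to fine-tune a
    smaller student model exactly as if they were human annotations.
    """
    return [(text, teacher(text)) for text in unlabeled_texts]

# Toy teacher that "classifies" by keyword, standing in for a real LLM call.
toy_teacher = lambda text: "positive" if "good" in text else "negative"
pairs = distill_labels(toy_teacher, ["good movie", "bad plot"])
print(pairs)  # [('good movie', 'positive'), ('bad plot', 'negative')]
```

This is why distillation needs a large unlabeled corpus: every training pair the student sees has to come from somewhere.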

In our paper, “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes,” presented at ACL 2023, we propose a new mechanism called distilling step-by-step. This mechanism allows us to train smaller task-specific models with significantly less training data than standard fine-tuning or distillation approaches require, while still outperforming few-shot prompted LLMs.

The key idea behind distilling step-by-step is to extract informative natural language rationales (intermediate reasoning steps) from LLMs. These rationales explain the connections between input questions and their corresponding outputs. For example, when asked, “Jesse’s room is 11 feet long and 15 feet wide. If she already has 16 square feet of carpet, how much more carpet does she need to cover the whole floor?”, an LLM can provide intermediate rationales such as “Area = length * width. Jesse’s room has 11 * 15 square feet.” These rationales contain task knowledge that smaller models would normally need large amounts of data to learn.
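A minimal sketch of what this extraction looks like in practice, under the assumption that each few-shot demonstration pairs a question with a worked rationale and an answer, so the LLM's completion for a new question can be split into a (rationale, label) pair (the exact prompt format and parsing are simplified here):

```python
# One worked demonstration in the few-shot prompt; the LLM imitates its format.
DEMO = (
    "Q: Jesse's room is 11 feet long and 15 feet wide. If she already has "
    "16 square feet of carpet, how much more carpet does she need to cover "
    "the whole floor?\n"
    "Rationale: Area = length * width. Jesse's room has 11 * 15 = 165 square "
    "feet. She needs 165 - 16 = 149 more square feet.\n"
    "Answer: 149\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked demonstration so the LLM continues in its format."""
    return DEMO + f"Q: {question}\nRationale:"

def parse_completion(completion: str) -> tuple:
    """Split an LLM completion of the form '<rationale>\\nAnswer: <label>'."""
    rationale, _, answer = completion.partition("\nAnswer:")
    return rationale.strip(), answer.strip()

rationale, label = parse_completion(
    " Area = 3 * 4 = 12. She needs 12 - 2 = 10.\nAnswer: 10"
)
print(label)  # 10
```

The parsed rationales, not just the final answers, become extra training signal for the small model.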

We use these extracted rationales as additional supervision when training small models, alongside the standard task labels. The distilling step-by-step mechanism consists of two stages. In the first stage, we use few-shot chain-of-thought (CoT) prompting to extract rationales from LLMs. In the second stage, we incorporate these rationales into the training process by framing it as a multi-task learning problem. We prepend task prefixes to the input examples to distinguish the label prediction task from the rationale generation task.
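The second stage can be sketched as follows; the `[label]`/`[rationale]` prefix strings and the loss weight `lam` are illustrative stand-ins, not the paper's exact choices:

```python
from typing import Dict, List

def make_multitask_examples(
    text: str, label: str, rationale: str
) -> List[Dict[str, str]]:
    """Turn one annotated example into the two training tasks.

    The task prefix tells the model which output to produce, so a single set
    of weights learns both label prediction and rationale generation.
    """
    return [
        {"input": "[label] " + text, "target": label},
        {"input": "[rationale] " + text, "target": rationale},
    ]

def multitask_loss(label_loss: float, rationale_loss: float, lam: float = 1.0) -> float:
    """Combined objective: label loss plus weighted rationale-generation loss."""
    return label_loss + lam * rationale_loss

examples = make_multitask_examples("2 + 2 = ?", "4", "Add 2 and 2 to get 4.")
print(len(examples))  # 2
print(multitask_loss(0.5, 0.25))  # 0.75
```

At inference time only the label prediction task is used, so generating rationales adds no serving cost.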

In our experiments, we use a 540B PaLM model as the LLM and T5 models as the task-specific downstream models. We conduct experiments on four benchmark datasets across three NLP tasks. Our method achieves better performance than standard fine-tuning while using significantly less training data. For example, on the e-SNLI dataset, we outperform standard fine-tuning using only 12.5% of the full dataset. We also achieve better performance with much smaller model sizes than few-shot CoT prompted LLMs. On the ANLI dataset, we surpass the performance of the 540B PaLM model with a 770M T5 model that is over 700x smaller.

In conclusion, distilling step-by-step provides a new paradigm that reduces both the deployed model size and the amount of training data required. By extracting rationales from LLMs and incorporating them into the training process, we can train smaller task-specific models that outperform LLMs using less data. Our method has the potential to make large language models more accessible and practical for real-world applications.



Copyright © 2023 PouroverAI News.