News PouroverAI

Researchers from NVIDIA Introduce Retro 48B: The Largest LLM Pretrained with Retrieval before Instruction Tuning

October 18, 2023
in AI Technology
Reading Time: 4 mins read


Researchers from NVIDIA and the University of Illinois at Urbana-Champaign introduce Retro 48B, a language model significantly larger than previous retrieval-augmented models such as Retro (7.5B parameters). Retro 48B is pretrained with retrieval on an extensive corpus, yielding improved perplexity. In InstructRetro, the instruction-tuned version of the model, the retrieval encoder can be ablated with little loss in quality, suggesting that continued retrieval-augmented pretraining by itself strengthens the decoder's question-answering performance.

Retrieval-augmented language models are well established for open-domain question answering, benefiting models both during pretraining and at inference. Retrieval reduces model perplexity, improves factuality, and enhances task performance after fine-tuning. However, existing retrieval-augmented models are constrained in size compared to decoder-only models, limiting their zero-shot generalization potential after instruction tuning. Instruction tuning, vital for natural language understanding, has been supported by high-quality datasets such as FLAN, OpenAssistant, and Dolly, enabling strong performance in chat and question-answering tasks.
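The basic retrieve-then-read loop described above can be sketched in a few lines. This is a deliberately minimal illustration with a hypothetical toy corpus and bag-of-words cosine scoring; real systems like Retro use learned dense retrievers over billions of chunks, not word overlap.

```python
import math
from collections import Counter

# Hypothetical toy corpus standing in for a large retrieval database.
corpus = [
    "Retro is a retrieval-augmented language model architecture.",
    "Instruction tuning adapts a pretrained model to follow prompts.",
    "Perplexity measures how well a model predicts held-out text.",
]

def bow(text):
    # Lowercased bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank corpus passages by similarity to the query.
    q = bow(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, bow(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question):
    # Prepend retrieved evidence so the decoder can condition on it.
    evidence = "\n".join(retrieve(question))
    return f"Context:\n{evidence}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What does perplexity measure?"))
```

The key design point is that retrieved text enters the model as extra conditioning context; Retro goes further by cross-attending to retrieved chunks inside the network rather than only in the prompt.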

Pretraining language models with retrieval, as in Retro, has shown promise in reducing perplexity and enhancing factual accuracy. However, existing retrieval-augmented models are small relative to modern decoder-only LLMs in both parameter count and training data, which limits their performance after instruction tuning and on other tasks typical of large language models. This study introduces Retro 48B, the largest retrieval-augmented model to date, obtained by continuing to pretrain a 43B GPT model on additional tokens with retrieval. InstructRetro, the instruction-tuned result of this process, significantly improves zero-shot question answering compared to traditional GPT models. Its decoder achieves similar results even when the retrieval encoder is ablated, demonstrating that retrieval-augmented pretraining effectively teaches the decoder to incorporate context for question answering.
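Since perplexity is the headline pretraining metric here, it is worth recalling how it is computed: the exponential of the average negative log-likelihood per token. The probabilities below are toy numbers chosen only to show the formula.

```python
import math

def perplexity(token_probs):
    # Perplexity = exp(mean negative log-likelihood over tokens).
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Toy example: three tokens predicted with probabilities 1/2, 1/4, 1/8.
print(perplexity([0.5, 0.25, 0.125]))  # → 4.0
```

Lower perplexity means the model assigns higher probability to the observed text, which is why retrieval, by supplying relevant evidence, tends to reduce it.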

The study follows an extensive process: pretraining a GPT model with retrieval to create Retro 48B, instruction-tuning it to enhance its zero-shot question-answering abilities, and evaluating its performance across various tasks. The resulting 48B retrieval-augmented language model, InstructRetro, significantly outperforms the standard GPT model on zero-shot question-answering tasks after instruction tuning. This scaling-up approach demonstrates the potential of larger retrieval-augmented models in natural language understanding.
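The instruction-tuning step described above amounts to fine-tuning on serialized (instruction, response) pairs, with the loss usually computed only on the response portion. The format and field names below are illustrative, not the exact serialization scheme used for InstructRetro.

```python
def format_example(instruction, response, bos="<s>", eos="</s>"):
    # Serialize one instruction-tuning pair. Loss is normally masked
    # over the prompt and applied only to the response tokens.
    prompt = f"{bos}Instruction: {instruction}\nResponse: "
    full = prompt + response + eos
    # Offset where the loss region begins (characters here; in practice
    # this would be a token index).
    return {"text": full, "loss_start": len(prompt)}

ex = format_example(
    "Name the largest retrieval-augmented LLM described in the article.",
    "Retro 48B.",
)
print(ex["text"])
```

Datasets like FLAN, OpenAssistant, and Dolly supply pairs of this shape; the fine-tuning loop itself is ordinary next-token training restricted to the response span.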

Retro 48B, a language model pretrained with retrieval, surpasses the original GPT model in perplexity. After instruction tuning, as InstructRetro, it significantly improves zero-shot question answering, with an average improvement of 7% on short-form and 10% on long-form QA tasks over its GPT counterpart. Surprisingly, InstructRetro's decoder backbone alone delivers comparable results, indicating the effectiveness of retrieval-based pretraining for incorporating context in QA.

InstructRetro 48B, the largest retrieval-augmented language model, significantly enhances zero-shot accuracy on a wide range of open-ended QA tasks compared to its GPT counterpart, and pretraining with retrieval using the Retro augmentation method improves perplexity. The results suggest that continued pretraining with retrieval before instruction tuning offers a promising direction for enhancing GPT decoders in QA. Surprisingly, the decoder alone achieves comparable accuracy, showcasing the effectiveness of this pretraining for context incorporation. InstructRetro excels particularly in long-form QA tasks, highlighting retrieval-augmented pretraining's potential for challenging tasks.

Check out the Paper. All credit for this research goes to the researchers on this project.

Hello, My name is Adnan Hassan. I am a consulting intern at Marktechpost and soon to be a management trainee at American Express. I am currently pursuing a dual degree at the Indian Institute of Technology, Kharagpur. I am passionate about technology and want to create new products that make a difference.


Tags: 48B, Instruction, Introduce, Largest, LLM, NVIDIA, Pretrained, Researchers, Retrieval, Retro, Tuning
Copyright © 2023 PouroverAI News.