Can Large Language Models Learn New Tricks? This Machine Learning Research from Google Introduces ‘CALM’: A Novel Approach for Enhancing AI Capabilities Through Composition

January 9, 2024
in AI Technology


Large Language Models (LLMs), renowned for foundational capabilities like commonsense reasoning and coherent language generation, are increasingly fine-tuned for domain-specific tasks such as code generation and mathematical problem-solving. This trend has produced specialized models that excel in particular domains, like code generation or logical reasoning.

This raises the question of whether an anchor model can be combined with a domain-specific augmenting model to introduce novel capabilities, such as merging one model’s code understanding with another’s language generation for code-to-text tasks. The traditional approach is to further pre-train or fine-tune the anchor model on the data used to train the augmenting model, but this is often impractical due to computational costs. Working with distinct models instead allows established capabilities to be leveraged without issues like the catastrophic forgetting seen in traditional methods.

To tackle the training and data limitations outlined above, researchers at Google Research and Google DeepMind introduce and explore a pragmatic scenario for model composition: (i) access to one or more augmenting models alongside an anchor model, (ii) no ability to alter the weights of either model, and (iii) access to only a limited dataset representing the combined capabilities of the provided models, such as code generation integrated with intricate logical reasoning.

They propose a framework called Composition to Augment Language Models (CALM) to address this general model composition scenario. Rather than a superficial amalgamation of the augmenting and anchor LMs, CALM introduces a small set of trainable parameters over the intermediate-layer representations of both models. CALM aims to discover an optimal fusion of these models, so that together they handle new, complex tasks more effectively than either model operating alone, while each model retains its distinct capabilities.
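The idea of fusing two frozen models through a small set of new trainable parameters can be illustrated with a minimal cross-attention sketch. This is not the paper’s code: the dimensions, parameter names, and the single-layer residual fusion below are all illustrative assumptions about how anchor hidden states might attend over projected augmenting-model states.

```python
import numpy as np

# Illustrative sketch of CALM-style composition (assumed mechanics, not the
# paper's implementation): a frozen anchor model and a frozen augmenting model
# expose intermediate-layer hidden states; only the small projection and
# attention matrices below would be trained on the limited composition dataset.

rng = np.random.default_rng(0)

D_AUG, D_ANCHOR = 8, 16  # hidden sizes of the two models (assumed values)

# The only "new" trainable parameters: map augmenting states into the anchor's
# space, then cross-attend from anchor states over them.
W_proj = rng.normal(scale=0.1, size=(D_AUG, D_ANCHOR))
W_q = rng.normal(scale=0.1, size=(D_ANCHOR, D_ANCHOR))
W_k = rng.normal(scale=0.1, size=(D_ANCHOR, D_ANCHOR))
W_v = rng.normal(scale=0.1, size=(D_ANCHOR, D_ANCHOR))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compose_layer(h_anchor, h_aug):
    """Fuse one anchor layer with one augmenting layer via cross-attention.

    h_anchor: (T, D_ANCHOR) frozen anchor hidden states
    h_aug:    (S, D_AUG)    frozen augmenting hidden states
    Returns anchor states plus an attended summary of the augmenting states.
    """
    h_aug_proj = h_aug @ W_proj                  # (S, D_ANCHOR)
    q = h_anchor @ W_q                           # (T, D_ANCHOR)
    k = h_aug_proj @ W_k                         # (S, D_ANCHOR)
    v = h_aug_proj @ W_v                         # (S, D_ANCHOR)
    attn = softmax(q @ k.T / np.sqrt(D_ANCHOR))  # (T, S) attention weights
    return h_anchor + attn @ v                   # residual fusion

h_anchor = rng.normal(size=(5, D_ANCHOR))  # stand-in for frozen anchor states
h_aug = rng.normal(size=(7, D_AUG))        # stand-in for frozen augmenting states
fused = compose_layer(h_anchor, h_aug)
print(fused.shape)  # (5, 16): same shape as the anchor states
```

The key point the sketch captures is that both models’ weights stay untouched; only the few fusion matrices would receive gradients, which is what keeps the approach cheap relative to further pre-training the anchor model.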

They explore significant practical applications of CALM, focusing on language inclusivity and code generation. For language inclusivity, they take a model trained specifically on low-resource languages and combine it with the LLM, giving it access to the LLM’s advanced generation and reasoning abilities. This yields notably better performance on translation and arithmetic-reasoning tasks in low-resource languages.

Interestingly, the composed model not only surpasses both base models but also outperforms versions of the LLM that underwent further pre-training or LoRA fine-tuning tailored to low-resource languages. For code generation, they integrate the LLM with a model trained on diverse open-source code across multiple programming languages; by harnessing that model’s low-level code logic and generation prowess, the composition achieves superior performance on code explanation and code completion tasks compared with either base model.

Check out the Paper. All credit for this research goes to the researchers of this project.


Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive advancements in technology, and he is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.


Tags: Approach, CALM, Capabilities, Composition, Enhancing, Google, Introduces, Language, Large, Learn, Learning, Machine, Models, Research, Tricks
Copyright © 2023 PouroverAI News.