In-Context Learning Capabilities of Multi-Layer Perceptrons (MLPs): A Comparative Study with Transformers

May 31, 2024 · AI Technology


Recent years have witnessed significant advancements in neural language models, especially Large Language Models (LLMs) powered by the Transformer architecture and increased scale. LLMs excel in various tasks such as generating grammatically correct text, answering questions, summarizing content, producing creative outputs, and solving intricate puzzles. A notable feature is in-context learning (ICL), where the model can accurately respond to new task examples during inference without updating weights. This ability is often associated with Transformers and their attention-based mechanisms.

Studies have demonstrated ICL with Transformers in linear regression tasks, showcasing the model’s ability to generalize to new input-label pairs in-context. Transformers are thought to achieve this by implicitly implementing gradient descent or by emulating least-squares regression. They strike a balance between in-weight learning (IWL) and ICL, with diverse datasets enhancing their ICL capabilities. While most research focuses on Transformers, some studies explore recurrent neural networks (RNNs) and LSTMs, albeit with mixed results. Recent findings also show various causal sequence models and state space models achieving ICL. However, the potential of Multi-Layer Perceptrons (MLPs) for ICL remains underexplored despite their resurgence in complex tasks, spurred by the introduction of the MLP-Mixer model.
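
To make the least-squares view concrete, here is a minimal, hypothetical sketch (not code from the paper): given context pairs generated by a linear rule, the weights can be recovered in closed form and used to answer the query, which is the computation a Transformer is conjectured to emulate in-context. Dimensions and noise level are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of "emulating least-squares regression in-context":
# the context pairs (X, y) come from y = X @ beta + noise; a least-squares
# fit recovers an estimate of beta, which then answers the query x_q.
rng = np.random.default_rng(0)
beta = rng.normal(size=4)                     # hidden task weights (illustrative dimension)
X = rng.normal(size=(8, 4))                   # 8 context inputs
y = X @ beta + 0.1 * rng.normal(size=8)       # noisy context labels
x_q = rng.normal(size=4)                      # query input

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # infer weights from the context alone
y_q_pred = x_q @ beta_hat                          # prediction for the query
```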


A study by researchers from Harvard demonstrates that MLPs can effectively learn in-context: MLP and MLP-Mixer models perform competitively with Transformers on ICL tasks within the same computational budget. Notably, MLPs outperform Transformers on relational-reasoning ICL tasks, challenging the notion that ICL is exclusive to Transformers. This success motivates exploring architectures beyond attention-based ones and suggests that Transformers, constrained by self-attention and positional encodings, may be biased toward certain task structures relative to MLPs.

The study delves into MLPs’ behavior in ICL through two tasks: in-context regression and in-context classification. For ICL regression, the input consists of a sequence of linearly related value pairs (x_i, y_i) with varying weights β, added noise, and a query x_q. The model predicts the corresponding y_q by inferring β from the context exemplars. In ICL classification, the input comprises a sequence of exemplars (x_i, y_i) followed by a query x_q sampled from a Gaussian mixture model. The model predicts the correct label for x_q by referencing the context exemplars, taking into account data diversity and burstiness (the number of repeats per cluster in the context).
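
A minimal sketch of how such an in-context regression sequence could be generated is shown below; this is an illustrative construction under assumed dimensions and noise levels, not the authors’ exact data pipeline.

```python
import numpy as np

def make_icl_regression_sequence(n_context=8, dim=4, noise_std=0.1, rng=None):
    """Build one in-context regression example: context pairs (x_i, y_i)
    with y_i = x_i @ beta + noise, plus a query x_q whose target y_q
    must be inferred from the context (beta is resampled per sequence)."""
    rng = rng or np.random.default_rng()
    beta = rng.normal(size=dim)                   # task-specific weights
    x_ctx = rng.normal(size=(n_context, dim))     # context inputs
    y_ctx = x_ctx @ beta + noise_std * rng.normal(size=n_context)
    x_q = rng.normal(size=dim)                    # query input
    y_q = float(x_q @ beta)                       # target for the query
    return x_ctx, y_ctx, x_q, y_q

x_ctx, y_ctx, x_q, y_q = make_icl_regression_sequence()
```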

Comparisons were made between MLPs and Transformers on in-context regression and classification tasks. Both architectures, including MLP-Mixers, achieved near-optimal mean squared error (MSE) with sufficient computing resources, although Transformers slightly outperformed MLPs with smaller budgets. For longer context lengths, vanilla MLPs performed less effectively, while MLP-Mixers maintained optimal MSE. As data diversity increased, all models transitioned from IWL to ICL, with Transformers undergoing the transition more rapidly. In in-context classification, MLPs performed on par with Transformers, maintaining a relatively consistent loss across context lengths and transitioning from IWL to ICL with increased data diversity.
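
One simple way a vanilla MLP can be applied to such a sequence, consistent with the comparison above, is to flatten the context pairs and the query into a single input vector. The sketch below is a hypothetical illustration (layer widths and the flattening scheme are assumptions, not the paper’s exact architecture); it also shows why longer contexts directly inflate a vanilla MLP’s input dimension.

```python
import torch
import torch.nn as nn

# Illustrative only: feed an ICL regression sequence to a vanilla MLP by
# flattening the (x_i, y_i) context pairs and the query x_q into one vector.
# Layer widths and the flattening scheme are assumptions for this sketch.
n_context, dim = 8, 4
input_size = n_context * (dim + 1) + dim        # grows linearly with context length

mlp = nn.Sequential(
    nn.Linear(input_size, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 1),                          # scalar prediction for y_q
)

x_ctx = torch.randn(n_context, dim)
y_ctx = torch.randn(n_context)
x_q = torch.randn(dim)
context = torch.cat([x_ctx, y_ctx.unsqueeze(1)], dim=1).flatten()
pred = mlp(torch.cat([context, x_q]).unsqueeze(0))   # shape (1, 1)
```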


Check out the Paper. All credit for this research goes to the researchers of this project.

Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur. Asjad is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.

