Recent years have seen significant advances in neural language models, especially Large Language Models (LLMs) built on the Transformer architecture and trained at ever greater scale. LLMs excel at tasks such as generating fluent, grammatical text, answering questions, summarizing content, producing creative output, and solving intricate puzzles. A notable capability is in-context learning (ICL), in which a model responds accurately to new examples of a task presented at inference time without updating its weights. This ability is commonly attributed to Transformers and their attention mechanism.
Studies have demonstrated ICL with Transformers on linear regression tasks, where the model generalizes to new input-label pairs presented in context, possibly by implicitly performing gradient descent or emulating least-squares regression. Transformers also strike a balance between in-weight learning (IWL) and ICL, with more diverse training data pushing them toward ICL. While most research focuses on Transformers, some studies explore recurrent neural networks (RNNs) and LSTMs, with mixed results, and recent work reports ICL in various causal sequence models and state space models. The potential of Multilayer Perceptrons (MLPs) for ICL, however, remains underexplored, despite their resurgence on complex tasks spurred by the introduction of the MLP-Mixer model.
A study by researchers from Harvard demonstrates that MLPs can learn in-context effectively: MLP and MLP-Mixer models perform competitively with Transformers on ICL tasks at the same compute budget, and MLPs even outperform Transformers on relational reasoning ICL tasks, challenging the notion that ICL is exclusive to attention. The result motivates exploring architectures beyond attention-based ones and suggests that Transformers, shaped by self-attention and positional encodings, may carry inductive biases toward certain task structures that MLPs do not.
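To make the comparison concrete, here is a minimal, hypothetical sketch (in PyTorch) of how a vanilla MLP and an MLP-Mixer-style block might ingest a context of (x, y) exemplars plus a query: the MLP flattens the whole context into one vector, while the Mixer alternately mixes information across positions and across features. The widths, depths, and omission of normalization are illustrative assumptions, not the paper's exact architectures.

```python
import torch
import torch.nn as nn

class FlattenedMLP(nn.Module):
    """Vanilla MLP for ICL: the whole context plus the query is flattened into a single input vector."""
    def __init__(self, n_pairs, d, hidden=256):
        super().__init__()
        in_dim = n_pairs * (d + 1) + d  # each (x_i, y_i) pair contributes d + 1 values; the query x_q adds d
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),       # predicted y_q for a regression-style task
        )

    def forward(self, context, query):
        # context: (batch, n_pairs, d + 1); query: (batch, d)
        flat = torch.cat([context.flatten(1), query], dim=-1)
        return self.net(flat).squeeze(-1)

class MixerBlock(nn.Module):
    """MLP-Mixer-style block: one MLP mixes across context positions, another across feature channels."""
    def __init__(self, n_tokens, d_model, hidden=256):
        super().__init__()
        self.token_mlp = nn.Sequential(nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.channel_mlp = nn.Sequential(nn.Linear(d_model, hidden), nn.GELU(), nn.Linear(hidden, d_model))

    def forward(self, x):
        # x: (batch, n_tokens, d_model); layer norms omitted for brevity
        x = x + self.token_mlp(x.transpose(1, 2)).transpose(1, 2)  # mix information across positions
        x = x + self.channel_mlp(x)                                # mix information within each position
        return x
```

One consequence of flattening is that the MLP's first-layer width is tied to the context length, whereas the Mixer reuses the same token-mixing weights for every channel; this is one plausible reason the two families behave differently as contexts grow.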
The study probes MLPs' ICL behavior through two tasks: in-context regression and in-context classification. In ICL regression, the input is a sequence of value pairs (xi, yi) related by a linear map with weights β that vary from sequence to sequence, plus added noise, followed by a query xq; the model must infer β from the context exemplars and predict the corresponding yq. In ICL classification, the input is a sequence of exemplars (xi, yi) drawn from a Gaussian mixture model, followed by a query xq; the model must predict the correct label for xq by referring back to the context exemplars, with data diversity and burstiness (the number of repeats per cluster in the context) controlled as experimental variables.
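As a concrete illustration of these two task formats, the sketch below generates toy examples of each. The dimensionality, noise scale, number of clusters, and burstiness values are arbitrary choices for illustration, not the settings used in the paper.

```python
import numpy as np

def regression_context(rng, n_pairs=8, d=4, noise=0.1):
    """One in-context regression sequence: (x_i, y_i) pairs sharing a weight vector beta, plus a query."""
    beta = rng.normal(size=d)                  # task-specific weights, resampled for every sequence
    xs = rng.normal(size=(n_pairs, d))
    ys = xs @ beta + noise * rng.normal(size=n_pairs)
    x_q = rng.normal(size=d)
    y_q = x_q @ beta                           # the value the model must infer from the context
    return xs, ys, x_q, y_q

def classification_context(rng, n_pairs=8, d=4, n_clusters=16, burstiness=2):
    """One in-context classification sequence: bursty exemplars from a Gaussian mixture, plus a query."""
    centers = rng.normal(size=(n_clusters, d))
    # burstiness = number of repeats per sampled cluster within the context
    clusters = np.repeat(rng.choice(n_clusters, size=n_pairs // burstiness, replace=False), burstiness)
    xs = centers[clusters] + 0.1 * rng.normal(size=(n_pairs, d))
    query_cluster = rng.choice(clusters)       # the query is drawn from one of the in-context clusters
    x_q = centers[query_cluster] + 0.1 * rng.normal(size=d)
    return xs, clusters, x_q, query_cluster

rng = np.random.default_rng(0)
xs, ys, x_q, y_q = regression_context(rng)
```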
The authors compare MLPs and Transformers on the in-context regression and classification tasks. With sufficient compute, all architectures, including MLP-Mixers, achieve near-optimal mean squared error (MSE), although Transformers hold a slight edge at smaller compute budgets. At longer context lengths, vanilla MLPs degrade, while MLP-Mixers maintain near-optimal MSE. As data diversity increases, all models transition from IWL to ICL, with Transformers making the transition more quickly. On in-context classification, MLPs perform on par with Transformers, keep a relatively flat loss across context lengths, and likewise shift from IWL to ICL as data diversity grows.
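"Near-optimal MSE" here means error close to that of the best estimator one can compute from the context alone; for noisy linear data with a Gaussian prior on β, that reference is the ridge (posterior-mean) predictor fit to the context exemplars. The sketch below, reusing the regression_context helper above, shows such a baseline; the noise and prior variances are assumed values, and the paper's exact reference estimator may differ.

```python
def ridge_predict(xs, ys, x_q, noise_var=0.01, prior_var=1.0):
    """Ridge prediction from the context exemplars (Bayes-optimal under a Gaussian prior on beta)."""
    d = xs.shape[1]
    lam = noise_var / prior_var                        # regularization implied by the noise/prior ratio
    beta_hat = np.linalg.solve(xs.T @ xs + lam * np.eye(d), xs.T @ ys)
    return x_q @ beta_hat

# A trained model's MSE over many such sequences can be compared against this predictor's MSE.
print((ridge_predict(xs, ys, x_q) - y_q) ** 2)
```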
Check out the Paper. All credit for this research goes to the researchers of this project.
Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur, and is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.