In a recent study, a team of researchers from MIT has examined the linear representation hypothesis, which suggests that language models perform computations by manipulating one-dimensional representations of features, i.e., directions in their activation space. According to this hypothesis, these linear features can be used to understand the inner workings of language models. The study investigates the idea that some language model representations could instead be multi-dimensional by nature.
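For intuition, here is a minimal sketch of what the hypothesis implies, using plain NumPy with a random direction standing in for a learned feature: a concept is read off a hidden state by projecting onto a single direction, and can be steered by moving the state along that same direction.

```python
import numpy as np

# Minimal illustration of the linear representation hypothesis: a feature
# is a single direction in activation space. Reading the feature is a dot
# product; "computation" moves the hidden state along the direction.
# The direction and hidden state are random stand-ins, purely illustrative.
d_model = 64
rng = np.random.default_rng(0)
feature_dir = rng.standard_normal(d_model)
feature_dir /= np.linalg.norm(feature_dir)

h = rng.standard_normal(d_model)                 # a hidden state
strength = feature_dir @ h                       # how strongly the feature is "on"
h_steered = h + (1.0 - strength) * feature_dir   # set the feature value to 1.0
print(strength, feature_dir @ h_steered)         # second value is 1.0
```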
To tackle this question, the team has precisely defined irreducible multi-dimensional features. What distinguishes these features is that they cannot be decomposed into independent or non-co-occurring lower-dimensional features: a truly multi-dimensional feature cannot be reduced to one-dimensional components without losing useful information.
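As a toy illustration of this definition, the sketch below measures the statistical dependence between the two coordinates of a 2-D feature: near-independent coordinates suggest the feature decomposes into 1-D parts, while strong dependence (as with points on a circle) suggests genuine irreducibility. The histogram-based mutual information estimate is an assumed stand-in, not the paper's exact separability test.

```python
import numpy as np

def dependence_score(f, bins=20):
    """Histogram-based mutual information (in nats) between the two
    coordinates of a 2-D feature f of shape (n_samples, 2).
    Near 0 => coordinates are independent, so the feature is reducible;
    large => coordinates co-vary, hinting at an irreducible 2-D feature."""
    joint, _, _ = np.histogram2d(f[:, 0], f[:, 1], bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of coordinate 0
    py = pxy.sum(axis=0, keepdims=True)    # marginal of coordinate 1
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 10_000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # irreducible
noise = rng.standard_normal((10_000, 2))                   # reducible
print(dependence_score(circle), dependence_score(noise))   # large vs ~0
```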
Using this theoretical framework, the team has created a scalable technique to identify multi-dimensional features in language models. The technique relies on sparse autoencoders, neural networks built to learn efficient, compressed representations of data, which are used to automatically recognise multi-dimensional features in models such as GPT-2 and Mistral 7B.
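As a rough sketch of what such an autoencoder looks like, the PyTorch snippet below uses random stand-in activations; the layer sizes and the L1 sparsity coefficient are illustrative assumptions, not the settings from the paper.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: an overcomplete dictionary with a ReLU
    encoder and an L1 penalty that encourages sparse codes."""
    def __init__(self, d_model=768, d_hidden=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # sparse feature activations
        return self.decoder(z), z

def sae_loss(x, x_hat, z, l1_coeff=1e-3):
    # reconstruction error plus sparsity penalty on the codes
    return ((x - x_hat) ** 2).mean() + l1_coeff * z.abs().mean()

sae = SparseAutoencoder()
acts = torch.randn(64, 768)            # stand-in residual-stream activations
x_hat, z = sae(acts)
sae_loss(acts, x_hat, z).backward()    # one illustrative training step
```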
The team has identified several multi-dimensional features that are remarkably interpretable. For example, circular representations of the days of the week and the months of the year have been found. These circular features are especially interesting because they naturally express cyclic patterns, which makes them well suited to calendar tasks involving modular arithmetic, such as figuring out the day of the week for a given date.
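To see why a circle supports modular arithmetic, the hypothetical sketch below embeds the seven days on a unit circle and implements "add k days" as a rotation; this mirrors the kind of structure the authors report rather than their learned features.

```python
import numpy as np

# Days embedded at angles 2*pi*i/7 on the unit circle; adding k days is a
# rotation by 2*pi*k/7, and decoding takes the nearest point on the circle.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
angles = 2 * np.pi * np.arange(7) / 7
embed = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (7, 2)

def add_days(day_idx, k):
    theta = 2 * np.pi * k / 7
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    target = rot @ embed[day_idx]
    return int(np.argmax(embed @ target))   # nearest day on the circle

print(days[add_days(days.index("Fri"), 4)])  # Fri + 4 days -> "Tue"
```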
To further validate the results, intervention experiments have been performed on the Mistral 7B and Llama 3 8B models. For tasks involving days of the week and months of the year, these experiments have shown that the circular features found are crucial to the models' computations: intervening on these features changes the models' performance on the relevant tasks, indicating their causal importance.
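The snippet below sketches what such an intervention might look like in code: the hidden state's component in an (assumed known) 2-D circular subspace is replaced with the point for a different day, leaving the orthogonal complement untouched. The basis `U`, the per-day circle points, and all shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64
U, _ = np.linalg.qr(rng.standard_normal((d_model, 2)))  # orthonormal 2-D basis
angles = 2 * np.pi * np.arange(7) / 7
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # day positions

def patch_day(h, target_day):
    """Swap h's component in the circular subspace for target_day's point."""
    coords = U.T @ h                 # current position within the subspace
    h_rest = h - U @ coords          # everything outside the subspace
    return h_rest + U @ circle[target_day]

h = rng.standard_normal(d_model)        # stand-in hidden state
h_patched = patch_day(h, target_day=2)  # steer toward index 2 ("Wednesday")
```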
The team has summarized their primary contributions as follows:
– Multi-dimensional language model features have been defined, generalizing the existing one-dimensional definition, and an updated superposition hypothesis has been proposed to account for these multi-dimensional features.
– The team has analysed how employing multi-dimensional features reduces the representation space of the model, and a test has been created to identify irreducible features that is both empirically practical and theoretically grounded.
– An automated method has been introduced to discover multi-dimensional features using sparse autoencoders. With this method, multi-dimensional representations in GPT-2 and Mistral 7B, such as circular representations of the days of the week and months of the year, can be found; this is the first time emergent circular representations have been discovered in a large language model (a sketch of the pipeline follows this list).
– Two tasks involving modular addition over the days of the week and the months of the year have been proposed, on the hypothesis that the models would use these circular representations to solve them. Intervention experiments on Mistral 7B and Llama 3 8B have demonstrated that the models do employ the circular representations.
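The following sketch illustrates the discovery pipeline referenced in the third contribution: cluster SAE decoder directions by cosine similarity, then project the activations associated with one cluster onto their top principal components and look for low-dimensional structure such as a circle. The clustering method, the sizes, and the random stand-in arrays are assumptions, not the authors' exact recipe.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
decoder_dirs = rng.standard_normal((2048, 768))   # stand-in SAE decoder rows
decoder_dirs /= np.linalg.norm(decoder_dirs, axis=1, keepdims=True)

# Cluster dictionary directions by (non-negative) cosine similarity.
affinity = np.clip(decoder_dirs @ decoder_dirs.T, 0, None)
labels = SpectralClustering(n_clusters=50, affinity="precomputed",
                            random_state=0).fit_predict(affinity)

# For one cluster, project the activations that fire on it to 2-D and
# inspect the scatter; day-of-week tokens tracing a circle would indicate
# an irreducible 2-D feature.
cluster_acts = rng.standard_normal((5000, 768))   # stand-in activations
proj = PCA(n_components=2).fit_transform(cluster_acts)
```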
In conclusion, this research shows that certain language model representations are multi-dimensional by nature, which challenges a strict reading of the linear representation hypothesis. By creating a technique to identify these features and verifying their significance through intervention experiments, this study contributes to a better understanding of the intricate internal structures that allow language models to accomplish a wide range of tasks.
Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a data science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.