Neural networks have been driving advances in artificial intelligence, powering the large language models now used in fields such as finance, human resources, and healthcare. Yet these networks remain a mystery to the engineers and scientists trying to grasp their inner workings. A team of data and computer scientists at the University of California San Diego has now developed something like an X-ray for neural networks: a way to uncover how they learn.
The researchers found that a single statistical formula provides a concise mathematical description of how neural networks, such as GPT-2 (a precursor to ChatGPT), learn the relevant patterns in data, known as features. The formula also explains how the networks use those features to make predictions.
“We aim to comprehend neural networks from their fundamental principles,” explained Daniel Beaglehole, a Ph.D. student at UC San Diego’s Department of Computer Science and Engineering and co-first author of the study. “With our formula, it becomes easier to interpret the features used by the network for predictions.”
The team shared their findings in the March 7th issue of the journal Science.
Why is this significant? AI-driven tools are now ubiquitous in everyday life, from banks using them for loan approvals to hospitals analyzing medical data like X-rays and MRIs. However, understanding how neural networks make decisions and the potential biases in their training data remains challenging.
“Without understanding how neural networks learn, it’s hard to determine their reliability, accuracy, and appropriateness in responses,” noted Mikhail Belkin, the corresponding author of the paper and a professor at the UC San Diego Halicioglu Data Science Institute. “This is especially critical given the rapid growth of machine learning and neural net technology.”
This study is part of a broader initiative in Belkin’s research group to create a mathematical theory explaining how neural networks operate. “Technology has surpassed theory by a significant margin,” he mentioned. “We need to catch up.”
The team also demonstrated that the statistical formula they used to understand how neural networks learn, the Average Gradient Outer Product (AGOP), can improve performance and efficiency in machine learning architectures that do not involve neural networks at all.
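To make the idea concrete, here is a minimal sketch, in Python with PyTorch, of how an AGOP estimate could be computed for a trained model: average, over training inputs, the outer product of the gradient of the model's output with respect to its input. The names `model` and `inputs` are placeholders, and the sketch assumes a differentiable model that maps a single example to a scalar output; it illustrates the quantity described here, not the authors' own code.

    import torch

    def average_gradient_outer_product(model, inputs):
        # Estimate the AGOP: the average, over inputs, of the outer product
        # of the model's input gradient with itself (a d x d matrix).
        grads = []
        for x in inputs:
            x = x.clone().detach().requires_grad_(True)
            y = model(x)          # assumes one example mapped to a scalar output
            y.backward()
            grads.append(x.grad.detach().flatten())
        G = torch.stack(grads)    # shape (n, d): one gradient per input
        return G.T @ G / len(inputs)

The eigenvectors of this matrix with the largest eigenvalues point along the input directions the model is most sensitive to, which is one way to read off the features it has learned.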
“Understanding the mechanisms behind neural networks should enable the development of simpler, more efficient, and more interpretable machine learning models,” Belkin added. “We hope this will promote the democratization of AI.”
Belkin envisions machine learning systems that require less computational power, making them more energy-efficient and easier to comprehend.
Illustrating the new findings with an example
Neural networks are computational tools for learning relationships between characteristics of data and the outcomes of interest, such as identifying objects or faces in images. For instance, one task may be to determine whether the person in an image is wearing glasses. By training on many labeled images, the network learns the relationship between images and labels and focuses on the features relevant to making that determination. Understanding which features a network relies on helps demystify the black-box nature of AI systems.
Feature learning involves recognizing relevant patterns in data to make predictions. In the glasses example, the network focuses on features like the upper part of the face where glasses typically sit. The researchers in the Science paper identified a statistical formula explaining how neural networks learn these features.
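As a hypothetical illustration of the glasses task, the diagonal of the AGOP matrix could be read as a per-pixel sensitivity map. The sketch below assumes flattened grayscale face images, a trained classifier, and the `average_gradient_outer_product` helper sketched earlier; the names and image size are illustrative, not taken from the study.

    import torch

    # Hypothetical setup: `model` is a trained glasses/no-glasses classifier
    # mapping a flattened 64x64 face image (a 4096-dimensional tensor) to a
    # scalar logit, and `images` is a list of such tensors.
    def feature_importance_map(model, images, height=64, width=64):
        M = average_gradient_outer_product(model, images)  # helper sketched above
        importance = torch.diag(M).sqrt()                  # per-pixel sensitivity
        return importance.reshape(height, width)

On a classifier that has learned the task well, high values would be expected around the upper part of the face, matching the intuition that the network keys on the region where glasses sit.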
The researchers also demonstrated that integrating this formula into machine learning systems that do not rely on neural networks made those systems learn more efficiently.
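The article does not spell out how the formula is plugged into non-neural models, but a rough, single-step sketch of the idea might look like the following: fit a simple predictor (for example, a kernel regression, assumed here to be implemented in PyTorch so it is differentiable), compute its AGOP to see which input directions it relies on, reweight the inputs accordingly, and refit. Here `fit_predictor` is a placeholder, and iterating these steps would be the natural extension; this is only meant to convey the flavor of the approach, not the paper's exact recipe.

    import torch

    def refit_with_learned_features(fit_predictor, X, y):
        # One round of feature learning grafted onto a non-neural predictor.
        f0 = fit_predictor(X, y)                          # initial fit on raw inputs
        M = average_gradient_outer_product(f0, list(X))   # directions f0 relies on
        evals, evecs = torch.linalg.eigh(M)               # M is symmetric and PSD
        M_sqrt = evecs @ torch.diag(evals.clamp(min=0).sqrt()) @ evecs.T
        X_reweighted = X @ M_sqrt                         # emphasize learned directions
        return fit_predictor(X_reweighted, y)             # refit on reweighted inputs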
“Machines, like humans, learn to ignore unnecessary information. Large language models implement this ‘selective attention,’ and our study sheds light on how neural networks accomplish it,” Belkin explained.
The study was supported by the National Science Foundation and the Simons Foundation. Belkin is part of TILOS, an NSF-funded institute based at UC San Diego.