News PouroverAI
How do neural networks learn? A mathematical formula explains how they detect relevant patterns

March 12, 2024
in AI Technology


Neural networks have been driving advances in artificial intelligence, powering the large language models used in fields such as finance, human resources, and healthcare. Yet these networks remain a mystery to the engineers and scientists trying to grasp their inner workings. A team of data and computer scientists at the University of California San Diego has now developed what amounts to an X-ray for neural networks: a way to peer inside and uncover how they learn.

The researchers discovered that a statistical formula provides a concise mathematical explanation of how neural networks, such as GPT-2, a predecessor of ChatGPT, learn the important data patterns known as features. The formula also clarifies how these networks use those patterns to make predictions.

“We aim to comprehend neural networks from their fundamental principles,” explained Daniel Beaglehole, a Ph.D. student at UC San Diego’s Department of Computer Science and Engineering and co-first author of the study. “With our formula, it becomes easier to interpret the features used by the network for predictions.”

The team shared their findings in the March 7th issue of the journal Science.

Why is this significant? AI-driven tools are now ubiquitous in everyday life, from banks using them for loan approvals to hospitals analyzing medical data like X-rays and MRIs. However, understanding how neural networks make decisions and the potential biases in their training data remains challenging.

“Without understanding how neural networks learn, it’s hard to determine their reliability, accuracy, and appropriateness in responses,” noted Mikhail Belkin, the corresponding author of the paper and a professor at the UC San Diego Halicioglu Data Science Institute. “This is especially critical given the rapid growth of machine learning and neural net technology.”

This study is part of a broader initiative in Belkin’s research group to create a mathematical theory explaining how neural networks operate. “Technology has surpassed theory by a significant margin,” he mentioned. “We need to catch up.”

The team also demonstrated that the statistical formula they employed to grasp neural network learning, known as Average Gradient Outer Product (AGOP), could enhance performance and efficiency in other machine learning architectures without neural networks.
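The AGOP itself is simple to state: average, over the inputs, the outer product of the model's input gradient with itself. The diagonal of the resulting matrix then ranks how strongly each input feature influences the prediction. A minimal numerical sketch of this idea (the toy model and all names below are illustrative, not taken from the paper):

```python
import numpy as np

def agop(grad_fn, X):
    """Average Gradient Outer Product: (1/n) * sum_i grad f(x_i) grad f(x_i)^T."""
    n, d = X.shape
    M = np.zeros((d, d))
    for x in X:
        g = grad_fn(x)  # gradient of the model output w.r.t. the input
        M += np.outer(g, g)
    return M / n

# Toy model: f(x) = sin(x[0]) + 0.1 * x[1], so only the first two of
# five input features matter; its input gradient is known in closed form.
def grad_f(x):
    g = np.zeros_like(x)
    g[0] = np.cos(x[0])
    g[1] = 0.1
    return g

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
M = agop(grad_f, X)
# The diagonal of M ranks feature relevance: the entries for features
# 2-4 are exactly zero because f ignores those inputs.
```

For a real network the closed-form gradient would be replaced by automatic differentiation, but the averaging step is the same.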

“Understanding the mechanisms behind neural networks should enable the development of simpler, more efficient, and more interpretable machine learning models,” Belkin added. “We hope this will promote the democratization of AI.”

Belkin envisions machine learning systems that require less computational power, making them more energy-efficient and easier to comprehend.

Illustrating the new findings with an example

Neural networks are computational tools that learn relationships between data characteristics, such as identifying objects or faces in images. For instance, one task may involve determining if a person in an image is wearing glasses. By providing the network with many labeled training images, it learns the relationship between images and labels, focusing on relevant features to make determinations. Understanding these features can demystify the black box nature of AI systems.

Feature learning involves recognizing relevant patterns in data to make predictions. In the glasses example, the network focuses on features like the upper part of the face where glasses typically sit. The researchers in the Science paper identified a statistical formula explaining how neural networks learn these features.

Exploring alternative neural network architectures, the researchers demonstrated that integrating this formula into non-neural network systems improved learning efficiency.
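One concrete way to picture that integration (a hypothetical sketch, not the authors' reference implementation): a kernel ridge regressor can be given feature learning by alternating two steps, fitting the predictor, then replacing the kernel's distance metric with the AGOP of the current predictor. The bandwidth, regularization, and function names below are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Z, M, bw):
    # Mahalanobis RBF kernel: k(x, z) = exp(-(x - z)^T M (x - z) / bw)
    XM = X @ M
    d2 = (XM * X).sum(1)[:, None] + ((Z @ M) * Z).sum(1)[None, :] - 2 * XM @ Z.T
    return np.exp(-np.maximum(d2, 0.0) / bw)

def rfm(X, y, n_iter=3, reg=1e-3, bw=20.0):
    """Kernel ridge regression that re-learns its feature metric M from
    the AGOP of its own predictor (an illustrative sketch)."""
    n, d = X.shape
    M = np.eye(d)
    for _ in range(n_iter):
        K = rbf_kernel(X, X, M, bw)
        alpha = np.linalg.solve(K + reg * np.eye(n), y)
        # Exact gradient of the kernel predictor at each training point.
        G = np.zeros((n, d))
        for i in range(n):
            diff = X - X[i]                      # x_j - x_i, shape (n, d)
            k = rbf_kernel(X, X[i:i + 1], M, bw)[:, 0]
            G[i] = (2.0 / bw) * ((alpha * k)[:, None] * diff).sum(0) @ M
        # The AGOP of the current predictor becomes the next metric.
        M = G.T @ G / n
        M /= np.trace(M) + 1e-12                 # keep the scale stable
    return M, alpha

# The target depends only on the first of ten input features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = np.sin(2 * X[:, 0])
M, _ = rfm(X, y)
# The learned metric concentrates its mass on feature 0.
```

Because the metric shrinks along irrelevant directions, the kernel machine ends up attending only to the features that matter, which is the behavior the study attributes to neural networks.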

“Machines, like humans, learn to ignore unnecessary information. Large language models implement this kind of selective attention, and our study sheds light on how neural networks accomplish it,” Belkin explained.

The study was supported by the National Science Foundation and the Simons Foundation, with Belkin being part of NSF and UC San Diego’s TILOS project.



Copyright © 2023 PouroverAI News.