Friday, May 16, 2025
News PouroverAI
Gradformer: A Machine Learning Method that Integrates Graph Transformers (GTs) with the Intrinsic Inductive Bias by Applying an Exponential Decay Mask to the Attention Matrix

April 30, 2024
in AI Technology


Graph Transformers (GTs) have achieved state-of-the-art performance across a variety of tasks. Unlike the local message passing in graph neural networks (GNNs), GTs can capture long-range information from nodes that are far apart. The self-attention mechanism in GTs permits each node to attend directly to every other node in the graph, letting it collect information from arbitrary nodes, and it gives the model considerable flexibility and capacity to aggregate information globally and adaptively.
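To make the contrast with local message passing concrete, here is a minimal sketch (not the paper's implementation) of dense self-attention over node features, where every node attends to every other node regardless of graph distance. The weight matrices and dimensions are illustrative assumptions.

```python
import numpy as np

def graph_self_attention(X, Wq, Wk, Wv):
    """Dense self-attention over node features: each node attends
    directly to every other node, regardless of graph distance."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (n, n) pairwise scores
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    return attn @ V                                  # aggregate from all nodes

rng = np.random.default_rng(0)
n, d = 5, 8                                          # toy graph: 5 nodes, 8 features
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = graph_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Note that, unlike a GNN message-passing layer, nothing in this computation consults the edge set: that structural blindness is exactly the gap Gradformer targets.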

Despite these advantages across a wide variety of tasks, the self-attention mechanism in GTs pays little attention to the special features of graphs, such as structure-related biases. Although some methods leverage positional encodings and attention biases to model these inductive biases, they remain ineffective at overcoming the problem. Moreover, self-attention does not take full advantage of the intrinsic structural biases in graphs, which creates critical challenges in capturing essential graph-structural information. Neglecting structural correlation can lead the mechanism to focus equally on every node, producing inadequate attention to key information and the aggregation of redundant information.

Researchers from Wuhan University (China), JD Explore Academy (China), the University of Melbourne, and Griffith University (Brisbane) proposed Gradformer, a novel method that integrates GTs with an intrinsic inductive bias. Gradformer introduces an exponential decay mask into the GT self-attention architecture: the mask is multiplied with the attention scores, so each node's attention weight toward another node shrinks with their distance in the graph. This gradual reduction in attention weights lets the decay mask effectively guide the learning process within the self-attention framework.
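The idea can be sketched as follows. This is an illustrative reading, not the paper's code: the exact placement of the mask and the learnable per-head decay rate may differ; here a fixed `gamma` and a shortest-path-distance matrix `spd` are assumed.

```python
import numpy as np

def gradformer_attention(scores, spd, gamma=0.5):
    """Decay-masked attention sketch: multiply attention by gamma**d,
    where d is the shortest-path distance between node pairs, then
    renormalize so each row is still a distribution.
    scores: (n, n) raw attention scores; spd: (n, n) hop distances."""
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # standard softmax attention
    attn = attn * gamma ** spd                       # exponential decay with distance
    return attn / attn.sum(axis=1, keepdims=True)    # renormalize rows

# Toy example: path graph 0-1-2-3, with hop counts as distances
spd = np.array([[0, 1, 2, 3],
                [1, 0, 1, 2],
                [2, 1, 0, 1],
                [3, 2, 1, 0]], dtype=float)
scores = np.zeros((4, 4))                            # uniform raw scores
A = gradformer_attention(scores, spd, gamma=0.5)
print(np.round(A[0], 3))                             # nearby nodes dominate
```

With uniform scores, node 0's attention over nodes at distances 0..3 is proportional to 1, 0.5, 0.25, 0.125, so nearby nodes receive most of the weight while distant nodes still contribute, which is the inductive bias Gradformer bakes in.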

Gradformer achieves state-of-the-art results on five datasets, highlighting the effectiveness of the proposed method. On small datasets such as NCI1 and PROTEINS, it outperforms all 14 baseline methods, with improvements of 2.13% and 2.28%, respectively. This shows that Gradformer effectively incorporates inductive biases into the GT model, which matters most when available data is limited. It also performs well on larger datasets such as ZINC, showing that it applies to datasets of different sizes.

The researchers also performed an efficiency analysis of Gradformer, comparing its training cost with important baselines such as SAN, Graphormer, and GraphGPS, focusing on GPU memory usage and runtime. The comparison showed that Gradformer strikes a strong balance between efficiency and accuracy, outperforming SAN and GraphGPS in both computational efficiency and accuracy. Although it has a longer runtime than Graphormer, it surpasses Graphormer in accuracy, demonstrating good performance at a reasonable resource cost.

In conclusion, the researchers proposed Gradformer, a novel integration of GTs with intrinsic inductive biases, achieved by applying an exponential decay mask with learnable parameters to the attention matrix. Gradformer outperforms 14 GT and GNN baselines, with improvements of 2.13% and 2.28% on NCI1 and PROTEINS, respectively. It also excels in its capacity to maintain, or even exceed, the accuracy of shallow models while incorporating deeper network architectures. Future work on Gradformer includes (a) exploring the feasibility of achieving a state-of-the-art architecture without using an MPNN and (b) investigating whether the decay-mask operation can improve GT efficiency.

Check out the Paper. All credit for this research goes to the researchers of this project.


Sajjad Ansari is a final year undergraduate from IIT Kharagpur. As a Tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.


Tags: Applying, Attention, Bias, Decay, exponential, Gradformer, graph, GTs, Inductive, Integrates, Intrinsic, Learning, Machine, mask, Matrix, Method, Transformers
Copyright © 2023 PouroverAI News.