News PouroverAI
Researchers at Google AI Present a Machine Learning-based Approach to Teach Powerful LLMs How to Better Reason with Graph Information

March 19, 2024
in Data Science & ML


Picture everything in your immediate vicinity, from your friends and family to the utensils in your kitchen and the components of your bicycle: every one of them is related to something else in some way. In computer science, the word “graph” describes these relationships between entities. Nodes are the objects in a graph, and edges are the links between them that show how they relate. The structure of the internet itself is a vast graph of interconnected web pages, and the information that search engines rely on is also structured as a graph.

A new Google study aims to train powerful LLMs to reason better with graph information. Graphs are ubiquitous, and while LLMs are usually trained on ordinary text, graphs are often a more effective way to organize information, so the objective is to test several approaches and identify the most effective ones for real-world use. Converting graphs into language that LLMs can comprehend is remarkably intricate; the root of the problem is the complexity of multi-node graph structures with tangled webs of edges connecting them. This research therefore focuses on methods for converting a graph into text that an LLM can comprehend.
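To make the graph-to-text problem concrete, here is a minimal sketch (my illustration, not one of the paper's actual encoders) that serializes a small graph into sentences an LLM could read:

```python
# Minimal graph-to-text serialization: nodes and edges as plain Python data,
# emitted as one natural-language sentence per edge.

def graph_to_text(nodes, edges):
    """Describe an undirected graph as natural-language sentences."""
    lines = [f"The graph has {len(nodes)} nodes: {', '.join(map(str, nodes))}."]
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

print(graph_to_text([0, 1, 2], [(0, 1), (1, 2)]))
```

Even this toy example hints at the design space the study explores: the same graph can be verbalized in many ways, and the choice turns out to matter for LLM performance.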

The researchers first created a benchmark named GraphQA to rigorously determine the optimal method for graph-to-text translation. They do not rely on a single graph type to build an exhaustive and realistic LLM test; rather, they employ a variety of graphs to guarantee a wide range of connectivity, since certain graph types make these problems easier or harder to solve. In this way, GraphQA can reveal biases in an LLM’s analysis of graphs, and the test becomes more representative of the real-world settings LLMs may encounter.

GraphQA is concerned with elementary graph operations, such as verifying the existence of an edge, counting the number of edges or nodes, determining which nodes are connected to a given node, and detecting cycles in a graph. Despite their apparent simplicity, these tasks require familiarity with the relationships between nodes and edges. To teach models how to evaluate graphs efficiently, GraphQA covers a range of tasks, from finding patterns to making new connections. More advanced reasoning on graphs, such as discovering communities or identifying prominent nodes, relies on these foundational operations. In addition, GraphQA generates random graphs with several algorithms, such as the Erdős-Rényi model, scale-free networks via the Barabási-Albert model, and the stochastic block model, and it also produces simpler structures such as paths, complete graphs, and star graphs, offering varied data for training.
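The pipeline above can be approximated in a few lines. This is a rough illustration (not the paper's generator suite): sample an Erdős-Rényi graph with the standard library, then pose the elementary questions GraphQA asks about it:

```python
# Sample a random graph and answer GraphQA-style elementary questions about it.
import random

def erdos_renyi(n, p, seed=0):
    """Each of the n*(n-1)/2 possible undirected edges appears with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def has_cycle(n, edges):
    """Union-find: an edge joining two already-connected nodes closes a cycle."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

edges = erdos_renyi(8, 0.3)
print("edge count:", len(edges))
print("degree of node 0:", sum(1 for e in edges if 0 in e))
print("edge (0,1) exists:", (0, 1) in edges)
print("has cycle:", has_cycle(8, edges))
```

Each printed question maps directly onto one of the benchmark's elementary tasks: edge counting, node degree, edge existence, and cycle detection.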

The team investigated various approaches to converting graphs into text that LLMs can process. They conducted three key experiments: one evaluating LLM performance on graph tasks, and two examining the effects of LLM size and of graph shape on performance. All experiments are run on GraphQA.

They evaluated the performance of pre-trained LLMs on graph tasks such as cycle detection, node degree estimation, and connection identification. The findings showed that a great deal depends on the encoding: there is a strong relationship between how a graph is represented as text and LLM performance. Overall, the “incident” encoding performed exceptionally well across the board.

The team conducted this experiment to determine whether LLM performance improves with increasing LLM size (parameter count). To test this, they ran the same graph tasks on four different PaLM 2 sizes: XXS, XS, S, and L. The findings are summarized below:

When it came to graph reasoning tasks, larger models often performed better. The additional parameters seemed to allow them to learn more intricate patterns.

Interestingly, the “edge existence” task, which involves determining whether two nodes in a graph are connected, was less affected by model size.

When it came to the cycle-check problem, determining whether a graph contains a cycle, not even the largest LLM could reliably outperform a basic baseline solution. This shows that scale alone does not guarantee success on every graph task.

The researchers also explored whether an LLM’s problem-solving ability on a given graph is affected by its “shape,” that is, how its nodes are connected. The study shows that graph structure significantly affects LLM performance: in a task testing for the existence of cycles, LLMs performed admirably on graphs with many closely linked edges (where cycles are abundant) but poorly on path graphs (where cycles never occur). Providing a few varied examples helped the models adjust; for cycle checks, for instance, the researchers included both cycle-containing and cycle-free instances as few-shot examples in the prompt.
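The few-shot setup described above can be sketched as follows. This is an illustrative prompt builder with one cycle-containing and one cycle-free example; the wording is hypothetical, not the paper's actual prompt:

```python
# Assemble a few-shot cycle-check prompt: two labeled examples, then the query.

def cycle_prompt(question_graph):
    """Build a few-shot prompt for the cycle-existence task."""
    examples = [
        ("Edges: (0,1), (1,2), (2,0).", "Yes, the graph has a cycle."),
        ("Edges: (0,1), (1,2), (2,3).", "No, the graph has no cycle."),
    ]
    parts = []
    for graph_text, answer in examples:
        parts.append(f"Q: {graph_text} Does this graph contain a cycle?\nA: {answer}")
    parts.append(f"Q: {question_graph} Does this graph contain a cycle?\nA:")
    return "\n\n".join(parts)

print(cycle_prompt("Edges: (0,1), (1,3), (3,0), (2,4)."))
```

Pairing a positive and a negative example is the key point: it gives the model both classes of answer before it sees the query graph.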

Findings from this research shed light on best practices for preparing graphs for LLMs. With the right encoding methods, an LLM’s accuracy on graph problems can improve by a factor of five to over sixty. The researchers hope their new benchmark, GraphQA, will encourage further study in this field.

Check out the Paper and Blog. All credit for this research goes to the researchers of this project.


Dhanshree Shenwai is a computer science engineer with solid experience at FinTech companies covering the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone’s life easier.
