Meta AI Introduces CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution

January 13, 2024
in AI Technology


There has been a significant surge in the integration of language models (LMs) into mainstream software engineering and programming applications. Large language models (LLMs), including recent models such as Code Llama, GPT-3.5, and GPT-4 (OpenAI, 2023), have demonstrated notable effectiveness across a range of code-related tasks.

These tasks span code completion, program repair, debugging, test case generation, and code optimization. Code language models are commonly evaluated with benchmarks such as HumanEval and MBPP, which test their ability to generate code snippets from natural language. While these benchmarks cover basic code generation, few benchmarks assess other crucial dimensions, such as code understanding and execution.

Motivated by this gap, this paper by Meta AI introduces a novel benchmark named CRUXEval (Code Reasoning, Understanding, and eXecution Evaluation), featuring two tasks: (1) CRUXEval-O, which gauges code execution by asking models to predict a program's output, and (2) CRUXEval-I, which evaluates code reasoning and understanding by asking models to predict a consistent input.

CRUXEval focuses on assessing code language models’ competence in understanding the execution behavior of simple Python programs. While these models are not intended to replace interpreters on complex problems, CRUXEval keeps its programs simple (at most 13 lines, no complex arithmetic), so that a university-level CS graduate can solve them without excessive memory requirements.
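As a hypothetical illustration (not an actual benchmark item), a CRUXEval-style problem pairs a short Python function with an input-output assertion; the output-prediction task shows the input and asks for the output, while the input-prediction task shows the output and asks for a consistent input:

```python
def f(s):
    # Reverse each word in the string, keeping word order.
    return " ".join(w[::-1] for w in s.split())

# Output prediction (CRUXEval-O style): given f and the input,
# the model must complete the right-hand side of the assertion.
assert f("hello world") == "olleh dlrow"

# Input prediction (CRUXEval-I style): given f and the output,
# the model must supply an input that makes the assertion hold.
assert f("ab cd") == "ba dc"
```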

At a broad level, the construction of their benchmark involves several key steps. 

Initially, they employ Code Llama 34B to generate an extensive set of functions and corresponding inputs. The resulting outputs are derived by executing these functions on the provided inputs. 

They then filter this set, keeping only short problems with minimal computation and memory requirements: problems that a proficient human programmer should be able to solve within a minute without extra scratch memory.

Finally, they randomly select 800 samples that pass the filtering criteria, keeping the benchmark compact enough to run easily while large enough to detect performance differences across models. This methodology was chosen because, although it is difficult to manually craft examples on which robust models like GPT-4 fail completely, these powerful models are frequently observed to fail on random yet reasonable programs.
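The filtering step above can be sketched as follows. This is a minimal, illustrative filter only; the paper's actual criteria and tooling are not reproduced here, and the 13-line cap and one-second budget are assumptions taken from the simplicity criteria described in this article:

```python
import time

MAX_LINES = 13      # simplicity cap mentioned in the article
TIME_LIMIT = 1.0    # hypothetical per-problem time budget, in seconds

def passes_filter(src: str, fn_name: str, test_input) -> bool:
    """Keep a candidate only if it is short, runs quickly, and does not crash."""
    if len(src.strip().splitlines()) > MAX_LINES:
        return False
    namespace = {}
    try:
        exec(src, namespace)                 # define the candidate function
        start = time.perf_counter()
        namespace[fn_name](test_input)       # execute it on the sampled input
        return time.perf_counter() - start < TIME_LIMIT
    except Exception:
        return False                         # crashing candidates are dropped

# A two-line function on a trivial input passes the filter.
print(passes_filter("def f(x):\n    return x + 1", "f", 41))  # → True
```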

The researchers evaluated a selection of models on CRUXEval, including StarCoder, WizardCoder, and Code Llama. They found that the best setup, GPT-4 with chain-of-thought (CoT) prompting, achieves a pass@1 of 75% and 81% on input and output prediction, respectively. In contrast, Code Llama 34B achieves a pass@1 of 50% and 46%, highlighting the gap between open- and closed-source models. After fine-tuning on samples very similar to those in the benchmark, Code Llama 34B could match the performance of GPT-4 on both input and output prediction.
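The pass@1 figures above are presumably computed with the standard unbiased pass@k estimator popularized by the HumanEval paper (an assumption; the article does not spell out the metric): with n samples per problem of which c are correct, pass@k = 1 − C(n−c, k)/C(n, k), averaged over problems.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0          # too few failures to fill k draws: success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 5 correct, pass@1 reduces to c/n = 0.5.
print(pass_at_k(10, 5, 1))  # → 0.5
```

For k = 1 the estimator is simply the fraction of correct samples, which is why pass@1 can be read directly as per-sample accuracy.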

The fact that models like Phi, WizardCoder, and Phind outperform Code Llama on HumanEval but not on CRUXEval underscores the need for a deeper investigation into the effectiveness of fine-tuning on data from more powerful models. Whether fine-tuning on execution information can also enhance code generation abilities remains an intriguing open question. As a prospect for future research, this benchmark provides a solid starting point for exploring the code reasoning capabilities of language models.

Check out the Paper. All credit for this research goes to the researchers of this project.

