Researchers from the National University of Singapore and Alibaba Propose InfoBatch: A Novel Artificial Intelligence Framework Aiming to Achieve Lossless Training Acceleration by Unbiased Dynamic Data Pruning

January 20, 2024

The tension between training efficiency and performance has become increasingly pronounced in computer vision. Traditional training pipelines rely on expansive datasets and place a substantial burden on computational resources, creating a barrier for researchers without access to high-powered computing infrastructure. The problem is compounded by the fact that many existing solutions, while reducing the number of training samples, introduce overheads of their own or fail to preserve the model's original performance, negating their intended benefit.

Central to this challenge is optimizing the training of deep learning models: meeting the computational demands of training on extensive datasets without compromising the model's effectiveness. Efficiency and performance must coexist if machine learning applications are to remain practical and accessible.

The existing solutions landscape includes methods like dataset distillation and coreset selection, both aimed at reducing the number of training samples. While these approaches are intuitively appealing, they introduce new complexities. Static pruning methods, for example, which select samples according to a fixed metric before training, incur additional computational cost and often fail to generalize across architectures or datasets. Dynamic data pruning methods, on the other hand, aim to cut training cost by reducing the number of iterations, but they have struggled to deliver lossless results efficiently.

Researchers from the National University of Singapore and Alibaba Group introduced InfoBatch, a framework designed to accelerate training without sacrificing accuracy. InfoBatch distinguishes itself from previous methods through its unbiased, adaptive approach to dynamic data pruning. It maintains a loss-based score for each data sample, updated throughout training; it then randomly prunes a portion of the less informative samples, identified by their low scores, and compensates by scaling up the gradients of the surviving low-score samples. This rescaling keeps the expected gradient close to that of the original, unpruned dataset, thereby preserving the model's performance.
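
The pruning-and-rescaling step is compact enough to sketch in a few lines of PyTorch. The code below is a minimal illustration under stated assumptions, not the authors' released implementation: the class and method names are hypothetical, and the fixed pruning probability follows the paper's description of dropping low-score samples and upweighting the survivors.

```python
import torch

class UnbiasedDynamicPruner:
    """Minimal sketch of InfoBatch-style unbiased dynamic data pruning.

    Each sample carries a loss-based score, refreshed every time it is
    trained on. Samples scoring below the current batch mean are dropped
    with probability `prune_prob`; the survivors among them are upweighted
    by 1 / (1 - prune_prob) so the expected gradient matches full-data
    training. Names and defaults here are illustrative assumptions.
    """

    def __init__(self, num_samples: int, prune_prob: float = 0.5):
        self.scores = torch.ones(num_samples)  # running per-sample loss scores
        self.prune_prob = prune_prob

    def filter_batch(self, indices: torch.Tensor):
        """Return the kept indices and their loss weights for one batch."""
        scores = self.scores[indices]
        low = scores < scores.mean()  # the "less informative" samples
        drop = low & (torch.rand(len(indices)) < self.prune_prob)
        # A low-score sample survives with probability (1 - p) and carries
        # weight 1 / (1 - p), so its expected contribution to the loss --
        # and hence to the gradient -- is unchanged.
        weights = torch.ones(len(indices))
        weights[low] = 1.0 / (1.0 - self.prune_prob)
        keep = ~drop
        return indices[keep], weights[keep]

    def update(self, indices: torch.Tensor, per_sample_loss: torch.Tensor):
        """Refresh scores with the latest observed losses."""
        self.scores[indices] = per_sample_loss.detach().cpu()
```

In a training loop, per-sample losses would be computed with `reduction='none'`, multiplied by the returned weights before averaging, and fed back through `update`. The paper also reverts to full-dataset training for the final epochs, an annealing step that removes residual bias from pruning.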

The framework has demonstrated its capability to significantly reduce computational overhead, cutting the extra cost of pruning itself by at least tenfold compared with previous state-of-the-art methods. This efficiency gain does not come at the cost of performance: InfoBatch consistently achieves lossless training results across various tasks, including classification, semantic segmentation, vision pre-training, and language-model instruction fine-tuning. In practical terms, this translates into substantial savings in computational resources and time. On CIFAR-10/100 and ImageNet-1K, InfoBatch has been shown to save up to 40% of the overall training cost, and it saves 24.8% and 27% respectively when applied to MAE pre-training and diffusion models.

Paper: https://arxiv.org/abs/2303.04947

To summarize, the key takeaways from the InfoBatch research include:

– InfoBatch introduces a novel framework for unbiased dynamic data pruning, setting it apart from traditional static and dynamic pruning methods.
– The framework dramatically reduces the computational overhead, making it practical for real-world applications, especially those with limited computational resources.
– Despite the efficiency improvements, InfoBatch consistently achieves lossless training results across various tasks.
– The framework’s versatility is demonstrated through its effective application in diverse machine learning tasks, from classification to language model instruction fine-tuning.
– InfoBatch’s balance of efficiency and performance can significantly influence the future of machine learning training methodologies.

In conclusion, InfoBatch represents a significant stride forward in machine learning, offering a practical solution to a longstanding challenge: balancing training cost against model performance without giving up either.


Co-author Yang You announced on Twitter on January 16, 2024 that the paper, "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning," had been accepted by ICLR as an oral paper (1.2% acceptance rate).

Hello, my name is Adnan Hassan. I am a consulting intern at Marktechpost and will soon be a management trainee at American Express. I am currently pursuing a dual degree at the Indian Institute of Technology, Kharagpur. I am passionate about technology and want to create new products that make a difference.

