Researchers from ISTA Austria and Neural Magic Introduce QMoE: A Revolutionary Compression Framework for Efficient Execution of Trillion-Parameter Language Models

October 31, 2023
in AI Technology


A Mixture of Experts (MoE) is a neural network architecture that combines the outputs of multiple expert subnetworks to make predictions or decisions. This architecture is particularly useful for complex and diverse data, where different subsets or aspects of the input may be handled best by specialized models. MoE models are often more robust to outliers or noise in the data because the gating mechanism can learn to discount experts that perform poorly on certain inputs.
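The routing idea can be sketched in a few lines. Below is a minimal, illustrative top-1 mixture-of-experts forward pass in NumPy; the expert and gate shapes are assumptions chosen for the example, not the actual configuration of any model discussed here.

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=1):
    """Route each input row to its top-k experts and mix their outputs.

    x: (batch, d) inputs; experts: list of (d, d) matrices standing in
    for expert subnetworks; gate_w: (d, n_experts) router weights.
    """
    logits = x @ gate_w                            # routing scores per expert
    top = np.argsort(-logits, axis=1)[:, :top_k]   # chosen expert indices
    out = np.zeros_like(x)
    for b in range(x.shape[0]):
        sel = top[b]
        w = np.exp(logits[b, sel] - logits[b, sel].max())
        w /= w.sum()                               # softmax over selected experts
        for weight, e in zip(w, sel):
            out[b] += weight * (x[b] @ experts[e])
    return out, top

rng = np.random.default_rng(0)
d, n_experts = 4, 3
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal((2, d))
y, assignment = moe_forward(x, experts, gate_w)
```

Note that each token only runs through its selected expert(s), which is why MoE layers can grow the parameter count far faster than the per-token compute cost.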

The computational cost of an MoE architecture varies significantly with the model's specific design, the complexity of the task it addresses, and the hardware used for training and inference. MoE architectures can be computationally more expensive than traditional dense networks, especially when they involve many experts and complex gating mechanisms. For example, the Switch Transformer-c2048 model has 1.6 trillion parameters, which require 3.2 TB of accelerator memory to run efficiently, making deployment challenging and expensive.
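The memory figure follows directly from the parameter count: at 16-bit (2-byte) precision, 1.6 trillion parameters occupy 3.2 TB before counting any activations or overhead:

```python
params = 1.6e12          # Switch Transformer-c2048 parameter count
bytes_per_param = 2      # 16-bit (e.g. bfloat16) weight storage
total_tb = params * bytes_per_param / 1e12
print(total_tb)          # 3.2 (TB)
```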

Researchers present a solution to this memory problem in a new framework called QMoE: a scalable algorithm that accurately compresses trillion-parameter MoEs to less than 1 bit per parameter. QMoE can compress the 1.6 trillion parameters of the SwitchTransformer-c2048 model to less than 160 GB, and the compression can be processed in less than a day on a single GPU. This is the first time accurate sub-1-bit compression of trillion-parameter MoEs has been shown to be feasible, achieved via affordable, retraining-free compression techniques.
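The claimed sizes imply the bit rate directly: 160 GB for 1.6 trillion parameters works out to 0.8 bits per parameter, a 20x reduction from 16-bit storage:

```python
params = 1.6e12
compressed_bytes = 160e9                  # reported compressed size (160 GB)
bits_per_param = compressed_bytes * 8 / params
ratio_vs_16bit = 16 / bits_per_param
print(bits_per_param, ratio_vs_16bit)     # 0.8 bits/param, 20x vs 16-bit
```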

Sparsity in MoE models is typically achieved by creating copies of certain model components, each responsible for processing only a subset of all input tokens; a router layer generally decides the corresponding input-to-component assignments. Quantization, the method currently used for reducing model size, stores model weights at lower numerical precision. However, some MoEs are so large that reduction rates significantly higher than four times would be required to render them practical, and quantizing models to such extremely low precision requires more sophisticated, data-dependent methods.
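To see why simple rounding is not enough at very low bit widths, here is an illustrative data-independent round-to-nearest quantizer applied to a random stand-in weight tensor; the mean reconstruction error grows sharply as the bit width shrinks, which is what motivates data-dependent methods:

```python
import numpy as np

def round_to_nearest(w, bits):
    """Uniform, data-independent quantization: snap each weight to the
    nearest of 2**bits evenly spaced levels over the tensor's range."""
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    q = np.round((w - lo) / scale)
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)          # stand-in weight tensor
errs = {bits: float(np.abs(w - round_to_nearest(w, bits)).mean())
        for bits in (8, 4, 2)}
```

At 8 bits the error is negligible, but at 2 bits and below the levels are so coarse that simple rounding destroys accuracy, so data-dependent schemes are needed.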

Instead of training a neural network with full-precision (32-bit or 16-bit) weights and activations, data-dependent quantization methods train the model with quantized weights and activations, which helps the model learn to adapt to the limitations of lower-precision numerical representations. Popular frameworks and tools for data-dependent quantization include TensorFlow, PyTorch, and TensorRT, which provide built-in support for quantization-aware training and calibration.
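A minimal sketch of the quantization-aware training idea, on an assumed toy one-parameter model: the forward pass uses "fake-quantized" weights, while gradient updates are applied to a full-precision master copy (the straight-through estimator). Real quantization-aware training relies on the framework support mentioned above rather than a hand-rolled loop like this.

```python
import numpy as np

def fake_quant(w, bits, lo=-4.0, hi=4.0):
    """Snap w to the nearest of 2**bits evenly spaced levels in [lo, hi]."""
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.clip(np.round((w - lo) / scale), 0, levels)
    return q * scale + lo

# Toy model y = w * x with true w = 3.0, trained under 4-bit fake quantization.
rng = np.random.default_rng(2)
x = rng.standard_normal(256)
y = 3.0 * x
w, lr = 0.0, 0.1
for _ in range(200):
    wq = fake_quant(w, bits=4)             # forward pass uses quantized weight
    grad = 2 * np.mean((wq * x - y) * x)   # loss gradient w.r.t. the weight
    w -= lr * grad                         # update full-precision master copy
```

Because the model sees the quantized weight during training, it settles on a solution that remains accurate after the rounding step, which is the point of quantization-aware training.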

So far, the researchers have focused on the direct compression of the pretrained base model, and have implemented the decoding operations and encoded matrices with reasonable efficiency. In the future, their work will include finetuning a compressed model for specialized downstream tasks.

Check out the Paper and GitHub. All credit for this research goes to the researchers on this project.

Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive advances in technology, and he is passionate about understanding nature with the help of tools like mathematical models, ML models, and AI.

