Google DeepMind Introduces Zipper: A Multi-Tower Decoder Architecture for Fusing Modalities

June 4, 2024
in AI Technology


Integrating multiple generative foundation models combines the strengths of models trained on different modalities, such as text, speech, and images, allowing a single system to perform cross-modal tasks and to generate outputs across several modalities at once while leveraging the specific capabilities of each model. The two key challenges in such integration are the limited availability of aligned data across modalities and the effective use of unimodal representations in cross-domain generative tasks without compromising their original capabilities.

Google DeepMind researchers introduced Zipper to address the challenge of integrating multiple generative foundation models trained on different modalities into a unified framework beyond simple concatenation. Current approaches to multimodal generative models often rely on pre-training models with vocabulary expansion techniques or fine-tuning them on aligned multimodal data. However, these methods have drawbacks, including inflexibility in adding new modalities post-pre-training and the necessity for large quantities of aligned cross-modal data, especially when dealing with novel modalities. The proposed Zipper architecture, in contrast, offers a novel solution by leveraging independently pre-trained unimodal decoders and composing them using cross-attention mechanisms. This approach allows for the flexible reuse and re-purposing of pre-trained decoders while preserving unimodal performance.

The Zipper architecture consists of multiple autoregressive decoder towers, each independently pre-trained on a single modality with next-token prediction. The decoders are then combined using gated cross-attention layers, which exchange information between modalities at regular intervals. Projection layers inserted during cross-attention reconcile differences in embedding dimension and transform representations from one modality into the other. During inference, the model generates output in the specified sequence of modalities until completion.
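To make the fusion mechanism concrete, below is a minimal PyTorch sketch of a single gated cross-attention point between two decoder towers, including the projection layer that reconciles their embedding dimensions. The class name, dimensions, and zero-initialized tanh gate are illustrative assumptions for this sketch, not DeepMind's released implementation.

```python
# Illustrative sketch only; names and details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class GatedCrossAttention(nn.Module):
    """Lets one decoder tower attend to hidden states from the other tower."""

    def __init__(self, d_model: int, d_other: int, n_heads: int = 8):
        super().__init__()
        # Projection layer equalizes embedding sizes between the two towers.
        self.proj = nn.Linear(d_other, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Learnable gate, initialized to zero so the pre-trained tower's
        # behavior is unchanged when fusion training begins.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # x:     (batch, len_x, d_model)  hidden states of this tower
        # other: (batch, len_o, d_other)  hidden states of the other tower
        kv = self.proj(other)                          # map to this tower's width
        attn_out, _ = self.attn(self.norm(x), kv, kv)  # cross-attend to other modality
        return x + torch.tanh(self.gate) * attn_out    # gated residual update
```

In the paper's design, such layers are interleaved at regular intervals with the ordinary self-attention blocks of each tower; the sketch shows only one fusion point.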

To evaluate the proposed model, the researchers used variants of PaLM2 models as the text backbone and a similar architecture, pre-trained from scratch on the LibriLight dataset, as the speech backbone. Zipper's performance remaining competitive with the baseline indicates that freezing the text backbone does not significantly impact automatic speech recognition (ASR) performance. For text-to-speech (TTS), Zipper significantly outperforms the baseline, particularly when the speech backbone is unfrozen. These experiments highlight Zipper's ability to preserve unimodal capabilities while achieving stronger cross-modal alignment through cross-attention. Zipper also achieved meaningful results with just 1% of the original aligned training data, demonstrating strong performance with significantly less aligned data.
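As a rough illustration of the frozen-backbone setting described above, the sketch below freezes a stand-in text tower and updates only the speech tower and the newly added fusion layer. The module choices and hyperparameters are hypothetical, and GatedCrossAttention refers to the sketch shown earlier.

```python
# Illustrative only; stand-in modules, not the actual PaLM2/LibriLight backbones.
import torch
import torch.nn as nn

text_tower = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=4
)
speech_tower = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=4
)
fusion = GatedCrossAttention(d_model=512, d_other=512)  # from the earlier sketch

# Freeze the text backbone so its unimodal capabilities are preserved (ASR setting).
for p in text_tower.parameters():
    p.requires_grad = False

# Only the speech tower and the new fusion/projection layers receive gradients;
# for TTS, the paper reports better results when the speech backbone is unfrozen.
trainable = list(speech_tower.parameters()) + list(fusion.parameters())
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```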

In conclusion, the Zipper architecture offers a flexible and scalable solution for integrating independently pre-trained unimodal decoders. Zipper uses cross-attention mechanisms to make modality composition work well even without extensive aligned data. It also keeps unimodal performance high while getting competitive results in cross-modal tasks. This approach could advance multimodal generative modeling across various domains and pave the way for future research combining more modalities.

Check out the Paper. All credit for this research goes to the researchers of this project.


Pragati Jhunjhunwala is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a tech enthusiast and has a keen interest in the scope of software and data science applications. She is always reading about developments in different fields of AI and ML.


Tags: Architecture, Decoder, DeepMind, Fusing, Google, Introduces, Modalities, Multi-Tower, Zipper