Researchers at the University of Glasgow Propose Shallow Cross-Encoders as an AI-based Solution for Low-Latency Information Retrieval

April 3, 2024 | Data Science & ML

In our rapidly evolving digital world, we expect our queries, whether for information, products, or services, to be answered quickly and accurately. Delivering both speed and precision, however, is a formidable challenge for modern search engines.

Traditional retrieval models face a fundamental trade-off: the more accurate they are, the higher the computational cost and latency. This latency can be a deal-breaker, negatively impacting user satisfaction, revenue, and energy efficiency. Researchers have been grappling with this conundrum, seeking ways to deliver both effectiveness and efficiency in a single package.

In a new study, a team of researchers from the University of Glasgow presents a solution that harnesses smaller, more efficient transformer models to achieve fast retrieval without sacrificing accuracy: shallow cross-encoders, an approach that could meaningfully improve the search experience.

Shallow cross-encoders are built on transformer models with fewer layers and lower computational requirements. Unlike larger counterparts such as BERT or T5, these small models can estimate the relevance of more documents within the same time budget, potentially yielding better overall effectiveness in low-latency scenarios.
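
To make this concrete, here is a minimal sketch of a shallow cross-encoder in PyTorch. It assumes the Hugging Face transformers library and uses the public prajjwal1/bert-tiny checkpoint (2 layers) as a stand-in for the paper's TinyBERT backbone; the paper's exact architecture and scoring head may differ.

```python
# Minimal shallow cross-encoder sketch (illustrative, not the paper's exact code).
# Assumes: pip install torch transformers. "prajjwal1/bert-tiny" is a public
# 2-layer BERT checkpoint, used here as a stand-in for the TinyBERT backbone.
import torch
from transformers import AutoModel, AutoTokenizer

class ShallowCrossEncoder(torch.nn.Module):
    def __init__(self, backbone: str = "prajjwal1/bert-tiny"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        # A single linear head maps the [CLS] embedding to a relevance score.
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] embedding for each pair
        return self.head(cls).squeeze(-1)   # one scalar score per pair

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = ShallowCrossEncoder().eval()

# Query and document are tokenized together: that joint encoding is what
# makes this a cross-encoder rather than a bi-encoder.
batch = tokenizer(["low latency retrieval"],
                  ["Shallow cross-encoders score query-document pairs quickly."],
                  truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    score = model(batch["input_ids"], batch["attention_mask"])
```

Because the query and document are encoded jointly, the model observes their token-level interactions directly; the cost is one forward pass per candidate pair, which is precisely why a shallow backbone pays off at scoring time.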

Training these smaller models effectively is no easy feat, however: conventional techniques often produce overconfident, unstable models, hampering performance. To overcome this challenge, the researchers turned to a training scheme called gBCE (generalized Binary Cross-Entropy), which mitigates the overconfidence problem and ensures stable, accurate results.

The gBCE training scheme incorporates two key components: (1) an increased number of negative samples per positive instance and (2) the gBCE loss function, which counters the effects of negative sampling. By balancing these elements, the researchers trained shallow cross-encoders that consistently outperformed their larger counterparts in low-latency scenarios.
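
The loss itself is compact enough to sketch. Below is an illustrative PyTorch version that assumes each positive document is paired with k sampled negatives; note that in the original gBCE formulation the exponent β is tied to the negative-sampling rate, whereas here it is left as a free hyperparameter for simplicity.

```python
import torch
import torch.nn.functional as F

def gbce_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor, beta: float) -> torch.Tensor:
    """Generalized Binary Cross-Entropy, sketched from the paper's description.

    pos_scores: (batch,) logits for the relevant document of each query
    neg_scores: (batch, k) logits for k sampled non-relevant documents
    beta: calibration exponent; since log sigma(s)^beta = beta * log sigma(s),
          beta < 1 softens the positive term and counters overconfidence.
    """
    pos_term = beta * F.logsigmoid(pos_scores)         # log sigma(s+)^beta
    neg_term = F.logsigmoid(-neg_scores).sum(dim=-1)   # sum_j log(1 - sigma(s-_j))
    return -(pos_term + neg_term).mean()

# Example: 8 queries, each with 1 positive and 16 sampled negatives.
pos = torch.randn(8)
neg = torch.randn(8, 16)
loss = gbce_loss(pos, neg, beta=0.7)  # beta value chosen only for illustration
```

With β = 1 this reduces to ordinary binary cross-entropy over the sampled pairs; shrinking β is what keeps the overconfidence induced by negative sampling in check.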

In a series of experiments, the researchers evaluated a range of shallow cross-encoder models, including TinyBERT (2 layers), MiniBERT (4 layers), and SmallBERT (4 layers), against full-size baselines such as MonoBERT-Large and MonoT5-Base. The results were striking.

On the TREC DL 2019 dataset, the TinyBERT-gBCE model achieved an NDCG@10 of 0.652 when latency was capped at just 25 milliseconds, a 51% improvement over the much larger MonoBERT-Large (NDCG@10 of 0.431) under the same constraint.
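
One way to read this result: under a fixed time budget, a shallow model simply scores more first-stage candidates before time runs out. The sketch below shows a hypothetical budget-capped re-ranking loop (the paper's evaluation harness may differ); model and tokenizer are the ones from the earlier sketch, and docs is a candidate list from a fast first-stage retriever such as BM25.

```python
import time
import torch

def rerank_within_budget(model, tokenizer, query, docs, budget_ms=25.0, batch_size=16):
    """Re-rank as many candidates as the latency budget allows (illustrative).

    Documents never scored before the deadline keep their first-stage order
    behind the scored ones. Assumes `docs` is a list of unique strings.
    """
    deadline = time.perf_counter() + budget_ms / 1000.0
    scores = {}
    for start in range(0, len(docs), batch_size):
        if time.perf_counter() >= deadline:
            break  # budget exhausted: stop scoring, keep what we have
        chunk = docs[start:start + batch_size]
        batch = tokenizer([query] * len(chunk), chunk,
                          truncation=True, padding=True, return_tensors="pt")
        with torch.no_grad():
            chunk_scores = model(batch["input_ids"], batch["attention_mask"])
        for doc, s in zip(chunk, chunk_scores):
            scores[doc] = s.item()
    scored = sorted((d for d in docs if d in scores), key=lambda d: -scores[d])
    return scored + [d for d in docs if d not in scores]
```

A 2-layer model gets through far more batches inside those 25 ms than a 24-layer one, which is the mechanism behind the NDCG gap reported above.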

However, the advantages of shallow cross-encoders extend beyond sheer speed and accuracy. These compact models also offer significant benefits in terms of energy efficiency and cost-effectiveness. With their modest memory footprints, they can be deployed on a wide range of devices, from powerful data centers to resource-constrained edge devices, without the need for specialized hardware acceleration.

Imagine search queries answered quickly and accurately whether you are on a high-end workstation or a modest mobile device. That is the promise of shallow cross-encoders, and it could redefine the search experience for billions of users worldwide.

As the research team continues to refine and optimize this approach, the trade-off between speed and accuracy may become far less punishing. With shallow cross-encoders, instantaneous, accurate search results are no longer a distant dream; they are within reach.

Check out the Paper. All credit for this research goes to the researchers of this project.

Vibhanshu Patidar is a consulting intern at MarktechPost, currently pursuing a B.S. at the Indian Institute of Technology (IIT) Kanpur. He is a robotics and machine learning enthusiast with a knack for unraveling the complexities of algorithms that bridge theory and practical applications.

