Researchers from Microsoft Research and Tsinghua University Proposed Skeleton-of-Thought (SoT): A New Artificial Intelligence Approach to Accelerate Generation of LLMs

November 24, 2023
in AI Technology

Large Language Models (LLMs), such as GPT-4 and LLaMA, have undoubtedly transformed the technological landscape. However, sluggish processing speed is a recurring challenge limiting their widespread applicability. Despite their remarkable capabilities, the time it takes to obtain responses from LLMs hinders their effectiveness, particularly in latency-critical applications like chatbots, copilots, and industrial controllers. Recognizing the need for a solution that addresses this fundamental problem, Microsoft Research and Tsinghua University researchers have introduced an innovative approach named Skeleton-of-Thought (SoT).

Traditionally, efforts to enhance LLMs’ speed have involved intricate modifications to the models, systems, or hardware. However, the research team takes a different route with SoT. Unlike conventional methods, SoT refrains from making extensive changes to LLMs and treats them as black boxes instead. The focus shifts from altering the internal workings of the models to optimizing the organization of their output content. The proposed solution prompts LLMs to follow a unique two-stage process. In the first stage, the LLM is directed to derive a skeleton of the answer. Subsequently, in the second stage, the LLM is tasked with the parallel expansion of multiple points within the skeleton. This approach introduces a novel means of boosting LLM response times without requiring complex adjustments to the model architecture.

The methodology of SoT breaks the content generation process into two distinct stages. First, the LLM is prompted to construct a skeleton of the answer, mirroring how humans often approach problem-solving by outlining a high-level structure. The second stage leverages this skeleton to execute parallel expansion, enabling the LLM to elaborate on multiple points simultaneously. Notably, the approach applies both to open-source models like LLaMA and to API-based models such as GPT-4, showcasing its versatility.
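To make the two-stage flow concrete, here is a minimal Python sketch of the idea rather than the authors' exact prompts or implementation: call_llm is a placeholder for whatever chat-completion call is available (open-source or API-based), the prompt wording is illustrative, and the parallel expansion simply issues one request per skeleton point via a thread pool.

```python
# Minimal sketch of the Skeleton-of-Thought flow (illustrative; not the paper's exact prompts).
import re
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a model of your choice and return its text reply."""
    raise NotImplementedError

def skeleton_of_thought(question: str, max_workers: int = 8) -> str:
    # Stage 1: ask for a short, numbered skeleton of the answer.
    skeleton = call_llm(
        "Provide a concise skeleton for answering the question below. "
        "Use 3-10 numbered points of 3-5 words each.\n\n"
        f"Question: {question}"
    )
    points = re.findall(r"^\s*\d+\.\s*(.+)$", skeleton, flags=re.MULTILINE)

    # Stage 2: expand every skeleton point in parallel (one request per point).
    def expand(point: str) -> str:
        return call_llm(
            f"Question: {question}\nSkeleton point: {point}\n"
            "Write 1-2 sentences expanding only this point."
        )

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        expansions = list(pool.map(expand, points))

    # Stitch the expanded points back together in skeleton order.
    return "\n".join(f"{i}. {p} {e}" for i, (p, e) in enumerate(zip(points, expansions), start=1))
```

Because the second stage expands points independently, the end-to-end cost is roughly one short skeleton pass plus one expansion pass, rather than decoding the entire answer token by token in sequence.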

\"\"

To evaluate the effectiveness of SoT, the research team conducted extensive tests on 12 recently released models, spanning both open-source and API-based categories. Using the Vicuna-80 dataset, which includes questions from domains such as coding, math, writing, and roleplay, the team observed substantial speed-ups: SoT achieved speed-ups ranging from 1.13x to 2.39x on eight of the 12 models. Crucially, these gains were attained without sacrificing answer quality. The team used metrics from FastChat and LLMZoo to assess the quality of SoT’s answers, showing that SoT maintains or improves response quality across diverse question categories.
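The headline speed-up numbers are simple wall-clock ratios. As a rough, hypothetical sketch of how such a per-question figure could be computed, where generate_baseline and generate_sot stand in for normal sequential decoding and the SoT pipeline, respectively:

```python
# Hedged sketch: speed-up = baseline end-to-end latency / SoT end-to-end latency.
import time
from typing import Callable

def latency(generate: Callable[[str], str], question: str) -> float:
    start = time.perf_counter()
    generate(question)  # any callable that produces a full answer for the question
    return time.perf_counter() - start

def speed_up(generate_baseline: Callable[[str], str],
             generate_sot: Callable[[str], str],
             question: str) -> float:
    # A value of 2.39 means the SoT pipeline answered 2.39x faster than the baseline.
    return latency(generate_baseline, question) / latency(generate_sot, question)
```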

\"\"

In conclusion, SoT emerges as a promising solution to the persistent challenge of slow LLMs. The research team’s innovative approach of treating LLMs as black boxes and focusing on data-level efficiency optimization provides a fresh perspective on accelerating content generation. By prompting LLMs to construct a skeleton of the answer and then executing parallel expansion, SoT introduces an effective means of improving response times. The results from the evaluation demonstrate not only considerable speed-ups but also the ability to maintain or enhance answer quality, addressing the dual challenges of efficiency and effectiveness. This work opens up avenues for future exploration in dynamic thinking processes for artificial intelligence, encouraging a shift towards more efficient and versatile language models.

Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
