This AI Paper Introduces a Novel Personalized Distillation Process: Enhancing Open-Source LLMs with Adaptive Learning from Closed-Source Counterparts

November 11, 2023
in Data Science & ML


Researchers from Nanyang Technological University, Singapore, and Salesforce Research introduce a personalized distillation process for code generation tasks, in which a student model first attempts to solve a task and then receives adaptive refinement from a teacher model. The approach surpasses standard distillation methods, delivering superior results with only a third of the data. Personalized distillation is tested on two code generation models, CodeGen-mono-16B and StarCoder, leading to substantial performance improvements on the HumanEval benchmark.

The study introduces personalized distillation for code generation tasks, a novel approach inspired by modern teaching principles. In this process, the student model initially attempts the task, receiving adaptive refinement from the teacher model. Personalized distillation consistently outperforms standard methods, achieving better results with only one-third of the data. Empirical studies confirm the effectiveness of customized labels for student learning. The approach significantly enhances the performance of open-source pretrained models, including CodeGen-mono-16B and StarCoder, in code generation tasks.

The method addresses the limitations of closed-source large language models (LLMs) such as ChatGPT and GPT-4 around availability, cost, ethics, and data privacy. Inspired by customized-learning principles, the approach has the student model attempt each task, receive execution feedback, and refine its answer with the teacher model's guidance. Personalized distillation outperforms standard methods, achieving superior results with fewer data examples and offering a way to distill the capabilities of closed-source LLMs into smaller open-source LLMs.
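
As a rough illustration of this attempt-feedback-refine loop (not the authors' actual pipeline), a minimal Python sketch might look like the following. Here `student_generate`, `run_unit_tests`, and `teacher_refine` are hypothetical stand-ins for the open-source student, an execution sandbox, and a closed-source teacher such as ChatGPT; none of these names come from the paper.

```python
# Minimal sketch of personalized-distillation data collection. The three
# callables are hypothetical stand-ins: student_generate (the open-source
# student), run_unit_tests (an execution sandbox), and teacher_refine
# (a closed-source teacher such as ChatGPT).

def collect_personalized_data(tasks, student_generate, run_unit_tests, teacher_refine):
    """Build training pairs tailored to the student's own failure modes."""
    dataset = []
    for task in tasks:
        attempt = student_generate(task["prompt"])       # 1. student tries first
        passed, error_log = run_unit_tests(attempt, task["tests"])
        if passed:
            continue  # the student already solves this task; nothing to teach
        # 2. the teacher sees the failed attempt plus execution feedback and
        #    produces a corrected solution, which becomes the training label
        refined = teacher_refine(task["prompt"], attempt, error_log)
        dataset.append({"prompt": task["prompt"], "label": refined})
    return dataset
```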

The study compared standard distillation (STAND) with two alternatives: personalized distillation (PERsD), where the student first attempts a task and receives customized feedback from the teacher, and input-personalized distillation (INPD), where only the input tasks are personalized. Training data was collected from code-alpaca and from MBPP seed tasks. Performance was assessed with metrics such as pass@1 on benchmarks including HumanEval.
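
The article does not spell out how pass@1 is computed; for background, the unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021) is sketched below, with pass@1 as the special case k = 1. The example numbers are made up.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability that
    at least one of k samples, drawn without replacement from n generations
    of which c pass all unit tests, is correct."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example with made-up numbers: 20 samples per problem, 5 of them correct.
print(pass_at_k(n=20, c=5, k=1))  # 0.25, i.e. plain accuracy when k = 1
```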

PERsD consistently outperformed INPD and STAND in code generation tasks, achieving significant improvements with only one-third of the data: even at that reduced scale, PERsD beat STAND in 15 of 16 settings, demonstrating the efficiency of personalized labeled data. Multi-step inference further improved answer quality in the PERsD-refine and PERsD-combine variants, which can revise solutions based on execution-error feedback. Mixing non-personalized labels with personalized ones generally hurt performance, underscoring the higher quality of customized labels.
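
The multi-step inference described here can be pictured as a simple retry loop. The sketch below is an illustration in the spirit of the PERsD-refine setup, under stated assumptions: `model_generate` and `execute` are hypothetical stand-ins for the fine-tuned student and a test harness, and the feedback-prompt format is invented for the example.

```python
# Sketch of multi-step inference with execution-error feedback, in the
# spirit of PERsD-refine. model_generate and execute are hypothetical
# stand-ins, not APIs from the paper.

def solve_with_refinement(prompt, tests, model_generate, execute, max_rounds=2):
    """Generate code; on failure, feed the error trace back for another try."""
    solution = model_generate(prompt)
    for _ in range(max_rounds):
        ok, error_trace = execute(solution, tests)
        if ok:
            return solution
        # Append execution feedback so the model can rectify its answer,
        # mirroring the refinement step the student was trained on.
        prompt_with_feedback = (
            f"{prompt}\n\n# Previous attempt:\n{solution}\n"
            f"# Execution error:\n{error_trace}\n# Please fix the code:"
        )
        solution = model_generate(prompt_with_feedback)
    return solution
```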

PERsD introduces a method for tailoring labeled data to the student model's capacity, yielding more effective learning. It outperformed standard distillation in code generation on the HumanEval and MBPP datasets, benefiting from higher data quality, multi-round distillation, and self-rectification via execution feedback. PERsD variants consistently beat their non-personalized counterparts, highlighting the effectiveness of personalized labels. The approach is a promising step toward distilling closed-source LLM capabilities into open-source models.

Future work could investigate online personalized distillation, collecting data dynamically during fine-tuning to further strengthen student models; explore scalable variants that do not rely on human annotation; and address open questions such as why mixing personalized and non-personalized labels hurts performance. Extending personalized distillation to other domains would also test its generality as a way of distilling closed-source LLM capabilities into open-source models.

Check out the Paper. All credit for this research goes to the researchers of this project.


Hello, my name is Adnan Hassan. I am a consulting intern at Marktechpost and soon to be a management trainee at American Express. I am currently pursuing a dual degree at the Indian Institute of Technology, Kharagpur. I am passionate about technology and want to create new products that make a difference.

