HuggingFace Introduces TextEnvironments: An Orchestrator between a Machine Learning Model and A Set of Tools (Python Functions) that the Model can Call to Solve Specific Tasks

November 3, 2023
in AI Technology
Reading Time: 4 mins read


TRL covers Supervised Fine-tuning (SFT), Reward Modeling (RM), and Proximal Policy Optimization (PPO).

This full-stack library provides tools to train transformer language models and Stable Diffusion models with reinforcement learning.

The library is built as an extension of Hugging Face’s transformers library, so pre-trained language models can be loaded directly through transformers.

Most decoder and encoder-decoder architectures are currently supported. For code snippets and instructions on how to use these tools, consult the documentation or the examples/ directory.
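
For example, loading a pre-trained model together with TRL’s value-head wrapper works much like loading any other transformers model. A minimal sketch (the choice of "gpt2" is purely illustrative):

# Load GPT-2 with an extra scalar value head for RL training
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default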

Highlights

  • Easily tune language models or adapters on a custom dataset with SFTTrainer, a lightweight and user-friendly wrapper around the Transformers Trainer (a minimal sketch follows this list).
  • To quickly and precisely tune language models toward human preferences (Reward Modeling), use RewardTrainer, another lightweight wrapper over the Transformers Trainer.
  • To optimize a language model with PPO, PPOTrainer only requires (query, response, reward) triplets.
  • AutoModelForCausalLMWithValueHead and AutoModelForSeq2SeqLMWithValueHead provide transformer models with an additional scalar output per token that can be used as a value function in reinforcement learning.
  • Examples include training GPT-2 to write positive movie reviews using a BERT sentiment classifier, a full RLHF pipeline using only adapters, detoxifying GPT-J, the stack-llama example, and more.
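
A minimal SFTTrainer sketch, adapted from the TRL documentation (the model and dataset names are illustrative placeholders):

# Supervised fine-tuning on a text dataset with SFTTrainer
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",        # model name on the Hub, or an already-loaded model
    train_dataset=dataset,
    dataset_text_field="text",  # dataset column that holds the raw text
    max_seq_length=512,
)
trainer.train()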

How does TRL work?

In TRL, a transformer language model is trained to optimize a reward signal.

The reward signal can come from human experts or from a reward model.

A reward model is an ML model that estimates the reward earned by a given sequence of outputs.
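
Such a reward model can be trained with the RewardTrainer mentioned above. The sketch below follows the preference-pair pattern in the TRL documentation; the base model and the toy data are illustrative assumptions:

# Train a reward model from (chosen, rejected) preference pairs with RewardTrainer
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardTrainer

model_name = "gpt2"  # illustrative choice of base model
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# Toy preference data: a preferred ("chosen") and a dispreferred ("rejected") response
pairs = Dataset.from_dict({
    "chosen": ["The movie was wonderful and moving."],
    "rejected": ["The movie was a movie."],
})

def tokenize_pair(example):
    # RewardTrainer expects the chosen/rejected pair pre-tokenized into these columns
    chosen = tokenizer(example["chosen"], truncation=True)
    rejected = tokenizer(example["rejected"], truncation=True)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

trainer = RewardTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=pairs.map(tokenize_pair, remove_columns=pairs.column_names),
)
trainer.train()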

Proximal Policy Optimization (PPO) is a reinforcement learning technique TRL uses to train the transformer language model.

Because it is a policy gradient method, PPO learns by modifying the transformer language model’s policy.

The policy can be considered a function that converts one series of inputs into another.

Using PPO, a language model is fine-tuned in three main steps (a minimal code sketch follows the list):

  1. Rollout: The language model generates a response or continuation based on a query, which could be the start of a sentence.
  2. Evaluation: The query and response are evaluated with a function, a model, human judgment, or a mixture of these. The important point is that this step must yield a single scalar value for each query/response pair.
  3. Optimization: This is the most involved step. In the optimization phase, the query/response pairs are used to compute the log-probabilities of the tokens in the sequences, with both the trained model and a reference model (usually the pre-trained model before fine-tuning). The KL divergence between the two outputs serves as an additional reward signal, ensuring that the generated replies do not drift too far from the reference language model. The active language model is then trained with PPO.
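
The loop below sketches these three steps with PPOTrainer, adapted from the TRL quickstart; the model name, prompt, and constant reward are illustrative placeholders, and exact argument names can vary between TRL versions:

# One rollout / evaluation / optimization cycle with PPOTrainer
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)  # frozen reference for the KL penalty
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

query_tensor = tokenizer.encode("This morning I went to the ", return_tensors="pt")

# 1. Rollout: generate a response from the current policy
response_tensor = ppo_trainer.generate(list(query_tensor), return_prompt=False, max_new_tokens=20)

# 2. Evaluation: score the (query, response) pair; here a dummy constant reward
reward = [torch.tensor(1.0)]

# 3. Optimization: one PPO step on the (query, response, reward) triplet
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)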

Key features

When compared to more conventional approaches to training transformer language models, TRL has several advantages.

  • In addition to text creation, translation, and summarization, TRL can train transformer language models for a wide range of other tasks.
  • Training transformer language models with TRL is more efficient than conventional techniques like supervised learning.
  • Transformer language models trained with TRL are more robust to noise and adversarial inputs than those trained with more conventional approaches.
  • TextEnvironments is a new feature in TRL.

TextEnvironments in TRL are a set of resources for building RL-trained, tool-using transformer language models. They orchestrate the interaction with the transformer language model and the production of results, which can then be used to fine-tune the model’s performance. TRL represents TextEnvironments as classes; the classes in this hierarchy stand for different text-based settings, for example text generation, translation, and summarization environments. TRL has been used to train transformer language models for several tasks, including those described below.
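
A minimal sketch of how a TextEnvironment is wired up, following the pattern shown in the TRL 0.7 documentation; the calculator tool, reward function, and prompt are illustrative placeholders, and argument names may differ in later releases:

# A text environment that lets the model call a calculator tool while answering
import torch
from transformers import AutoTokenizer, load_tool
from trl import AutoModelForCausalLMWithValueHead, TextEnvironment

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def exact_match_reward(responses, answers):
    """Reward 1.0 when the expected answer appears in the model's response."""
    return [torch.tensor(1.0 if answer in response else 0.0)
            for response, answer in zip(responses, answers)]

prompt = "Answer the question, calling the calculator tool when arithmetic is needed.\n"

env = TextEnvironment(
    model,
    tokenizer,
    {"SimpleCalculatorTool": load_tool("ybelkada/simple-calculator")},
    exact_match_reward,
    prompt,
    max_turns=2,
)

# Each episode returns the tensors needed for a PPO step plus the full interaction history
queries, responses, masks, rewards, histories = env.run(["What is 13 + 29?"], answers=["42"])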

Compared with text produced by models trained using more conventional methods, TRL-trained transformer language models generate more creative and informative writing. Transformer language models trained with TRL have been shown to outperform conventionally trained models at translating text from one language to another, and TRL has also been used to train models that summarize text more precisely and concisely than those trained with conventional methods.

For more details, visit the GitHub page: https://github.com/huggingface/trl.

To sum it up:

TRL is an effective framework for training transformer language models with RL. Compared with models trained by more conventional methods, TRL-trained transformer language models are more adaptable, efficient, and robust. TRL can be used to train transformer language models for tasks such as text generation, translation, and summarization.

Check out the GitHub repository. All credit for this research goes to the researchers on this project.


Introducing TextEnvironments in TRL 0.7.0! With TextEnvironments you can teach your language models to use tools to solve tasks more reliably. We trained models to use Wiki search and Python to answer trivia and math questions! — Leandro von Werra (@lvwerra), August 30, 2023

Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements in today’s evolving world, making everyone’s life easier.
