Enhancing Paragraph Generation with a Latent Language Diffusion Model

March 15, 2024
in Data Science & ML



In the fast-evolving world of natural language processing (NLP), there is strong demand for generating coherent and controlled text, as discussed in Toward Controlled Generation of Text. Traditional autoregressive models such as GPT, long the industry standard, have inherent limitations that sometimes manifest as repetitive and low-quality outputs, as shown in The Curious Case of Neural Text Degeneration. This is primarily due to a phenomenon known as “exposure bias” (see Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks): a mismatch between how these models are trained and how they are actually used during inference, which often leads to error accumulation during text generation.

To address these challenges, we want to call attention to PLANNER, a latent text diffusion model that we introduced in the fall of 2023. The model combines non-autoregressive latent semantic diffusion with autoregressive generation to overcome the hurdles faced by its predecessors. Our goal is to improve the experience of users who benefit from more diversified and controlled text generation. By adopting a latent diffusion approach (as discussed in High-Resolution Image Synthesis with Latent Diffusion Models and Latent Diffusion for Language Generation), PLANNER mitigates the computational expense typically associated with similar models while delivering greater diversity and cohesiveness and reducing repetition in generated text, particularly in longer blocks of text and paragraphs, which have traditionally posed a challenge for text generation models. PLANNER extends these benefits to various text generation tasks, such as semantic generation, text completion, and summarization, with extensive evaluations of fluency, diversity, and repetition mitigation.

Figure 1: We begin with a variational paragraph embedder in stage 1, evolve the coarse text through our latent diffusion model, PLANNER, in stage 2, and decode a finer, more coherent result in stage 3.

In stage 1 of Figure 1, a variational paragraph embedder encodes paragraphs into a series of latent codes. The encoder E and decoder D construct a bidirectional mapping between the discrete data space and the latent code space. The paragraph embeddings z are extracted by taking the first k hidden-state vectors of dimension h from the final layer of E; these are fed into the initial steps of the decoder, which is trained to reconstruct the original text x. BOS and EOS represent “beginning of sentence” and “end of sentence” tokens, respectively.

In stage 2 of Figure 1, these latent codes z are used to train a transformer-based latent diffusion model (as discussed in Scalable Diffusion Models with Transformers), so that at inference time it can generate new latent codes step by step, simulating the evolution of text from coarse to fine. Finally, in stage 3, the decoder D translates these evolving latent codes into coherent text.

Our PLANNER latent diffusion model treats the conditioning signal as raw text, such as preceding context or the document to be summarized. We applied a conditional feature encoder τ to the input and used the hidden states at its last layer as y. We fed y and the time embedding t into the latent diffusion model through two channels: cross-attention and adaptive layer normalization. The aim of our research is to use existing text samples, such as an email or a summary of a document, to help generate longer texts that are both cohesive and readable. Examples in the following two figures are taken from a public dataset of hotel reviews.
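To make the stage-1 embedder concrete, here is a minimal PyTorch-style sketch. The layer sizes, layer counts, and the omission of the variational (KL) term and of stages 2 and 3 are our own illustrative assumptions, not the paper's exact configuration; it only shows the idea of taking the first k final-layer hidden states as latent codes z and training a decoder to reconstruct the text from them.

```python
import torch
import torch.nn as nn

class VariationalParagraphEmbedder(nn.Module):
    """Stage-1 sketch: map a paragraph to k latent codes z and reconstruct it.
    All sizes and module choices here are illustrative assumptions."""
    def __init__(self, vocab_size=32000, h=512, k=16, n_layers=6, n_heads=8):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab_size, h)
        enc_layer = nn.TransformerEncoderLayer(h, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)   # plays the role of E
        dec_layer = nn.TransformerDecoderLayer(h, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)   # plays the role of D
        self.lm_head = nn.Linear(h, vocab_size)

    def encode(self, token_ids):
        # Take the first k hidden-state vectors of the final encoder layer as z.
        hidden = self.encoder(self.embed(token_ids))
        return hidden[:, : self.k, :]              # z: (batch, k, h)

    def reconstruct(self, token_ids, z):
        # The decoder attends to z and is trained to reconstruct the input text x.
        tgt = self.embed(token_ids)
        out = self.decoder(tgt, memory=z)
        return self.lm_head(out)                   # logits: (batch, seq, vocab)

# Toy forward pass on random token ids.
model = VariationalParagraphEmbedder()
x = torch.randint(0, 32000, (2, 64))               # a batch of two "paragraphs"
z = model.encode(x)
logits = model.reconstruct(x, z)
print(z.shape, logits.shape)                        # (2, 16, 512) and (2, 64, 32000)
```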

Figure 2: Comparison of the fine-tuned GPT-2 large model and PLANNER results when generating text from a repetitive prompt.

Figure 2 compares two language models: a fine-tuned GPT-2 large model and our method. It showcases how each model handles a prompt designed to evaluate its ability to generate diversified text from a repetitive cue. We selected GPT-2 because it was the most relevant model at the time the research was conducted. The fine-tuned model was initialized from GPT-2 large, which has 774 million parameters. OpenAI has released several sizes of GPT-2, including a large version that is accessible to researchers and developers; however, the particular fine-tuned version used in our paper, PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model, may include proprietary dataset adjustments and may not be directly available.

A few terms from the figure: FT stands for fine-tuning, the process of taking a pre-trained model and training it further on a new dataset to specialize its knowledge. Greedy decoding is a method where, at each step of generation, the model picks the word with the highest probability. Top-p (nucleus) sampling instead samples from the smallest set of most probable words whose cumulative probability exceeds p, allowing for more randomness and potential creativity in the output. “512 generation rollouts” refers to the number of times the model generates text for evaluation; here, the model was run 512 times starting from the prompt. N-grams are sequences of N tokens, and the percentages in the n-gram columns indicate how often each n-gram appears within the text generated by a given method. A lower maximum percentage means a larger variety of distinct n-grams, which is typically desirable for text that is less repetitive and more diverse. “More diversified” means that the generated word sequences (n-grams) are more varied and less repetitive than those produced by other methods or models; this diversification generally indicates higher-quality generation that is more likely to produce useful and novel content for users.

Lastly, we observed error accumulation in traditional autoregressive models such as GPT-2, where the model gets stuck in a loop and produces repetitive or unhelpful output. In this context, the repeated phrase “awful hotel” in the text generated by GPT-2 is an example of such accumulated error.
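As a concrete illustration of the repetition statistic discussed above, the short sketch below computes the maximum n-gram frequency of a token sequence. The helper name and the toy sentences are ours, not from the paper, but the idea matches the n-gram columns in Figure 2: a lower maximum frequency indicates more diverse, less repetitive text.

```python
from collections import Counter

def max_ngram_frequency(tokens, n):
    """Highest relative frequency of any single n-gram in a token list.
    Lower values mean more diverse, less repetitive text."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    most_common_count = Counter(ngrams).most_common(1)[0][1]
    return most_common_count / len(ngrams)

# A repetitive sample in the spirit of the degenerate GPT-2 output described
# above, next to a more varied review sentence.
repetitive = "awful hotel awful hotel awful hotel the room was awful hotel".split()
varied = "the staff were friendly the room was clean and the breakfast was great".split()
for n in (2, 3):
    print(n, max_ngram_frequency(repetitive, n), max_ngram_frequency(varied, n))
```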

Figure 3: Hotel review text generated by a diffusion model progresses over 10 steps, from a vague statement to a more distinct and richly detailed positive sentiment about the hotel experience.

Figure 3 illustrates the gradual evolution of generated text over a series of 10 steps. The model begins with coarse initial predictions (represented in Figure 3 as step 1, the initial state) and progresses by performing repeated processing steps to denoise and improve the text. The reader should envision this scenario not as a snapshot of text being entered or prompted by an iPhone user but as a systematic process by which a language model refines an initially vague or broad expression into a more detailed and specific review text. At step 1, the text is a rough suggestion of what the user might want to express — it is terse and lacks detail. As time progresses, the model fine-tunes the text, introducing more specific descriptions, sentiment, and sophisticated language. By step 10, the end state, the generated text resembles a thoughtfully composed review that one might expect from an experienced reviewer who gives particular attention to various aspects of their hotel stay. Thus, Figure 3 shows how the PLANNER model’s generation progresses from coarse to fine, giving readers a step-by-step visualization of how the text is iteratively enhanced to improve readability, specificity, and overall quality. The scenario starts with a minimal outline of positive sentiment and, over time, develops into a fleshed-out testimonial with vivid details emerging at each subsequent step.
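The coarse-to-fine behavior shown in Figure 3 comes from an iterative denoising loop at inference time. The sketch below shows the general shape of such a loop with toy stand-ins for the denoiser and the stage-3 decoder D; the function names, the simple linear 10-step schedule, and the printing of intermediate decodes are illustrative assumptions rather than the paper's actual sampler.

```python
import torch

def sample_latents(denoiser, decode_to_text, cond, steps=10, k=16, h=512):
    """Illustrative coarse-to-fine sampling loop (not the paper's exact schedule)."""
    z = torch.randn(1, k, h)                       # start from pure Gaussian noise
    for step in range(1, steps + 1):
        t = torch.tensor([1.0 - step / steps])     # remaining noise level for this step
        z = denoiser(z, t, cond)                   # one denoising/refinement step
        print(f"step {step}: {decode_to_text(z)}")  # decode the intermediate latent
    return z

# Toy stand-ins so the loop runs end to end; a real run would use the trained
# diffusion model and the stage-3 decoder D instead.
toy_denoiser = lambda z, t, cond: 0.9 * z          # shrink the noise a little each step
toy_decoder = lambda z: f"latent norm {z.norm().item():.2f}"
sample_latents(toy_denoiser, toy_decoder, cond=None)
```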

Conclusion
The PLANNER model represents an advancement in the pursuit of improved natural language generation. Tackling the challenge of accumulated errors in…



Source link

Tags: Diffusion, Enhancing, Generation, language, Latent, model, Paragraph