Probabilistic diffusion models, a state-of-the-art class of generative models, have emerged as a crucial area of research, especially in computer vision. Unlike other generative models such as VAEs and GANs, they introduce a new paradigm for generating data: a fixed Markov chain maps data to a latent space, allowing complex mappings that capture the structural complexities within a dataset. In recent years, these models have demonstrated impressive generative capabilities, including high levels of detail and diversity in the generated examples, and have contributed to groundbreaking advances in computer vision applications such as image synthesis, image editing, image-to-image translation, and text-to-video generation.
Diffusion models consist of two main components: the diffusion process and the denoising process. During the diffusion process, Gaussian noise is gradually added to the input data, transforming it into nearly pure Gaussian noise. The denoising process then aims to recover the original input data from its noisy state through a sequence of learned inverse diffusion operations. Typically, a U-Net architecture is used to predict the noise to be removed at each denoising step, applied iteratively until a clean sample is obtained. Previous research has primarily focused on using pre-trained diffusion U-Nets for downstream applications, with limited exploration of the internal characteristics of the diffusion U-Net itself.
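To make the two processes concrete, the sketch below shows a minimal DDPM-style forward and reverse loop in PyTorch. The noise schedule, the helper names (diffuse, denoise), and the unet callable are illustrative assumptions, not the configuration of any particular published model.

```python
import torch

# Minimal sketch of the two processes described above (illustrative only;
# real models use carefully tuned noise schedules and a trained U-Net).

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products of alphas

def diffuse(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Forward process: add Gaussian noise to x0 at step t."""
    noise = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * noise

@torch.no_grad()
def denoise(unet, x_T: torch.Tensor) -> torch.Tensor:
    """Reverse process: iteratively remove the noise predicted by the U-Net."""
    x = x_T
    for t in reversed(range(T)):
        eps = unet(x, t)  # hypothetical U-Net call predicting the noise at step t
        mean = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x
```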
A study from S-Lab at Nanyang Technological University departs from the conventional application of diffusion models by investigating the effectiveness of the diffusion U-Net in the denoising process. To gain a deeper understanding of this process, the researchers shift their viewpoint to the Fourier domain to observe the generation process of diffusion models, a relatively unexplored research area. The accompanying figure illustrates the progressive denoising process, showing the generated images at successive iterations alongside the corresponding low-frequency and high-frequency spatial-domain information obtained after the inverse Fourier transform. It reveals that low-frequency components are modulated only gradually, while high-frequency components exhibit much more pronounced dynamics throughout the denoising process.
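This kind of frequency decomposition can be reproduced in a few lines of PyTorch. The snippet below is a simplified sketch (the function name and the mask radius are illustrative assumptions) that splits an image into low- and high-frequency components with a circular low-pass mask in the 2-D Fourier domain.

```python
import torch

def frequency_split(image: torch.Tensor, radius: int = 8):
    """Split a (C, H, W) image into low- and high-frequency components
    using a circular low-pass mask in the 2-D Fourier domain."""
    fft = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
    _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    dist = ((ys - h // 2) ** 2 + (xs - w // 2) ** 2).sqrt()
    low_mask = (dist <= radius).float()

    # Inverse FFT of the masked spectrum gives the spatial-domain components.
    low = torch.fft.ifft2(torch.fft.ifftshift(fft * low_mask, dim=(-2, -1))).real
    high = torch.fft.ifft2(torch.fft.ifftshift(fft * (1 - low_mask), dim=(-2, -1))).real
    return low, high
```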
Building on these observations, the study goes on to determine the specific contributions of the U-Net's components within the diffusion framework. It finds that the primary backbone of the U-Net plays a significant role in denoising, while the skip connections introduce high-frequency features into the decoder module, aiding the recovery of fine-grained semantic information. However, propagating these high-frequency features can unintentionally weaken the backbone's denoising capability during inference, leading to abnormal details in the generated images.
To address this issue, the researchers propose a new approach called “FreeU,” which enhances the quality of generated samples without any additional training or fine-tuning. FreeU introduces two specialized modulation factors during the inference phase to balance the contributions of features from the U-Net's primary backbone and its skip connections. The first, the backbone feature scaling factor, amplifies the feature maps of the primary backbone to strengthen the denoising process. The second, the skip feature scaling factor, mitigates the texture over-smoothing that backbone feature scaling alone would cause.
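The snippet below is a simplified sketch of that idea, not the authors' exact implementation: inside a decoder block, the backbone feature map is scaled up by a backbone factor b, while the low-frequency band of the skip feature is damped by a skip factor s before concatenation. The function name, the default values, and the fixed frequency threshold are illustrative assumptions.

```python
import torch

def freeu_modulate(backbone_feat, skip_feat, b=1.2, s=0.9, thresh=1):
    """Simplified sketch of the FreeU idea: amplify backbone features and
    attenuate the low-frequency part of the skip features before they are
    concatenated in a U-Net decoder block."""
    # 1) Strengthen denoising: scale the backbone feature map by b (> 1).
    backbone_feat = backbone_feat * b

    # 2) Counter over-smoothing: damp low frequencies of the skip feature by s (< 1).
    fft = torch.fft.fftshift(torch.fft.fft2(skip_feat), dim=(-2, -1))
    _, _, h, w = skip_feat.shape
    mask = torch.ones_like(fft)
    ch, cw = h // 2, w // 2
    mask[..., ch - thresh:ch + thresh, cw - thresh:cw + thresh] = s
    skip_feat = torch.fft.ifft2(torch.fft.ifftshift(fft * mask, dim=(-2, -1))).real

    # Concatenate along the channel dimension, as in a standard U-Net decoder.
    return torch.cat([backbone_feat, skip_feat], dim=1)
```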
The FreeU framework integrates seamlessly with existing diffusion models, yielding a noticeable improvement in the quality of the generated outputs. Experimental evaluations conducted with foundational diffusion models, together with visual comparisons, provide evidence of FreeU's effectiveness in improving intricate details and overall visual fidelity.
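In practice, recent releases of Hugging Face's diffusers library expose FreeU as a one-line switch on Stable Diffusion pipelines. The snippet below assumes such a release is installed and a CUDA device is available; the scaling values shown are one commonly cited starting point for Stable Diffusion 1.x and may need per-model tuning.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Enable FreeU: b1/b2 amplify backbone features, s1/s2 damp skip features.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("freeu_sample.png")
```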
In summary, FreeU is a novel technique that enhances the output quality of diffusion models without requiring additional training or fine-tuning. The research from S-Lab at Nanyang Technological University provides valuable insights into the denoising process of diffusion models and proposes a practical way to generate higher-quality samples.