Posted by Yang Zhao, Senior Software Engineer, and Tingbo Hou, Senior Staff Software Engineer, Core ML
Text-to-image diffusion models have demonstrated impressive abilities in generating high-quality images from text prompts. However, these models typically have billions of parameters and require powerful desktops or servers to run effectively. Despite advances in on-device inference solutions for Android and iOS, rapid text-to-image generation on mobile hardware has remained challenging.
In our paper, “MobileDiffusion: Subsecond Text-to-Image Generation on Mobile Devices,” we propose a novel approach for rapid text-to-image generation on mobile devices. MobileDiffusion is an efficient latent diffusion model designed specifically for mobile devices. We also adopt DiffusionGAN to achieve one-step sampling at inference time, fine-tuning a pre-trained diffusion model while leveraging a GAN to model the denoising step.
We have tested MobileDiffusion on premium iOS and Android devices, and it can generate a high-quality 512×512 image in just half a second. With a model size of only 520M parameters, it is well-suited for mobile deployment.
The relative inefficiency of text-to-image diffusion models on mobile devices stems from two main challenges. First, the iterative denoising process requires many evaluations of the model, which multiplies end-to-end latency. Second, the complexity of the network architecture, which often involves a substantial number of parameters, makes each evaluation computationally expensive.
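To make the first cost concrete, the sketch below (our own illustration with a tiny stand-in network, not MobileDiffusion itself) shows how a conventional sampler invokes the UNet once per denoising step, so on-device latency grows roughly linearly with the step count.

```python
# Minimal sketch: sampling cost scales with the number of denoising steps,
# because the UNet runs once per step. Not the MobileDiffusion code.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in for a diffusion UNet; a real one has far more parameters."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, t):
        # A real UNet also conditions on the timestep and text embedding.
        return self.net(x)

unet = TinyUNet()
latent = torch.randn(1, 4, 64, 64)                    # 64x64 latent -> 512x512 image
timesteps = torch.linspace(999, 0, steps=20).long()   # e.g., a 20-step sampler

for t in timesteps:                                   # one UNet evaluation per step
    eps = unet(latent, t)
    latent = latent - 0.05 * eps                      # placeholder update rule

# Total UNet evaluations == len(timesteps), so on-device latency is roughly
# (per-step UNet latency) x (number of steps) -- the motivation for one-step sampling.
```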
Previous research has primarily focused on reducing the number of function evaluations required by text-to-image diffusion models. While these advances have cut the number of necessary sampling steps, even a handful of evaluations can be slow on mobile devices because of the complexity of the model architecture, which has received comparatively little attention.
To overcome these challenges, we conducted a detailed examination of each component and computational operation within Stable Diffusion's UNet. This analysis led to the design of MobileDiffusion, which consists of a text encoder, a diffusion UNet, and an image decoder. We use CLIP-ViT/L14 as the text encoder, while the diffusion UNet places more transformer blocks in the middle of the network and uses lightweight separable convolution layers. The image decoder is optimized through pruning, yielding significant performance improvements.
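As an illustration of why separable convolutions help, the following sketch (our own, not the released architecture) compares a standard 3×3 convolution with a depthwise-separable replacement; the channel counts are hypothetical.

```python
# Illustrative sketch of a lightweight separable convolution of the kind that
# can stand in for a full 3x3 convolution inside UNet blocks. Hypothetical sizes.
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def param_count(m):
    return sum(p.numel() for p in m.parameters())

full = nn.Conv2d(320, 320, 3, padding=1)
sep = SeparableConv2d(320, 320)
print(param_count(full), param_count(sep))   # ~922k vs ~106k parameters
```

For these hypothetical layer sizes, the separable variant uses roughly 9× fewer parameters, the kind of saving that matters when the UNet dominates on-device compute.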
In addition to optimizing the model architecture, we adopt a DiffusionGAN hybrid approach to enable one-step sampling. By initializing both the generator and the discriminator from the pre-trained diffusion UNet, we streamline training and reach convergence in fewer than 10K iterations.
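The skeleton below is a self-contained sketch of this kind of one-step adversarial fine-tuning, written with tiny stand-in networks, toy data, and our own simplifying assumptions (e.g., the noise schedule and loss form); it is not the MobileDiffusion training code.

```python
# Hedged sketch of DiffusionGAN-style one-step fine-tuning: both networks are
# initialized from a pre-trained diffusion UNet, the generator denoises in a
# single evaluation, and a GAN loss supervises the result. Toy stand-ins only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in for the pre-trained diffusion UNet."""
    def __init__(self, ch=4):
        super().__init__()
        self.body = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x, t):
        return self.body(x)

pretrained_unet = TinyUNet()

# Generator and discriminator backbone both start from the pre-trained weights,
# which is what shortens and stabilizes training.
generator = copy.deepcopy(pretrained_unet)
disc_backbone = copy.deepcopy(pretrained_unet)
disc_head = nn.Conv2d(4, 1, 1)                        # small real/fake prediction head

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(
    list(disc_backbone.parameters()) + list(disc_head.parameters()), lr=1e-4)

def discriminate(x, t):
    return disc_head(disc_backbone(x, t))

for step in range(100):                               # toy loop; real training converges in <10K iterations
    x0 = torch.randn(2, 4, 64, 64)                    # stand-in for real latents
    t = torch.randint(0, 1000, (2,))
    noise = torch.randn_like(x0)
    alpha = 1.0 - t.float().view(-1, 1, 1, 1) / 1000  # toy noise schedule
    xt = alpha.sqrt() * x0 + (1 - alpha).sqrt() * noise  # forward diffusion

    # One-step generation: the generator predicts the clean sample directly.
    x0_pred = generator(xt, t)

    # Discriminator update (non-saturating GAN loss).
    d_loss = (F.softplus(-discriminate(x0, t)).mean()
              + F.softplus(discriminate(x0_pred.detach(), t)).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update.
    g_loss = F.softplus(-discriminate(x0_pred, t)).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```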
Our MobileDiffusion with DiffusionGAN one-step sampling has demonstrated the ability to generate high-quality images across various domains. It is highly efficient, with a latency of half a second to generate a 512×512 image on mobile devices.
In conclusion, MobileDiffusion has the potential to enable rapid text-to-image generation on mobile devices, offering a seamless image-generation experience as users type their prompts. We will ensure that any application of this technology aligns with Google’s responsible AI practices.
We would like to express our gratitude to our collaborators and contributors who have assisted in the development and deployment of MobileDiffusion on mobile devices.