There are two key challenges in voice cloning: 1) Flexible Voice Style Control: many instant voice cloning (IVC) approaches cannot flexibly manipulate voice styles after cloning. Precisely influencing aspects of voice style such as emotion, accent, rhythm, pauses, and intonation, while accurately reproducing the distinctive tone color of a reference speaker, remains difficult for these methods. 2) Zero-Shot Cross-Lingual Voice Cloning: many IVC approaches require massive-speaker multi-lingual (MSML) datasets covering every target language.
A team of researchers from MIT, MyShell.ai, and Tsinghua University has proposed OpenVoice, an open-source method for instant voice cloning. Given only a short audio sample from a reference speaker, OpenVoice can replicate that speaker's voice and generate speech in multiple languages. Beyond cloning the tone color, OpenVoice provides adaptable control over critical style elements such as emotion, accent, rhythm, pauses, and intonation. These features are vital for crafting contextually authentic speech and dynamic conversations, rather than a monotonous narration of the input text.
OpenVoice achieves zero-shot cross-lingual voice cloning for languages not included in the massive-speaker training set, without requiring extensive training data for those languages. The technical approach of OpenVoice involves: 1) decoupling the components in a voice as much as possible, and 2) independently generating language, tone color, and other voice features. A minimal sketch of this decoupled pipeline follows below.
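Concretely, the pipeline can be thought of as a two-stage process: a base speaker TTS model that controls language and style, followed by a tone color converter that stamps the reference speaker's tone color onto the output. The sketch below is a hypothetical illustration of this structure; the class and method names are invented for clarity and are not the actual OpenVoice API.

```python
# Hypothetical sketch of the decoupled pipeline (not the real OpenVoice API):
# stage 1 controls language and style, stage 2 transfers tone color.
import numpy as np

class BaseSpeakerTTS:
    """Controls language and style (emotion, accent, rhythm) -- not tone color."""
    def synthesize(self, text: str, language: str, emotion: str = "neutral",
                   speed: float = 1.0) -> np.ndarray:
        # Placeholder: a real model would return a synthesized waveform here.
        return np.zeros(16000, dtype=np.float32)

class ToneColorConverter:
    """Transfers the tone color of a short reference clip onto a waveform."""
    def extract_embedding(self, wav: np.ndarray) -> np.ndarray:
        # Placeholder: a real model would return a speaker embedding that
        # captures tone color only, independent of language and style.
        return np.zeros(256, dtype=np.float32)

    def convert(self, wav: np.ndarray, source_emb: np.ndarray,
                target_emb: np.ndarray) -> np.ndarray:
        # Placeholder: a real model would swap the base speaker's tone color
        # for the reference speaker's while preserving content and style.
        return wav

def clone_voice(text, language, reference_wav, tts, converter, **style):
    base_wav = tts.synthesize(text, language, **style)       # language + style
    source_emb = converter.extract_embedding(base_wav)       # base tone color
    target_emb = converter.extract_embedding(reference_wav)  # reference tone color
    return converter.convert(base_wav, source_emb, target_emb)

if __name__ == "__main__":
    reference = np.zeros(16000, dtype=np.float32)  # stand-in for a real clip
    out = clone_voice("Hello!", "en", reference, BaseSpeakerTTS(),
                      ToneColorConverter(), emotion="cheerful")
    print(out.shape)  # (16000,)
```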
Tone color cloning in OpenVoice is achieved through a tone color converter that is structurally similar to flow-based TTS methods but serves a different function and is trained with different objectives.
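A property that flow-based models bring to this design is invertibility: running a flow forward, conditioned on the source speaker's embedding, can normalize features into a representation stripped of tone color, while running it in reverse conditioned on the target embedding re-introduces the target's tone color. The toy example below illustrates only this forward/inverse pattern with a single conditioned affine transform; it is a deliberately simplified assumption, not the actual OpenVoice converter.

```python
import numpy as np

def affine_params(speaker_emb: np.ndarray) -> tuple:
    # Toy "conditioning network": derive a scale and shift from the embedding.
    scale = 1.0 + 0.1 * float(np.tanh(speaker_emb.mean()))
    shift = 0.1 * float(speaker_emb.mean())
    return scale, shift

def flow_forward(features: np.ndarray, speaker_emb: np.ndarray) -> np.ndarray:
    # Normalize away speaker-specific coloring: z = (x - shift) / scale.
    scale, shift = affine_params(speaker_emb)
    return (features - shift) / scale

def flow_inverse(latent: np.ndarray, speaker_emb: np.ndarray) -> np.ndarray:
    # Invert the same transform under *different* conditioning:
    # x' = z * scale' + shift', which applies the target's coloring.
    scale, shift = affine_params(speaker_emb)
    return latent * scale + shift

source_emb, target_emb = np.random.randn(256), np.random.randn(256)
features = np.random.randn(80, 100)        # e.g. mel-spectrogram frames
z = flow_forward(features, source_emb)     # strip source tone color
converted = flow_inverse(z, target_emb)    # apply target tone color
```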
The base speaker TTS model in OpenVoice is trained on audio samples from English, Chinese, and Japanese speakers and can vary accent, language, and emotion. OpenVoice is also computationally efficient, costing tens of times less than commercially available APIs.
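Because style lives entirely in the base TTS stage, switching emotion, language, or pace only changes the synthesis arguments; the tone color conversion step is reused unchanged. Continuing the hypothetical API from the earlier pipeline sketch:

```python
import numpy as np
# Reuses BaseSpeakerTTS, ToneColorConverter, and clone_voice from the
# hypothetical pipeline sketch above.

tts, converter = BaseSpeakerTTS(), ToneColorConverter()
reference = np.zeros(16000, dtype=np.float32)  # stand-in for a reference clip

# Same reference speaker, different styles and languages: only the base TTS
# arguments change, while the tone color converter runs identically.
english = clone_voice("Good morning!", "en", reference, tts, converter,
                      emotion="cheerful")
chinese = clone_voice("Zao shang hao!", "zh", reference, tts, converter,
                      speed=0.9)  # "good morning" in (romanized) Chinese
```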
OpenVoice thus achieves versatile instant voice cloning: it replicates a reference speaker's voice and generates speech in multiple languages, with granular control over voice styles including emotion, accent, rhythm, pauses, and intonation. It accurately clones the reference speaker's tone color even when the language of the reference audio or of the generated speech is unseen in the training dataset, and it demonstrates superior performance compared to commercially available APIs while remaining computationally efficient.
In conclusion, OpenVoice showcases impressive capabilities in instant voice cloning, surpassing prior methods in flexibility regarding voice styles and languages. The fundamental idea behind this approach is rooted in the notion that training a base speaker TTS model to handle voice styles and languages is relatively straightforward, as long as the model isn’t tasked with cloning the exact tone color of the reference speaker. As a result, OpenVoice introduces a remarkable design principle by separating the cloning of tone color from other voice styles and language components, enhancing its overall versatility.
Check out the Paper and Github. All credit for this research goes to the researchers of this project.
Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.