This Amazing Voice Cloning Software Triggered An Ethics Debate

An artificial intelligence (AI) platform designed by the Canadian AI start-up Lyrebird can mimic anyone’s voice after listening to less than a minute of audio. Last month, in a public demo of their voice cloning software, the founders of the Montreal-based company posted fake recordings of Donald Trump, Barack Obama, and Hillary Clinton.

Voice cloning software: boon or bane?

Lyrebird, which aims to “create the most realistic voices in the world,” has already put its technology to good use, including for the founder of the Ice Bucket Challenge, Pat Quinn, who is gradually losing his mobility and has already lost his speech to motor neuron disease. Using audio from video clips of Quinn’s speeches, Lyrebird recreated his voice, replacing the robotic text-to-speech voice he previously used to communicate.

To clone a voice, a user registers on Lyrebird’s website and spends about a minute recording themselves reading fairly obscure sentences such as, “Most people there take it out pretty early in life,” and, “Thousands of letters danced across the amateur author’s screen.”

Lyrebird’s voice cloning software is certainly impressive, but like every new technology it has downsides. Lyrebird co-founder José Sotelo acknowledged the ways the technology could be misused while addressing the larger question of the blurring line between reality and fiction.

“Unfortunately, technology, it’s not possible to stop it,” he told Bloomberg in an interview posted on YouTube. “It’s not something that we should be really afraid of. It’s something that we should be careful about, but I feel, enthusiastic about.”

Are voice recordings considered legal evidence?

Concerns about the new technology are obvious, given the abundance of fake news everywhere. The startup has, however, published an ethics statement on its website describing “important societal issues” raised by fabricated voice recordings. According to Lyrebird, voice recordings are regarded as strong evidence in many societies and jurisdictions, yet such evidence is quite easy to manipulate.

Treating recordings as legal evidence could therefore have dangerous consequences, such as misleading diplomats, enabling fraud, and other abuses that hijack someone’s authority.

“By releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks,” the website said, adding that they want everyone to be aware that such a technology exists and that impersonating someone else’s voice is possible.

Lyrebird plans to offer an API so third parties can use its voice cloning software for their own purposes. When asked if making the voice cloning software available to all is safe, Alexandre de Brébisson, CEO and cofounder of Lyrebird, told TechCrunch earlier that they want to educate people that such a technology exists and that audio recordings are not as reliable as everyone assumes them to be.

“It is similar to what Photoshop did,” de Brébisson said, adding that withholding the technology because of its potential misuse would not solve the problem, since the positive uses outweigh the bad ones. Moreover, if the company does not release the technology now, others may in the future, possibly with bad intentions.

AI everywhere, from voices to faces

AI is now almost everywhere, and there is no shortage of technology companies embedding it into novel apps and features. FakeApp is one such AI-powered synthesis tool that swaps faces in videos. Some of the swapped faces fit convincingly while others miss the mark, but together they demonstrate AI’s growing ability to imitate human appearance and behavior.

A decade ago, faking footage was possible, but it required an expert in computer graphics, and such projects were hard to keep under wraps because they involved many people. Now these tricks are available at the push of a button.

Since January, FakeApp has been downloaded more than 100,000 times and has been used to produce a series of fake pornographic videos featuring celebrities and politicians. Reddit has banned the app’s related communities from its platform.

To use FakeApp, a user feeds in several hundred pictures of the source and target faces. The program then runs deep learning algorithms to find patterns and similarities between the two faces; once training is complete, the model is ready to perform a face swap.
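The idea behind such face-swap tools can be sketched in miniature: one shared encoder learns features common to both faces, and a separate decoder per person learns to redraw that person. The toy numpy example below is illustrative only — the data is random and the names are invented, not FakeApp’s actual code — but it shows the training structure.

```python
# Toy sketch of the shared-encoder / per-person-decoder idea used by
# face-swap tools. Real tools train deep convolutional networks on
# aligned face crops; here, tiny linear layers on random vectors stand
# in for the images, purely to illustrate the structure.
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, N = 64, 16, 200               # toy "image" size, bottleneck, samples

faces_a = rng.normal(0.0, 1.0, (N, DIM))   # stand-in for person A's photos
faces_b = rng.normal(0.5, 1.0, (N, DIM))   # stand-in for person B's photos

enc = rng.normal(0.0, 0.1, (DIM, LATENT))    # shared encoder weights
dec_a = rng.normal(0.0, 0.1, (LATENT, DIM))  # decoder that redraws person A
dec_b = rng.normal(0.0, 0.1, (LATENT, DIM))  # decoder that redraws person B

def mse(x, y):
    return float(np.mean((x - y) ** 2))

loss_before = mse(faces_a @ enc @ dec_a, faces_a)

lr = 0.01
for step in range(300):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc          # shared encoder: person-agnostic features
        recon = z @ dec          # person-specific decoder
        err = recon - faces
        dec -= lr * (z.T @ err) / N                 # gradient step, decoder
        enc -= lr * (faces.T @ (err @ dec.T)) / N   # gradient step, encoder

loss_after = mse(faces_a @ enc @ dec_a, faces_a)

# The "swap": encode a picture of person A, then decode it with
# person B's decoder, producing B's appearance from A's input.
swapped = (faces_a[:1] @ enc) @ dec_b
```

Swapping the decoders is what turns a reconstruction model into a face-swapping one; the quality of real swaps depends on much larger networks and carefully aligned training photos.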
