Darth Vader's voice lines in the hit series Obi-Wan Kenobi were generated by cutting-edge artificial intelligence, a development that could reshape the landscape of voice acting. As James Earl Jones retires from voicing Darth Vader, machine learning tools are advancing to fill the gap. AI has already transformed many jobs, automating large portions of work in some industries, and voice actors may now be at risk of being replaced as well.

Machine learning tools have been steadily progressing from analytical tasks, such as video classification, to more creative endeavors. For example, DALL-E can now generate unique art from a text prompt. Artists have raised concerns about these algorithms because they allow large volumes of work to be produced at low cost, effectively undercutting humans already competing in a crowded field. If the methods used to produce Vader's speech in Obi-Wan Kenobi prove cost-effective and capable of replicating other voices, voice actors could face a similar scenario.

Related: DALL-E Is Now Open To All Users: Here's How To Sign Up

The main protection voice actors have against being replaced by AI programs is the huge amount of data needed to produce a convincing model of a human voice. Other parts of the film industry are easier to automate: Cinelytic, an AI tool used by Warner Bros., can inform business decisions by analyzing large datasets on box office performance and emerging trends in cinema. A Vanity Fair article describes the complex process the Ukrainian AI start-up Respeecher used to generate audio for Darth Vader's voice lines. It required vast amounts of archival audio provided by Lucasfilm, as well as a team of researchers developing AI tools specifically to recreate one voice. While the expense is worthwhile to keep Vader's voice consistent, it is not undercutting traditional voice acting, yet.

Creating New AI Voices Is Even Harder


Respeecher has invested considerable time in an AI program that replicates the voice of one specific actor. While Nvidia and Google are developing faster AI training processes, the additional complexity of creating an entirely new, unique voice puts the cost of these tools far above the cost of hiring a human actor. The components of convincing human speech, from the emphasis placed on certain words to pauses for breath, pose a hard problem for machines to solve; all of them come guaranteed with traditionally recorded voice acting. The difficulty is exacerbated by the lack of available data for companies developing AI voice actors.
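To give a rough sense of what voice cloning looks like in practice, the sketch below uses the open-source Coqui TTS library and its pretrained YourTTS voice-cloning model. This is only an illustration of the general technique, not Respeecher's proprietary pipeline, and the file names are placeholders.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library.
# This demonstrates the general technique only; Respeecher's production
# system is proprietary and far more sophisticated.
# "reference_speaker.wav" and "cloned_line.wav" are placeholder file names.
from TTS.api import TTS

# Download and load a pretrained multilingual voice-cloning model.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Synthesize a line in the style of the voice heard in the reference clip.
# Note that nothing here controls emphasis, pacing, or breaths: the prosody
# that makes a performance convincing is exactly what such models struggle with.
tts.tts_to_file(
    text="The circle is now complete.",
    speaker_wav="reference_speaker.wav",  # a short clip of the target voice
    language="en",
    file_path="cloned_line.wav",
)
```

Even a working clone like this captures timbre far more readily than performance; directing an AI voice to deliver a line with a specific emotional intent remains an open problem.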

Respeecher has the benefit of being employed by Lucasfilm to produce Darth Vader's voice lines. While the consent of Jones and Lucasfilm gives it access to many of Jones' existing performances, it still cannot use recordings of the actor's voice from works owned by other production companies. Government intervention is also slowing the adoption of AI-generated content, with a recent ruling establishing that AI artistic creations do not qualify for copyright.

The combination of the difficulty of producing new, unique voices and the lack of centralized ownership of existing voice data means voice actors can enjoy job security for the foreseeable future without fear of being replaced by AI. The incredible work by Respeecher showcased in the Obi-Wan Kenobi series marks a step toward solving this problem, but it may be decades before a company produces an AI voice actor that can compete with humans on both cost and listening experience. As AI becomes more ubiquitous in the film industry, unions and rights groups must keep an eye on developments like these to ensure that artificial intelligence continues only to emulate villains like Vader, and does not become one.

Next: Meta Gives First Look At Its Text-To-Video AI Generator

Source: Vanity Fair