China mulls legality of AI-generated voice used in audiobooks

A Beijing court will have to decide whether an AI-generated voice, alleged to resemble that of a voiceover artist and used without her approval, has infringed on her voice rights.

The Beijing Internet Court on Tuesday began its hearing of a lawsuit filed by the artist, whose family name is Yin, claiming an AI-powered likeness of her voice had been used in audiobooks sold online. She had not given permission for these works to be produced, according to a report by state-owned media outlet China Daily.

Yin said the entities behind the AI-generated content were profiting from the sale of the audiobooks on the platforms where they were sold. She named five companies in her suit, including the provider of the AI software, saying their practices had infringed on her voice rights.

“I’ve never authorized anyone to make deals using my recorded voice, let alone process it with the help of AI, or sell the AI-generated versions,” she said in court. “I make a living with my voice. The audiobooks that use my AI-processed voice have affected my normal work and life.”

The defendants argued that the AI-powered voice was not Yin's original voice and should be treated as distinct from it.

The court is scheduled to reveal its ruling at a later date, China Daily reported. Yin has sued for 600,000 yuan ($84,718) in financial losses and an additional 100,000 yuan for mental distress.

The case follows another last month in which a Chinese court ruled in favor of a plaintiff, surnamed Li, who accused another person of using, without his consent, an image he had generated with open source AI software. Li had posted the picture on his personal social media account and argued that its unauthorized reuse infringed on his intellectual property rights.

In her defense, the defendant said the image had turned up in an online search and bore no watermark or information about its copyright owner. She added that she had not used the content for commercial gain; the image appeared only on her personal webpage, according to China Daily.

In its ruling, the Beijing Internet Court said Li had made an "intellectual investment" in shaping the image to his intentions, including choosing keywords to generate the woman's appearance and the image's lighting.

The court added that people who use AI features to produce an image are still the ones using a tool to create, and that it is the person, rather than the AI, who invests intellectually in generating the image.

Li was reported to have used the AI software Stable Diffusion to produce the image in question.

Commenting on the case, law firm King & Wood Mallesons said the Beijing court's ruling appeared to contradict recent decisions in the US on whether AI-generated content can be copyrighted. The firm pointed to cases such as "Zarya of the Dawn" and "Théâtre D'Opéra Spatial", in which the US Copyright Office denied protection to AI-generated content that lacked human authorship.

The law firm, though, noted a difference between the cases in China and the US, stressing that the Beijing Internet Court's ruling appeared to distinguish "straightforward" AI-generated content, produced with no creative involvement, from content shaped by continuous human intervention to fine-tune the end product. This intervention involved adding prompts and technical parameters until the human creators got the result they wanted.

In the latter case, the Beijing court viewed the content as "AI-assisted" work in which Li had invested personal judgment and made aesthetic choices in producing the image, King & Wood Mallesons wrote. Li also demonstrated that he could reproduce the same picture with the same sequence of instructions, comprising more than 150 prompts and technical parameters.
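The court filings do not disclose which interface or settings Li actually used, so the sketch below is illustrative only: it uses the open source diffusers library to show why fixing the prompt, seed, and sampling parameters makes a Stable Diffusion image reproducible on demand, which is the kind of repeatability the court cited. Every concrete value here (model name, prompt, seed, step count) is a placeholder, not a detail from the case.

```python
# Illustrative sketch, not Li's actual workflow: with the same model, prompt,
# seed, and sampler settings, Stable Diffusion produces the same image each run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model; the case does not name one
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait of a young woman, soft studio lighting"  # hypothetical prompt
generator = torch.Generator("cuda").manual_seed(42)  # fixing the seed pins the output

image = pipe(
    prompt,
    negative_prompt="blurry, low quality",  # hypothetical negative prompt
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,  # same seed + same settings -> same image every time
).images[0]
image.save("reproducible_portrait.png")
```

Change the seed, or let it vary randomly, and each run yields a different picture, which is the "unpredictable" scenario the law firm raises below.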

“It would be interesting to speculate whether the [Beijing Internet Court] would come to the same conclusion, [in] recognizing the copyrightability of the AI picture, if the AI-generated content turns out to be unpredictable, producing various AI pictures each time,” noted the Hong Kong-based law firm. “Would the Chinese judges change their rationale because the human authors do not have ‘control’ in the AI-generated content output?”