AI Voice Clash in China: When Voice AI Agents Face the Court
In April 2024, Beijing’s Internet Court delivered China’s first ruling on AI voice infringement, establishing that a person’s recognizable voice is protected under personality rights and cannot be used commercially without explicit permission. Fast forward to December 2024, and another court in Chengdu reached the opposite outcome: in the so-called “Meng Shuai” case, it held that an AI-generated voice’s similarity to a voice artist’s did not amount to infringement, dismissing the artist’s claims.
What Happened: Two Conflicting Verdicts
- In the first case, the defendant admitted to using the plaintiff’s voice samples for training, so the reproduction was undisputed. The court held that such use violated the plaintiff’s personality rights absent her consent.
- In the “Meng Shuai” case:
• The plaintiff was a professional voice actor (though not a mega-celebrity) who claimed his voice was distinctly recognizable within his industry.
• He presented voiceprint similarity scores (88%–95%) between his original voice and the AI output.
• The defendant argued its AI model used diverse lawful training data, not the plaintiff’s samples.
• The court rejected most voiceprint evidence, declined to find public recognizability, and introduced a “duty of tolerance” for natural likenesses. Consequently, it dismissed all claims.
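For context on the evidence at issue: voiceprint similarity scores like those above are typically produced by comparing speaker embeddings (fixed-length vectors extracted from audio by a speaker-verification model) via cosine similarity. A minimal sketch, using hypothetical short embedding vectors for illustration (real embeddings have hundreds of dimensions and come from trained models):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical speaker embeddings -- real ones are model outputs,
# not hand-written numbers.
original_voice = [0.8, 0.1, 0.5, 0.3]
ai_output = [0.75, 0.15, 0.55, 0.25]

score = cosine_similarity(original_voice, ai_output)
print(f"voiceprint similarity: {score:.2%}")
```

Note that a high score shows acoustic resemblance, not provenance: it cannot by itself establish whose recordings were in the training data, which is exactly the evidentiary gap the Chengdu court seized on.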
Why It Matters: The Rise of Non-Human Workers & Legal Uncertainty
These cases reflect the tensions emerging as AI systems—especially Voice AI Agents and other Non-Human Workers—become capable of mimicking human attributes like voice. The conflicting rulings expose deep legal uncertainty about what happens when AI replicates or regenerates a human voice. Key issues include:
- Who bears the burden of proof over training data and whether voice similarity constitutes misuse.
- What standard of “recognizability” applies: industry insiders vs. the general public.
- Whether courts should tolerate “reasonable similarity” in a world where voice-cloning becomes technically trivial.
As AI agents gain autonomy (even approaching the realm of AI Employees), lawmakers and courts are scrambling to set rules on identity, privacy, and ownership.
Looking Ahead: What to Watch
- Will higher Chinese courts endorse one of these rulings as guiding authority?
- How will legislation evolve to regulate voice cloning, licensing, and AI agents’ outputs?
- What’s the standard for protecting voice assets of non-celebrities in the age of synthetic voices?
These questions are not only vital for voice artists and tech firms—they go to the heart of how society defines identity, control, and rights when machines increasingly mimic humans.
Key Highlights:
- China’s first AI voice infringement ruling (April 2024) affirmed voice protection under personality rights.
- The 2024 “Meng Shuai” case (Chengdu court) rejected infringement claims, citing lack of public recognizability and tolerable similarity.
- Courts diverged on burden of proof, standards of recognition, and acceptance of AI voice resemblance.
- The rise of non-human workers like Voice AI Agents raises new legal challenges around identity, consent, and intellectual property.
- The verdicts underscore urgent need for clearer laws and standards governing voice cloning and AI agent behavior.
Reference:
https://m.chinanews.com/wap/detail/ecnszw/hevxaaq6000924.shtml