OpenAI is in a dispute with US actress Scarlett Johansson over an AI voice in the bot ChatGPT. Johansson says the bot sounds like her. OpenAI has reacted – and “paused” the voice.
AI-controlled chatbots can not only write but also speak to users. Making them sound ever more human and natural is the big goal of companies like OpenAI, the maker of ChatGPT.
Last week, OpenAI presented updates to the chatbot. Impressive, among other things, was how fluently and naturally the bot can now speak to users – and that it can, for example, read a story with different intonations.
“Programmed by a man”
The female voice called Sky attracted a lot of attention and also ridicule. The reason, said comedian Desi Lydic on The Daily Show, was that she sometimes came across as friendly and even very sexy. “It’s clearly programmed by a man. She has all the information in the world, but she seems to say: ‘But I don’t know anything! Teach me, Daddy…’”
Some Internet users said the voice resembled actress Scarlett Johansson. In the 2013 film “Her”, she voiced an artificial intelligence named Samantha – the plot of the film: a man, played by Joaquin Phoenix, falls in love with this AI.
Johansson’s lawyers contact OpenAI
Apparently the comparison is not too far-fetched, because now Scarlett Johansson herself has also spoken out: In a statement, Johansson says that OpenAI boss Sam Altman asked her last September to consider becoming one of the voices for ChatGPT. But she turned down the offer.
Now she has heard from friends and family members that the ChatGPT voice sounds a lot like her. Her lawyers have contacted the company to have the voice deleted.
Not the first lawsuit over voice AI
Sky is one of five voices that the company offers; there are also Breeze, Cove, Juniper, and Ember. Sky has been unavailable since Monday – OpenAI wrote on X, formerly Twitter, that this voice is being paused for the time being.
The post went on to say that Sky was not an imitation but belonged to another professional actress, whose name the company did not want to mention for privacy reasons. She was selected through a casting process.
Voices can now be copied very easily with the help of AI – just recently a group of actors sued the AI company Lovo. The company allegedly used their voices without permission.
Suddenly Morgan Freeman can speak German
An Israeli start-up wants to replace voice actors for films or series with artificial intelligence – with software that digitally edits original voices.
It is quite unusual when the American actor Morgan Freeman, with his uniquely deep voice, suddenly speaks fluent German or Spanish. It sounds as if the Hollywood star had dubbed himself in the film versions for the respective countries. Now, in his 84th year, the Oscar winner has not used the Corona-related standstill of the film sets to learn various foreign languages. Rather, it is a so-called “deep fake” of his unmistakable voice, i.e. a digital edit – presented by the Israeli start-up “Deepdub”.
Brothers with intelligence experience
The company was founded in 2019 by brothers Ofir and Nir Krakowski, who also helped set up the cyber sector of Israel’s domestic intelligence service Shin Bet. Both are enthusiastic film lovers. They find it disappointing when dubbed versions have to do without the actors’ distinctive original voices and instead present a voice-over version by local speakers.
Now they want to revolutionize the film and series market with the help of artificial intelligence. With the “deep learning” synchronization platform they have developed, production companies can transform content from one language into another. The software learns and trains with the help of various clips of the original voices until it is able to use the speech data to create an artificial voice that sounds like the original – just in the different national languages.
Dialects and accents also possible?
“Deepdub” is initially launching a service in German, English, Spanish and French. The start-up is not only promoting the fact that it improves the authenticity of productions and film enjoyment. Hollywood film distributors and streaming services should also be able to save money and time thanks to the artificial voices. Dubbing productions are expensive and often take months. The AI is supposed to do this work within a few weeks at a fraction of the cost.
The Krakowski brothers are also proud that their customers can choose whether the original actors and actresses speak the local language perfectly or with an accent. For example, Morgan Freeman can speak “molto bene” like a native Roman for the Italian market, or Italian with an American accent. Theoretically, various dialects would also be possible. The actor himself has not yet commented on whether he would like to surprise his fans with Low German or Bavarian language skills in the future.
Recently, actress Scarlett Johansson and other voice actors have drawn attention to the need for legal regulation in the field of voice acting.
Technology is evolving at a rapid pace thanks to artificial intelligence (AI). One area that’s seeing significant advances is voice technology, with AI-generated voices becoming more common in various applications such as virtual assistants, audiobooks, and customer service. However, this advancement is giving rise to legal concerns regarding the unauthorized use of people’s voices in AI.
The complex legal issues surrounding voice in AI involve various aspects. Copyright laws are relevant, but the more significant concern often lies in the Right of Publicity, which protects an individual’s control over the commercial use of their likeness, including their voice.
Some recent legal cases shed light on the challenges in this area:
Scarlett Johansson’s Dispute with OpenAI
Actress Scarlett Johansson accused OpenAI of creating an AI voice for ChatGPT that sounded remarkably similar to hers. “When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine,” Johansson expressed. OpenAI later issued an apology and suspended the “Sky” voice mode. This controversy underscores the importance of avoiding deliberate mimicry of celebrity voices and emphasizes the need for transparency and consent when using AI-generated voices.
LOVO’s Class Action Lawsuit
Voiceover actors Paul Skye Lehrman and Linnea Sage filed a class action lawsuit against AI startup LOVO, alleging that LOVO misappropriated their voices and those of other celebrities like Johansson, Ariana Grande, and Conan O’Brien. This case highlights the legal risks associated with utilizing AI voices without proper authorization. According to Pollock Cohen attorneys Steve Cohen and Anna Menkova, “LOVO claims to compensate voice actors. That may be true in some cases. But plaintiffs and other members of the class have received no revenue from the continued unauthorized use of their voices by LOVO and LOVO clients.”
Key Legal Issues in AI Voice Technology
Some of the main legal concerns regarding AI voice technology include:
Rights of Publicity
Performers have rights to their names, voices, and likenesses, even after death in many U.S. states, including New York. Unauthorized use of a performer’s voice could infringe on these rights. When an AI generates a voice that closely resembles a celebrity, questions arise about whether the AI is exploiting their likeness without permission.
Consumer Protection Laws
Misleading advertising and passing one thing off as another can result in legal action. AI-generated voices must not deceive consumers or misrepresent products or services. For instance, using an AI voice in a commercial without proper disclosure could violate consumer protection laws.
Guild and Union Agreements
Contracts between performers and studios often govern voice performances, outlining compensation, exclusivity, and other terms. When AI-generated voices are employed, studios and developers must consider compliance with existing contracts. If an AI voice mimics a unionized actor’s voice, disputes could arise.
The Future of Voice and the Law
These cases highlight the need for clearer legal frameworks surrounding the use of voices in AI. Some suggested solutions include:
“Right of Voice” Legislation
Several U.S. states are contemplating legislation that would grant individuals a specific “Right of Voice” alongside the Right of Publicity.
Transparency and Disclosure
Requiring developers to be transparent about AI-generated voices and obtain proper licensing could be a step forward.
Unauthorized use of voices in AI presents a complex legal challenge. As AI technology continues to advance, so too must the laws governing its use. By establishing robust legal frameworks that protect individual rights while fostering innovation, we can navigate this uncharted territory and ensure the ethical development of voice AI.
Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act explicitly includes a person’s voice as a protected property right for the first time, broadly defining “voice” to encompass both an individual’s “actual voice” and a “simulation” of the individual’s voice.
Violations of the ELVIS Act can lead to civil action enforcement and criminal enforcement as a Class A misdemeanor, which carries penalties of up to 11 months, 29 days in jail and/or fines up to $2,500.00.
Music labels with artist contracts may seek remedies against wrongdoers under the ELVIS Act, which takes effect on July 1, 2024, and applies only to Tennessee residents.
The proliferation of AI has caused growing concern among musicians, music industry leaders, and lawmakers, who have advocated for stronger protections for musicians’ copyrights and other intellectual property. This alert from Holland & Knight examines how the Ensuring Likeness Voice and Image Security (ELVIS) Act of 2024 (ELVIS Act) enhances protections for the name, image, likeness, and voice (NIL+V) of artists through artificial intelligence and explores additional safeguards and rights for artists that may be forthcoming.
The ELVIS Act states that every individual holds a property right in the use of their NIL+V in any medium and in any manner, including use in songs, documentaries, films, books, and social media posts (e.g., TikTok, Instagram), among other platforms.
The Tennessee General Assembly has provided a summary and the complete text of the ELVIS Act.
Significance of the ELVIS Act
The advancing capabilities of AI have outstripped regulators’ ability to define boundaries around AI usage in various industries. Legislators are keen to address current issues and anticipate new challenges related to the use of AI technology to replicate or imitate individuals, particularly in diverse entertainment sectors.
Protection for Recording Artists: AI voice synthesis technology has made recording artists susceptible to highly convincing impersonations known as “voice clones,” which could potentially confuse, offend, defraud, or deceive their fans and the general public. The use of voice clones could devalue a recording artist’s unique talent by mass-producing music featuring an AI approximation of the artist’s voice. For artists, Tennessee’s new law establishes a basis for them to receive explicit protection over their voices for the first time, in addition to the standard name, image, and likeness (NIL) rights.
Protection for Voice Actors, Podcasters, and Others: While much attention has been focused on its potential impact in the music industry and voice cloning of famous artists, the ELVIS Act also safeguards podcasters and voice actors, regardless of their level of renown, from the unjust exploitation of their voices, such as by former employers after they have left the company. Individuals have a new tool to protect their personal brands and ensure the enduring value of their voice work.
Path to the Present
A 2019 episode of the Netflix anthology series “Black Mirror” (“Rachel, Jack and Ashley Too”) anticipated the concerns confronting artists today: the use of their voices to create and release new content without their control or approval. These concerns have only heightened as AI technologies have become more sophisticated and capable of producing deep fakes and voice clones that are nearly indistinguishable from the originals.
In the wake of the controversial release of the alleged “Fake-Drake” song “Heart on My Sleeve” by Ghostwriter, a TikTok user who utilized AI technology to produce the song without consent, AI voice cloning has become a prominent topic. Since shortly after the song’s release, numerous music business executives have been urging legislation to regulate AI in the music industry.
Support and Concerns
Prior to its enactment, the bill that later became the ELVIS Act was extensively discussed in both House and Senate committee hearings. The music industry broadly supported the bill in these hearings, and local talents, including Luke Bryan, Chris Janson, Lindsay Ell, Natalie Grant, and others, expressed their support for the bill. However, members of the film and TV industry raised worries that the “right to publicity” protections included in the ELVIS Act would unduly restrict the production of movies and shows by, for instance, imposing an excessive burden to obtain the necessary approvals or permissions to use an individual’s name, image, voice, or likeness. Despite their objections, the bill garnered unanimous support from Tennessee legislators in all relevant committees and on the House and Senate floors (30-0 in the Senate and 93-0 in the House).
The ELVIS Act was approved on March 21, 2024, without substantial revision and with substantial enthusiasm from prominent members of the Nashville music community.
Fundamental Aspects of the ELVIS Act
The ELVIS Act revises Tennessee’s existing Personal Rights Protection Act (PPRA) of 1984, which was enacted in part to extend Elvis Presley’s publicity rights after his death in 1977. The PPRA forbade the use of a person’s name, image, or photograph solely “for purposes of advertising” and permitted both civil and criminal actions for breaches; however, it did not extend protections to the use of a person’s voice.
Most notably, the ELVIS Act adds an individual’s actual or simulated “voice” to the list of personal attributes already safeguarded by the PPRA. It also amends the PPRA in three significant ways:
1. An individual can be held accountable in a civil lawsuit and charged with a Class A misdemeanor if they:
– Share, perform, distribute, transmit, or otherwise make public an individual’s voice or likeness with the knowledge that the use of the voice or likeness was not authorized by the individual, or by a person with the appropriate authority in the case of minors or deceased individuals.
– Share, transmit, or otherwise make available an algorithm, software, tool, or other technology primarily intended to produce an identifiable individual’s photograph, voice, or likeness with the knowledge that sharing or making available the photograph, voice, or likeness was not authorized by the individual or by a person with appropriate authority in the case of minors and the deceased.
2. A person or entity with exclusive rights to an individual’s personal services as a recording artist or the distribution of sound recordings capturing an individual’s audio performances can take legal action against unauthorized use on behalf of the individual.
3. Use of an individual’s name, photograph, voice, or likeness can be considered fair use under copyright law if:
– It is used in connection with news, public affairs, or sports broadcasts for comment, criticism, scholarship, satire, or parody purposes.
– It is a portrayal of the individual in an audiovisual work, unless the work creates a false impression that the individual participated in it.
– It appears fleetingly or incidentally in an advertisement or commercial announcement for any of the preceding uses.
Violations of the ELVIS Act can result in civil and criminal enforcement as a Class A misdemeanor, carrying penalties of up to 11 months, 29 days in jail and/or fines up to $2,500.
Right-of-publicity protections for name, image, and likeness vary from state to state across the U.S. Approximately 39 states have passed or proposed Name, Image, and Likeness (NIL) legislation. Tennessee’s ELVIS Act is not the first to include protection for an individual’s voice (NIL+V), as California has longstanding NIL+V protections in place, but it is the first to expressly protect against uses of AI to infringe on an individual’s rights to their own NIL+V.
The federal government is also working on solutions to address concerns about publicity rights. In January 2024, a bipartisan group of House legislators introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (No AI FRAUD Act) to protect one’s voice and likeness, building upon the Senate’s draft bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), which was introduced in October 2023.
Although the No AI FRAUD Act aims to establish broader and more synchronized protections on the federal level, artists living in states with stronger protections than the No AI FRAUD Act may prefer seeking redress under state law.
“Publicly available” does not automatically mean “free to share without repercussion.” Avoid copying, promoting, or circulating anything that uses an individual’s name, image, or likeness, or their voice or a simulation of it, without consent.
Seeking permission or obtaining a license can reduce the risk of potential infringement claims, especially for commercial uses. Stay updated on developments in NIL+V law, as the ELVIS Act applies only to Tennessee residents, and other states may introduce similar legislation.
AI will undoubtedly influence the future of the arts and music industry as its technology advances. For more information about the ELVIS Act or questions about potentially violating an artist’s publicity rights using AI and safeguarding name, image, likeness, and voice rights, please contact the authors.
Understanding AI Voices and Their Legality
AI voice replication technology, built on advanced models trained on human speech, can now bring realistic digital voices to life. Collaboration among various AI labs has made these voices common in gaming, streaming services, and other conversational applications.
As the prevalence of AI-based vocalizations grows, ethical and legal questions have been raised, sparking a debate about their place in today’s society.
The Development of AI Voices
Creating voices with AI is now a reality, made possible by deep learning algorithms and neural networks.
The process involves training speech models on samples of human speech until they can convincingly mimic it.
Exposing these models to a variety of human voices allows them to produce digital vocalizations with lifelike qualities comparable to natural tones.
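As a loose illustration of that training idea, the toy sketch below (plain NumPy, with made-up synthetic “speakers” and a hypothetical `voiceprint` helper — not any real product’s method) reduces a voice to an average frequency spectrum and shows that two clips of the same synthetic voice match each other far more closely than clips of different voices:

```python
import numpy as np

def voiceprint(samples, n_fft=1024):
    """Crude speaker fingerprint: average magnitude spectrum over frames.
    (A stand-in for the learned embeddings real voice-cloning systems use.)"""
    frames = [samples[i:i + n_fft] for i in range(0, len(samples) - n_fft + 1, n_fft)]
    spectra = [np.abs(np.fft.rfft(frame)) for frame in frames]
    vp = np.mean(spectra, axis=0)
    return vp / np.linalg.norm(vp)  # unit-normalize for cosine comparison

def similarity(a, b):
    """Cosine similarity of two unit-length voiceprints."""
    return float(np.dot(a, b))

# Synthetic stand-ins for recorded voices: each "speaker" is a mix of tones.
sr = 16000
t = np.arange(sr) / sr
speaker_a_clip1 = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
speaker_a_clip2 = np.sin(2 * np.pi * 200 * t) + 0.4 * np.sin(2 * np.pi * 400 * t)
speaker_b_clip = np.sin(2 * np.pi * 310 * t) + 0.5 * np.sin(2 * np.pi * 620 * t)

same = similarity(voiceprint(speaker_a_clip1), voiceprint(speaker_a_clip2))
diff = similarity(voiceprint(speaker_a_clip1), voiceprint(speaker_b_clip))
# Two clips of the same "voice" score much higher than clips of different voices.
```

Real systems replace this hand-rolled spectrum with neural speaker embeddings learned from hours of audio, but the underlying principle — matching voices by their spectral fingerprint — is the same.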
Legal Aspects of AI Voice Usage
Regarding AI voices, specific regulations may be necessary depending on the particular context and location. For example, utilizing a prominent figure’s voice without consent might result in legal consequences.
If copyrighted material is used to generate AI-based audio, regulations may limit how freely that content can be used for vocalization.
Many countries’ existing laws have yet to provide sufficient protection against potential issues regarding AI-based audio content creation tools, and the technology’s rapid evolution makes it challenging to implement new legislation.
Factors Impacting AI Voice Legality
As AI technology and voice services advance, ongoing monitoring of legal issues such as copyright infringement or intellectual property rights is necessary to ensure responsible use.
For example, using AI-generated voice-overs without the creator’s permission could be unlawful. It’s important for users of these voices to be mindful of potential consequences that may arise from not following applicable laws.
Regulating AI Voices: Current Laws and Future Trends
As the technology becomes increasingly popular, current laws are being scrutinized to assess whether they adequately address this new phenomenon. This has led governments and legislators to explore the development of regulations specifically tailored for these types of artificial technology.
When considering potential regulations, various international perspectives should be taken into account in decision-making. Understanding the responses of different countries is a vital part of creating sound legislation regarding virtual vocalizations originating from AI sources.
Existing Laws and Regulations
This technology’s development has sparked the need for new legal frameworks to address associated issues. For instance, the California AI Accountability Act was introduced to “encourage continued innovation while ensuring the rights and opportunities of all Californians are protected.” Among the proposed regulations are provisions that “would require California state agencies to notify users when they are interacting with AI.” It recognizes the potential benefits of generative AI while also addressing potential misuse of the technology.
Despite existing and developing laws, they may not be sufficient to cover every issue that arises with voice synthesis systems, given the unique challenges posed by this type of technology.
Potential New Regulations and Legislation
Given the recent advancements in AI voice technology, adapting legal frameworks to ensure responsible and ethical use is critical.
Legislators are contemplating new laws and enacting regulations to address the unique issues caused by this technology. Some bills address discrimination resulting from using AI, while others focus on its applications.
International Perspectives on AI Voice Regulation
Different countries may have varying regulations for controlling AI voice technology. Some may be very strict in their regulations, while others may take a more lenient stance on the issue. Regardless of the policy, it is essential to establish appropriate standards for managing generative voice and AI voice technology to protect individuals and businesses and ensure responsible use across nations.
With such guidelines in place, safety standards for AI voice technology could become more consistent across countries.
AI Voice Cloning: Ethical Concerns and Legal Implications
The use of voice cloning technology raises numerous ethical issues and potential legal ramifications, including abuse for impersonation or deception.
It is crucial to consider all ethical aspects of AI voice and related technologies, along with how to minimize their potential negative impact on society.
Ethical Considerations
When utilizing this technology, ethical issues such as privacy and consent must be considered. Unauthorized use of someone’s voice can lead to identity theft or other malicious activities that violate an individual’s right to privacy.
Concerns regarding ownership also arise when another person’s voice is used without their consent. The ethical implications of this technology must therefore be carefully examined.
Legal Consequences of Voice Cloning Misuse
Misusing voice cloning technology can result in legal consequences for both users and AI providers, including defamation, copyright infringement, impersonation, or privacy violations.
Those using cloned voices must ensure compliance with relevant laws and ethical regulations related to the use of this technology.
Protecting Against Voice Cloning Misuse
Misuse of voice cloning could be addressed by implementing legal measures, such as explicit provisions related to voice replication and extending the coverage of copyright laws. This would offer individuals and organizations better protection against the risks posed by this technology.
By introducing features like false light protection in addition to voice copyrights, individuals can protect themselves more effectively against the harm associated with voice cloning abuse.
AI Voices in Specific Industries: Challenges and Regulations
The use of AI voices in various sectors, such as entertainment, healthcare, insurance, and government agencies, presents several potential legal issues.
For instance, in the entertainment industry, complying with specific regulations is necessary when creating characters using generative AI.
For government services involving voice interactions between officials and citizens, other relevant laws must be respected.
In healthcare, access rights must be considered when regulating the use of AI-generated voices, so that people’s confidential information is safeguarded. Understanding how people interact with these systems is also crucial.
AI Voices in Entertainment and Media
Adhering to the appropriate laws and regulations is essential when using AI voices in entertainment to avoid potential legal complications related to intellectual property rights. For instance, utilizing an AI-generated voice replicated without consent from a well-known actor or singer could lead to potential repercussions for those involved. It is important to strictly abide by relevant rules when using AI voices in this industry.
AI Voices in Healthcare and Insurance
AI voices are raising concerns in the healthcare and insurance sectors, particularly regarding data collection. Regulators have raised questions about security, privacy, and potential bias when it comes to AI-powered decision-making.
To ensure the responsible and ethical use of AI voices for the benefit of these industries, compliance with applicable regulations is necessary, covering both data handling and the voice technologies themselves.
Use in Government and Public Services
Regulations governing AI voices used by government must be followed to uphold democratic values and integrity. Those deploying the technology in public services must adhere to the relevant laws and guidelines to maintain citizens’ trust and accountability, and to ensure the voices are used ethically and without bias.
Creating Your Own AI Voice: Legal Considerations and Best Practices
To develop AI voices responsibly, users must adhere to specific legal requirements and best practices. This helps them avoid issues related to infringement or misuse of their creations. Guidelines exist for both the development and proper use of these AI voices by consumers.
By following these regulations and recommended strategies, AI voice owners can ensure that their use is conducted ethically, encompassing all aspects of content production and usage surrounding this technology.
Legal Requirements for AI Voice Creation
AI voices are subject to stringent legal requirements, such as obtaining consent and protecting intellectual property rights.
Users should ensure that they do not violate any copyrights or trademarks and that the generated voice is used for legitimate purposes. Being aware of these laws when creating an AI voice helps avoid the consequences of non-compliance.
Avoiding Infringement and Misuse
To steer clear of potential legal complications, creators should be cautious when using copyrighted materials or replicating well-known personalities. One option is to obtain permission from the original voice actor; another is to engage a different performer entirely.
Organizations may also use voice-matching technology to check that their AI voices do not infringe copyrights or other intellectual property rights.
Responsible AI Voice Development and Usage
Developers of AI voices should follow best practices to ensure responsible and ethical use. The voices should be fair, address privacy concerns, and provide clear explanations for each action taken, always prioritizing user well-being. Security requirements should not be neglected when designing these AI voices.
Summary
AI-generated voices present various possibilities and challenges that require our attention and careful consideration. Understanding the ethical and legal aspects of AI voice generation is crucial for individuals, organizations, and governments to use it effectively and responsibly, ensuring a positive future for this advancing technology.
Frequently Asked Questions
Learning about the legal and ethical dimensions is essential for anyone who wants to create or use this technology. This FAQ answers common questions about the legality, usage, and development of AI voices, and serves as a quick reference for approaching the technology legally and ethically.
AI technologies are advancing every day, making it important to understand their implications when they are used in voice-based interaction systems.
Is it illegal to replicate a voice?
Replicating a human voice can raise legal issues, as it may violate copyright or other intellectual property rights. Obtaining the individual’s consent is crucial, and AI-generated voices must comply with data privacy regulations and personal protection laws. Anyone creating an artificial version of someone’s voice should stay mindful of the potential consequences and ensure every step aligns with existing legislation on AI technology and sound recordings.
Is AI voice replication legal?
When it comes to AI voice replication, regulations have not yet been established, and the legality of this technology is uncertain. It could be considered illegal if used for deceptive purposes. The use of AI to replicate someone’s voice needs to be regulated legally and ethically.
Can AI voice be used in a song?
AI technology can be used to create new music and songs. Using AI voice models and synthesizing melodies, harmonies, and lyrics allows for a unique sound and tone created by this advanced technology. The technology should only be used with the explicit consent of any artists whose voices are utilized, and they should receive compensation.
Can AI voice be used for commercial purposes?
While it is simpler to use this technology for non-commercial purposes, commercial use involves more legal implications. If you want to create derivative songs, permission must be obtained from the artist whose voice was used.
Are there any regulations on AI yet?
As of now, there is no comprehensive legal framework for AI or data protection at the national level in the United States. Certain states, like California, have passed laws and regulations related to AI.
Can you be sued for using an AI voice?
Misuse or copyright infringement can lead to legal consequences. Examples of these repercussions include defamation, false light, or fraudulent activity involving impersonation. To prevent such issues, users should ensure that they comply with laws on AI use and uphold ethical standards when using these AI voices in any way.
How much does it cost to create a clone of your own voice?
The cost of creating a voice clone depends on the technology and resources used. To determine the best option for your needs, research various providers and their pricing models for voice cloning technologies.
How much does it cost to create an AI voice with exclusive rights?
Creating an AI voice with exclusive rights can be costly due to the legal agreements and unique datasets required. Though a significant investment, it gives companies exclusive access to the resulting voice. Data must be collected from various sources and the necessary legal contracts put in place; together, these factors drive the high cost of exclusive, advanced AI voices.
Is AI voice-over permitted on YouTube?
Users should be careful when using AI voice-overs on YouTube, as they could raise copyright and intellectual property issues. Care must be taken to ensure that these voices do not violate any existing copyright laws or trademarks and are not used for illegal activities.
Is creating a deepfake legal?
To avoid any legal issues, it is essential to ensure that no existing copyrights or trademarks are infringed upon when using deep fakes, while also ensuring they are not used for illicit activities. It’s also important to recognize the potential ethical implications of the technology.
Can artificial intelligence imitate anyone’s voice?
Using AI, it is possible to replicate anyone’s voice, which may give rise to legal and ethical concerns. Any voice generated using AI technology should not violate existing copyrights or trademarks, or be used for illegal purposes.
Are synthetic voices derived from actual people?
Human voices play a crucial role in training AI voice models. A digital replica of a well-known individual’s voice can be created by capturing a recording and using AI to produce a highly realistic audio experience. These AI-generated voices have diverse applications, from virtual assistants to automated systems.
Will Scarlett Johansson pursue legal action against OpenAI for creating a voice assistant that mimics the character she portrayed in the 2013 film “Her,” which tells the story of a man’s romantic relationship with an AI?
The question arises after Johansson said that OpenAI tried to recruit her to voice an AI assistant for ChatGPT and, when she declined, proceeded to develop a similar-sounding voice. OpenAI’s co-founder and CEO, Sam Altman, could be a target in such a lawsuit.
Legal analysts suggest that Johansson might have a strong and convincing case in court if she chooses to take legal action, referencing a long history of previous cases that could lead to significant financial penalties for one of the industry’s leading AI firms and raise concerns about the sector’s preparedness to address AI’s various complex issues.
OpenAI’s apparent unawareness of this legal precedent, or potentially being willfully neglectful, emphasizes criticisms regarding the lack of regulation in the AI field and the necessity for better safeguards for creators.
OpenAI did not immediately respond to a request for comment.
OpenAI’s potential legal exposure
Legal experts indicate there are two types of law that could apply in this case, although only one is likely to be relevant based on the details currently available.
The first pertains to copyright law. If OpenAI had directly sampled Johansson’s films or other published materials to develop Sky, the playful voice assistant introduced in an update to ChatGPT, they might face copyright issues, assuming they didn’t obtain prior authorization.
That doesn’t seem to be the situation, at least according to OpenAI’s previous claims. The organization asserts that it did not utilize Johansson’s actual voice, as stated in a blog post, but instead employed “a different professional actress using her own natural speaking voice.”
While this might suffice to mitigate a copyright claim, it would likely not protect OpenAI from the second type of law that is relevant, according to Tiffany Li, a law professor specializing in intellectual property and technology at the University of San Francisco.
“It doesn’t matter if OpenAI used any of Scarlett Johansson’s actual voice samples,” Li noted on Threads. “She still has a valid right of publicity case here.”
Understanding publicity rights laws
Many states have laws concerning the right of publicity that shield individuals’ likenesses from being exploited or used without consent, and California’s law—where both Hollywood and OpenAI are situated—is among the most robust.
The legislation in California forbids the unauthorized use of an individual’s “name, voice, signature, photograph, or likeness” for the purposes of “advertising or selling, or soliciting purchases of, products, merchandise, goods or services.”
In contrast to a copyright claim, which relates to intellectual property, a right-of-publicity claim focuses more on the unauthorized commercialization of a person’s identity or public persona. In this scenario, Johansson could argue that OpenAI illegally profited from her identity by misleading users into believing she had provided the voice for Sky.
One possible defense OpenAI could present is that their widely circulated videos showcasing Sky’s features were not technically created as advertisements or intended to induce sales, according to John Bergmayer, legal director at Public Knowledge, a consumer advocacy organization. However, he also indicated that this might be a rather weak argument.
“I believe that usage in a highly publicized promotional video or presentation easily satisfies that requirement,” he stated.
In addition to claiming it never used Johansson’s actual voice and that its videos were not advertisements, OpenAI could assert that it did not aim to precisely replicate Johansson. However, there is considerable legal precedent—and one very inconvenient fact for OpenAI—that undermines that defense, according to legal professionals.
A precedent involving Bette Midler
There are about six cases in this area that illustrate how OpenAI may find itself in trouble. Here are two of the most significant.
In 1988, singer Bette Midler successfully sued Ford Motor Company over a commercial featuring what sounded like her voice. In reality, the jingle in the advertisement had been recorded by one of Midler’s backup singers after she declined the opportunity to perform it. The similarities between the imitation and the original were so remarkable that many people told Midler they believed she had sung in the commercial.
The US Court of Appeals for the 9th Circuit ruled in favor of Midler.
“Why did the defendants ask Midler to sing if her voice was not of use to them?” the court articulated in its ruling. “Why did they carefully seek out a sound-alike and instruct her to imitate Midler if Midler’s voice was not of value to them? What they sought was a quality of Midler’s identity. Its worth was what the market would have paid for Midler to have performed the commercial in person.”
In a related case decided by the 9th Circuit in 1992, singer Tom Waits received $2.6 million in damages against snack food company Frito-Lay over a Doritos advertisement that featured an imitation of Waits’ distinctive raspy voice. In that instance, the court reaffirmed its decision in the Midler case, further establishing the notion that California’s right of publicity law protects individuals from unauthorized exploitation.
The scenario involving Johansson and OpenAI closely mirrors previous cases. Johansson claims that OpenAI contacted her to voice the character Sky, which she declined. Months later, however, OpenAI launched a version of Sky that many compared to Johansson, leading her to say that even her “closest friends … could not tell the difference.”
The success of OpenAI in facing a potential publicity rights lawsuit may depend on their intent — specifically, whether the company can demonstrate it did not aim to replicate Johansson’s voice, according to James Grimmelmann, a law professor at Cornell University.
In a blog post on Sunday, OpenAI asserted that Sky was “not an imitation of Scarlett Johansson,” emphasizing that the goal of its AI voices is to create “an approachable voice that inspires trust,” one characterized by a “rich tone” that is “natural and easy to listen to.”
On Monday evening, Altman issued a statement in response to Johansson’s remarks, asserting that the voice actor for Sky was engaged before any contact was made with Johansson and expressed regret for the lack of communication.
However, OpenAI may have compromised its position.
“OpenAI could have had a credible case if they hadn’t spent the last two weeks suggesting they had essentially created Samantha from ‘Her,’” Grimmelmann noted, referring to Johansson’s character from the 2013 film. “There was significant public recognition tying Sky to Samantha, and that was likely intentional.”
The numerous comparisons made by users to Johansson were further emphasized when Altman shared a post on X the day the product was announced: “her.” Johansson’s statement indicated that Altman’s post insinuated that “the similarity was intentional.” Less than a year ago, Altman commented to audiences that “Her” was not only “incredibly prophetic” but also his favorite science-fiction film.
When viewed together, these elements imply that OpenAI may have intended for users to implicitly connect Sky with Johansson in ways that California’s law tends to prohibit.
Altman’s post was described as “incredibly unwise” by Bergmayer. “Considering the circumstances here — the negotiations, the tweet — even if OpenAI was utilizing a voice actor who merely sounded like Johansson, it still poses a substantial likelihood of their liability.”
Lost in deepfake translation
The situation involving Johansson exemplifies the potential pitfalls of deepfakes and AI. While California’s publicity law safeguards all individuals, some state statutes protect only celebrities, and not all states have such laws.
Moreover, existing laws may safeguard an individual’s image or voice but may not encompass some of the capabilities offered by AI, such as instructing a model to recreate art “in the style” of a famous artist.
“This case illustrates the necessity for a federal right to publicity law, given that not every situation will conveniently involve California,” Bergmayer stated.
Some technology companies are stepping in. Adobe, the creator of Photoshop, has advocated for a proposal termed the FAIR Act, aimed at establishing a federal safeguard against AI impersonation. The company contends that while it markets AI tools as part of its creative software, it has a vested interest in ensuring its customers can continue to benefit from their own work.
“The concern among creators is that AI could undermine their economic survival because it is trained on their work,” stated Dana Rao, Adobe’s general counsel and chief trust officer. “That’s the existential worry faced by the community. At Adobe, we commit to providing the best technology to our creators while advocating for responsible innovation.”
Certain US lawmakers are drafting proposals to tackle the issue. Last year, a bipartisan group of senators introduced a discussion draft of the NO FAKES Act, a bill aimed at safeguarding creators. Another proposal in the House is known as the No AI Fraud Act.
However, digital rights advocates and academics have cautioned that this legislation is far from ideal, leaving significant loopholes in certain areas while also potentially creating unintended consequences in others.
Numerous concerns arise about safeguarding free expression, such as the extent to which individuals can utilize others’ likenesses for educational or other non-commercial purposes, as well as the rights concerning a person’s image posthumously — which is particularly relevant in recreating deceased actors in films or music and could ultimately disadvantage living performers, as noted by Jennifer Rothman, an intellectual property expert and law professor at the University of Pennsylvania.
“This creates opportunities for record labels to cheaply produce AI-generated performances, including those of deceased celebrities, and take advantage of this lucrative option over costlier performances by living individuals,” Rothman wrote in a blog post in October regarding the NO FAKES Act.
The ongoing discussion about publicity rights in Congress is part of a much larger initiative by lawmakers to grapple with AI, an issue that is unlikely to find resolution in the near future — reflecting the complexities involved.