Someone Can Clone Your Voice With AI. In Most States, That's Perfectly Legal.
In April 2023, an anonymous producer called Ghostwriter dropped a track called "Heart on My Sleeve" on TikTok. It featured Drake and The Weeknd trading verses about Selena Gomez. The song racked up 11 million views and hundreds of thousands of streams before it came down.
One problem: Drake and The Weeknd had nothing to do with it.
Ghostwriter had used AI to clone their voices. The track was so convincing that listeners couldn’t tell the difference. Universal Music Group scrambled to get it taken down, and eventually succeeded. But the takedown wasn’t based on a clear legal right to the artists’ voices. It was a copyright claim over a producer tag buried in the track.
That workaround worked for Drake. But what about everyone else?
If you’re a musician, voice actor, podcaster, or anyone whose voice has value, here’s the uncomfortable truth: in most of the United States, someone can clone your voice with AI and there’s not much you can do about it.
The Patchwork Problem
The right to control how your voice and likeness are used commercially is called the “right of publicity.” It’s the legal principle that says you, not someone else, get to decide if your face appears on a billboard or your voice sells a product.
The problem is that right of publicity laws vary wildly from state to state. Some states have strong protections. Some have weak ones. And most of them were written long before anyone imagined you could generate a convincing vocal clone from a few minutes of audio samples.
Here’s what the landscape looks like:
Strong protection (explicit AI coverage): Tennessee passed the ELVIS Act in March 2024, which explicitly protects against AI voice cloning. It’s the first law in the country specifically designed for this. Violations can result in civil lawsuits and criminal prosecution as a Class A misdemeanor.
Moderate protection (voice included, but no AI-specific language): California has protected voice as part of publicity rights since the 1980s, thanks in part to Midler v. Ford Motor Co., in which Bette Midler won after Ford used a sound-alike singer to imitate her in a commercial. But the law wasn’t written with AI in mind, so there’s uncertainty about how it applies.
Weak or no protection: Many states don’t explicitly include voice in their publicity rights at all. And even in states that do, most laws require the unauthorized use to be “commercial” in a traditional sense. AI-generated content doesn’t always fit neatly into that box.
The result is a patchwork where your rights depend entirely on where you live and where the person cloning your voice operates. If someone in a state with no voice protection clones your vocals and uploads the result, your legal options might range from limited to nonexistent.
What Tennessee’s ELVIS Act Actually Does
Tennessee’s Ensuring Likeness, Voice, and Image Security Act (yes, named after Elvis) took effect on July 1, 2024. It’s worth understanding because it’s likely the template for future legislation elsewhere.
The ELVIS Act does a few key things:
Explicitly includes voice as a property right. The law defines “voice” broadly to include both an individual’s actual voice and any simulation of it. If it’s “readily identifiable and attributable” to you, it’s protected.
Covers AI-generated replicas specifically. The law targets “personalized generative AI cloning models and services that enable human impersonation.”
Creates liability for the tools, not just the content. You can sue not only the person who publishes an unauthorized voice clone, but also anyone who “makes available an algorithm, software, tool, or other technology” whose primary purpose is creating unauthorized voice replicas.
Carries real penalties. Civil lawsuits can seek injunctions, actual damages, and profits. Criminal violations are Class A misdemeanors, carrying up to 11 months and 29 days in jail and fines up to $2,500.
Has exemptions for legitimate use. News coverage, commentary, criticism, satire, and parody are protected. So is representing yourself as yourself in an audiovisual work, unless the work is designed to create a false impression that it’s authentic.
The music industry lobbied hard for this law. Luke Bryan, Chris Janson, Lindsay Ell, and other Nashville artists showed up at the signing ceremony. The bill passed the Tennessee legislature unanimously.
But Tennessee is just one state. And the AI tools that can clone your voice don’t respect state lines.
The Federal Void
There’s no federal right of publicity in the United States. That’s why you end up with 50 different state laws (and some states with no law at all).
Congress is trying to change that. The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe) was first introduced as a draft in 2023, formally introduced in 2024, and reintroduced in April 2025. It would create a federal intellectual property right to your voice and likeness for the first time.
The bill has bipartisan support and backing from an unusual coalition: the music industry (UMG, Warner, RIAA, SAG-AFTRA) and the tech industry (YouTube, OpenAI, Google, Amazon, IBM) are both on board. That kind of alignment is rare.
What the NO FAKES Act would do:
Establish that every individual has a federal property right in their voice and visual likeness
Allow individuals (or their heirs, for up to 70 years after death) to control the use of digital replicas
Create liability for creating, distributing, or profiting from unauthorized AI-generated replicas
Set up a notice-and-takedown process similar to the DMCA for copyright
Include exemptions for news, commentary, parody, satire, and other First Amendment-protected speech
If passed, it would replace the state-by-state patchwork with a uniform national standard.
But the bill hasn’t passed yet. It failed to advance in 2024. It was reintroduced in April 2025 with more industry support, but as of now, there’s still no federal protection. And even if it does pass, the takedown mechanisms and enforcement details will matter a lot for how effective it actually is.
What This Means for Different Creators
If you’re a professional musician or voice actor:
Your voice is your primary asset. Until federal law catches up, your protection depends heavily on where you’re based. If you’re in Tennessee, you have explicit protection against AI cloning. If you’re in California, you have some protection, but it wasn’t written for AI. If you’re elsewhere, check your state’s publicity rights laws carefully.
Consider whether your contracts with labels, publishers, or platforms address AI voice cloning. Many older contracts don’t. Newer ones increasingly do, often with opt-in provisions for AI training or likeness licensing.
If you’re an independent creator building an audience:
You’re probably not thinking about voice cloning as a threat yet. But as your profile grows, so does the value of your voice. Someone could theoretically train an AI on your podcast episodes, YouTube videos, or social content and create synthetic versions of you.
Documentation matters. Keep records of your original content. If you ever need to prove that your voice was cloned without permission, having timestamps and originals helps.
If you’re using AI voice tools:
Be careful whose voice you’re cloning. Using AI to generate content that sounds like a real, identifiable person without their permission could expose you to liability, especially if you’re in Tennessee or if the cloned artist is.
The “Heart on My Sleeve” case showed how fast platforms will act when major labels are involved. But smaller artists have less leverage. That doesn’t mean the legal risk is zero.
The Practical Playbook
Until federal law provides clear protection, here’s what you can do:
1. Know your state’s law
Look up whether your state has a right of publicity statute, whether it includes voice, and whether it applies to AI-generated content. Tennessee’s ELVIS Act is the gold standard. California offers moderate protection. Many states offer very little.
2. Document your voice
Keep records of your original vocal work with timestamps. If you ever need to prove your voice was cloned, this documentation will matter.
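A minimal sketch of what this can look like in practice, assuming you have Python 3 and a local folder of original recordings (the folder name, file types, and manifest path below are placeholders you’d adapt to your own setup): hash each file and log it with timestamps, so you have a dated fingerprint of your originals.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations; adjust to wherever you keep your originals.
RECORDINGS_DIR = Path("originals")
MANIFEST_PATH = Path("voice_manifest.json")

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 fingerprint of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest() -> list[dict]:
    """Record filename, size, hash, and timestamps for each recording."""
    entries = []
    for audio in sorted(RECORDINGS_DIR.glob("*")):
        if audio.suffix.lower() not in {".wav", ".mp3", ".flac", ".m4a"}:
            continue
        stat = audio.stat()
        entries.append({
            "file": audio.name,
            "bytes": stat.st_size,
            "sha256": sha256_of(audio),
            "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
            "logged": datetime.now(timezone.utc).isoformat(),  # when you added it to the manifest
        })
    return entries

if __name__ == "__main__":
    MANIFEST_PATH.write_text(json.dumps(build_manifest(), indent=2))
    print(f"Wrote {MANIFEST_PATH}")
```

A manifest like this isn’t legal proof of authorship on its own, but a dated record of what you created and when gives you something concrete to point to if a dispute ever comes up.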
3. Review your contracts
If you’re signing with labels, publishers, or platforms, look for clauses about AI, voice likeness, and digital replicas. Some newer deals include opt-in provisions for AI training. Understand what you’re agreeing to.
4. Monitor for unauthorized use
This is harder than it sounds. Unlike images, there’s no reverse-image-search equivalent for voice. But keeping an eye on social media, YouTube, and streaming platforms for content falsely attributed to you is increasingly important.
5. Support federal legislation
The NO FAKES Act would solve a lot of these problems. If it matters to you, let your representatives know.
What I’m Building
I’ve spent the last six months deep in copyright and AI law for music. The voice cloning issue sits right at the intersection of what ClearVerse is about: helping creators understand their legal exposure before it becomes a problem.
Right now, ClearVerse is focused on copyright risk assessment. But the broader question is the same: what are your actual rights, and how do you protect them in a landscape that’s changing faster than the law can keep up?
We’re opening early access soon. Sign up at clearverse.ai to be first in line.
Want a walkthrough? If you’re a label, agency, or high-volume creator, I’d love to show you what we’re building. Just reply to this email or DM me on LinkedIn.
Or just subscribe. I’ll be writing more about AI, copyright, and creator economics.
One Last Thing
The Ghostwriter incident was a wake-up call. But it won’t be the last time someone clones a famous voice and goes viral. The tools are getting better. The barrier to entry is getting lower. And the legal framework still has massive gaps.
I want to hear from you: Has anyone ever used your voice or likeness without permission? Would you even know if they did?
Hit reply and let me know. I read every response.
Talk soon.
— Christian
P.S. — If you know a creator who hasn’t thought about voice cloning, send this their way. This is one of those things where the first time you learn about it shouldn’t be when it happens to you.