97% of People Can't Tell AI Music from Human Music. That's Not the Problem.
Last November, Deezer and Ipsos ran a study across eight countries with 9,000 participants. They played people three songs and asked them to identify which ones were AI-generated. 97% got it wrong.
The headlines wrote themselves. “AI Fools Listeners.” “Humans Can’t Tell the Difference.” The implication was clear: AI music has won. It’s indistinguishable. Game over for human musicians.
But here’s what those headlines missed: the inability to detect AI music isn’t the crisis. The crisis is what happens after the music gets uploaded.
The Real Story Behind the 97%
Let’s be clear about what the Deezer study actually found.
Yes, 97% of people failed a blind listening test. But dig into the details and you see something more interesting: 52% of those people felt uncomfortable when they learned they couldn’t tell the difference. 71% were surprised. 80% said AI-generated music should be clearly labeled.
In other words, people don’t want to be fooled. They want to know what they’re listening to.
And when you look at what’s actually happening on streaming platforms, you start to understand why.
60,000 Tracks Per Day
Deezer is the only streaming platform that publicly reports how much AI content it receives. In January 2025, they counted 10,000 fully AI-generated tracks uploaded daily. By September, it was 30,000. By November, 50,000. By January 2026, it hit 60,000.
That’s 39% of all music delivered to the platform. Every single day.
But here’s the number that matters most: up to 85% of the streams on those AI tracks are fraudulent.
Not “some.” Not “a concerning percentage.” Eighty-five percent.
The fraud works like this: someone uses an AI tool to generate thousands of tracks. They upload them under fake artist names. Then they use bots, manipulated playlists, and click farms to rack up streams. The streaming platform’s royalty pool pays out based on streams, so money flows from legitimate artists to scammers running automation scripts.
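The mechanics above come down to simple arithmetic. Here's a toy sketch of a pro-rata royalty pool, with entirely hypothetical figures (real platform formulas are more complex, but the proportional split is the core mechanism described above):

```python
# Toy model of a pro-rata streaming royalty pool (illustrative numbers only).
# A fixed monthly pool is split in proportion to stream counts, so bot streams
# directly redirect money away from legitimate artists.

def prorata_payouts(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a fixed royalty pool in proportion to each uploader's streams."""
    total = sum(streams.values())
    return {name: pool * count / total for name, count in streams.items()}

pool = 1_000_000.0  # hypothetical monthly royalty pool, in dollars

# Before the fraud: two legitimate artists share the pool.
before = prorata_payouts(pool, {"artist_a": 600_000, "artist_b": 400_000})
# -> artist_a: $600,000, artist_b: $400,000

# After: a scammer adds 500,000 bot streams across generated tracks.
after = prorata_payouts(
    pool, {"artist_a": 600_000, "artist_b": 400_000, "bot_farm": 500_000}
)
# -> artist_a: $400,000, artist_b: ~$266,667, bot_farm: ~$333,333
```

The pool doesn't grow when the bot streams arrive; every fraudulent dollar the bot farm collects is a dollar that would otherwise have gone to a real artist.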
Deezer caught this because they built detection systems. They tag AI music, exclude it from algorithmic recommendations, and demonetize fraudulent streams. Across their entire catalog, fraud accounts for about 8% of streams. But within AI-generated content specifically? It’s 85%.
The problem isn’t that AI music sounds real. The problem is that AI music is primarily being used to steal.
What the Other Platforms Are Dealing With
Deezer is relatively small compared to Spotify and Apple Music. But they’re the only ones publishing detailed AI upload data.
What we know about the larger platforms:
Apple Music’s VP Oliver Schusser revealed in early February that they identified and demonetized 2 billion fraudulent streams in 2025. Two billion. He called it a “zero-sum game” and announced they’re doubling their fraud penalties from a maximum of 25% to 50% of would-be royalties.
Spotify removed over 75 million “spammy” tracks in 2025 alone, acknowledging that AI has made it “easier than ever for bad actors to mass upload content.” They’ve rolled out new spam filters and banned unauthorized AI voice clones.
These aren’t hypothetical concerns. In September 2024, Michael Smith was charged with wire fraud for using AI to generate hundreds of thousands of songs, then using bot accounts to stream them. At his peak, prosecutors said he was generating 661,440 fake streams per day across Spotify and Apple Music.
The streaming economy is being gamed at scale, and AI is the tool that makes it possible.
Why Detection Isn’t the Answer
Here’s where it gets complicated.
Deezer’s detection system can identify 100% AI-generated tracks from tools like Suno and Udio with 99.8% accuracy. They’ve applied for patents. They’re licensing it to other companies.
But detection only solves part of the problem. It tells you what something is. It doesn’t tell you whether it’s legal, ethical, or worth paying for.
Consider the edge cases:
A poet uses Suno to set her lyrics to music. She writes every word, but the instrumentation and vocals are AI-generated. Xania Monet, an AI artist created exactly this way, became the first AI act to chart on Billboard and signed a multimillion-dollar record deal.
A band called The Velvet Sundown racked up 500,000 monthly Spotify listeners before anyone realized they weren’t human. The creator later admitted it was a “hoax” to see how far AI music could go.
An AI-generated country track called “Walk My Walk” hit number one on a Billboard country sales chart in November 2025.
Some of this is creative experimentation. Some is outright fraud. The technology to detect AI doesn’t help you distinguish between the two.
What Actually Matters
The 97% headline makes for good panic. But here’s what the Deezer study actually tells us about what people care about:
80% want AI music clearly labeled.
73% of streaming users want to know if their platform is recommending AI content.
69% think payouts for AI music should be lower than for human music.
65% say AI models shouldn’t be allowed to train on copyrighted material.
70% believe AI music threatens musicians’ livelihoods.
People aren’t asking “can I tell the difference?” They’re asking “is this fair?”
And that’s a much harder question to answer.
The Platform Response
Right now, the platforms are responding to AI music in three ways:
Detection and labeling. Deezer tags AI content and excludes it from recommendations. This is the most aggressive approach, but it requires building and maintaining sophisticated detection systems.
Fraud enforcement. Apple Music and Spotify are demonetizing fake streams and penalizing distributors. This targets the symptom (fraud) rather than the cause (AI music), but it’s easier to implement.
Policy updates. Spotify now requires distributors to disclose AI use and has banned unauthorized voice clones. These policies are new and still evolving.
What none of them are doing yet: fundamentally rethinking how streaming royalties work when anyone can generate unlimited content.
The current system pays based on streams. If you can generate 1,000 tracks in an hour and use bots to stream them, you can siphon money from artists who spent months on a single album. The detection systems help, but they’re playing defense against an offense that keeps getting more sophisticated.
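Even setting fraud aside, a fixed pool means every additional stream dilutes the per-stream rate. A one-line sketch, using made-up figures purely for illustration:

```python
# Per-stream dilution under a fixed royalty pool (hypothetical figures).
# More total streams competing for the same pool means less per stream,
# whether or not any individual stream is fraudulent.

def per_stream_rate(pool: float, total_streams: int) -> float:
    """Average payout per stream when a fixed pool is split pro rata."""
    return pool / total_streams

pool = 1_000_000.0           # hypothetical monthly pool, in dollars
base_streams = 250_000_000   # hypothetical platform-wide streams
flood = 50_000_000           # extra streams on mass-uploaded AI tracks

rate_before = per_stream_rate(pool, base_streams)          # $0.0040 per stream
rate_after = per_stream_rate(pool, base_streams + flood)   # ~$0.0033 per stream
```

The flood doesn't need to defraud anyone to hurt working musicians; it only needs to exist in the denominator.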
What This Means for You
If you’re a creator using AI tools, the landscape is shifting fast.
The platforms are building detection systems. Your AI-assisted music may get flagged, even if you’re using it legitimately. The line between “AI-assisted” and “fully AI-generated” matters for how platforms treat your content.
The royalty pool is being diluted. Even with fraud detection, the sheer volume of AI content affects how money flows. More tracks competing for the same pool means less per stream for everyone.
The legal questions aren’t settled. Who owns AI-generated music? Can it be copyrighted? The Copyright Office says fully AI-generated works don’t get protection. Courts haven’t fully weighed in. Labels are still suing Suno and Udio.
The stigma is real. 40% of streaming users say they’d skip AI music without listening if they knew it was AI. 45% want the option to filter it out entirely. That’s not a majority, but it’s not nothing.
The Bigger Picture
The music industry has always adapted to new technology. Every format change, every distribution shift, every new tool has been met with predictions of doom followed by eventual adaptation.
AI is different in one crucial way: it doesn’t just change how music is made or distributed. It changes who can make it and how much can be created.
When 60,000 AI tracks hit a single platform every day, when 85% of those streams are fraudulent, when 2 billion fake streams get demonetized in a single year, we’re not talking about a creative tool. We’re talking about an industrial-scale challenge to the economics of music.
The 97% who couldn’t tell the difference in a listening test? They’re not the story.
The story is what we do about a system where anyone can create unlimited content, upload it instantly, and compete for the same royalty pool as artists who spend their lives making music.
That’s the conversation we should be having.
What to Watch
Detection technology licensing. Deezer is now licensing their AI detection tool to other companies. If Spotify and Apple Music adopt similar systems, AI content will be flagged across all major platforms.
Royalty model changes. Deezer has already moved to an “artist-centric” payment model. Others may follow, shifting away from pure stream counts.
Legal developments. Sony is still suing Suno and Udio. The UK is debating copyright changes for AI training. The Copyright Office continues to clarify what can and can’t be protected.
Platform policies. Watch for changes in how distributors handle AI content, whether platforms require disclosure, and how recommendation algorithms treat synthetic music.
This is exactly the kind of complexity I’m tracking at ClearVerse. We’re building tools to help creators understand not just whether something sounds similar, but whether it creates actual legal risk. Because in a world where AI can make anything sound real, understanding the legal and economic reality becomes even more important.
If you’re navigating this, I’d love to hear what questions you have. What’s confusing? What’s worrying you? What do you wish the platforms would do?
Hit reply and tell me what you’re seeing.
We’re opening early access soon. Sign up at clearverse.ai to be first in line.
Want a walkthrough? If you’re a label, agency, or high-volume creator, I’d love to show you what we’re building. Just reply to this email or DM me on LinkedIn.
Or just subscribe. I’ll be writing more about AI, copyright, and creator economics.
— Christian
P.S. — Know someone trying to make sense of AI music? Send this their way.

