For years, the music industry has been grappling with streaming fraud: shady operations that inflate play counts through bots, click farms, and fake listeners. We’ve debated endlessly over who should be responsible. Should Spotify or Apple Music be actively hunting down fraud? Or is it on artists, labels, and rights holders to flag suspicious behavior?
Now, things have escalated. Significantly so.
Generative AI has changed the game, quietly and dramatically. What used to be a numbers problem is now a content problem, with fake, AI-generated tracks flooding platforms and helping fraudsters game the system in ways we’ve never seen before.
The question we face now: who’s responsible for cleaning this up? Let’s dig in.
A New Breed of Streaming Fraud
Let’s be clear: streaming fraud isn’t new. It’s been around for years, fueled by economic incentives and loopholes in royalty distribution systems. But until recently, most of it revolved around gaming real tracks: looping them endlessly with bots, or uploading 30-second audio clips designed to exploit payout models.
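To see why those short clips are so attractive, it helps to run the numbers. Here’s a back-of-the-envelope sketch in Python, assuming the commonly cited 30-second threshold for a countable stream (exact payout rules vary by platform):

```python
# Back-of-the-envelope sketch: why short clips suit stream farming.
# Assumes a play counts as a stream once it passes 30 seconds, the
# threshold commonly cited for major platforms; exact rules vary by DSP.

SECONDS_PER_HOUR = 3600
COUNT_THRESHOLD = 31  # seconds of playback needed for a stream to count

def streams_per_hour(track_length_seconds: int) -> int:
    """Countable streams a single looping bot can generate in one hour."""
    if track_length_seconds < COUNT_THRESHOLD:
        return 0  # too short to ever register a countable stream
    return SECONDS_PER_HOUR // track_length_seconds

print(streams_per_hour(31))   # 116 streams/hour from a 31-second clip
print(streams_per_hour(210))  # 17 streams/hour from a 3.5-minute song
```

A bot looping minimum-length clips racks up roughly seven times the countable streams of one playing full-length songs, which is exactly the loophole those 30-second uploads exploit.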
AI has opened a new front.
Instead of relying on real tracks, fraudsters are now generating music using AI, uploading thousands of bot-made songs to platforms daily, and funneling plays toward them via fake accounts or streaming farms.
According to Music Business Worldwide, over 20,000 AI-generated tracks are being uploaded every day, many of them explicitly designed to manipulate the system.
This isn’t just noise pollution. It’s a calculated business model, and because most royalty pools are divided pro rata, every fraudulent stream diverts money from legitimate artists.
The Velvet Sundown Fallout
A recent case that rocked the industry? Velvet Sundown, a fake artist persona that slipped through major DSPs with AI-generated tracks, bolstered by manipulated streaming figures. Spotify eventually took action after an investigation by Digital Music News, but not before the tracks had racked up significant streams and likely revenue.
This wasn’t an isolated incident; it was a proof of concept for how AI and streaming fraud can merge into one powerful, hard-to-detect force.
The Platforms' Role: Gatekeeper or Bystander?
So, back to the question: what’s the responsibility of streaming platforms now?
On one hand, it’s easy to argue they should be the gatekeepers. Platforms like Spotify, Deezer, and Apple Music control the infrastructure. They’re the ones with access to backend data: upload volumes, streaming anomalies, and behavioral patterns that could reveal manipulation or AI abuse.
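None of that backend data is public, but the shape of the analysis is straightforward. Here’s a minimal sketch of the kind of heuristic a platform could run; the field names and thresholds are entirely hypothetical, not any platform’s actual detection logic:

```python
# Minimal sketch of a streaming-anomaly heuristic. Field names and
# thresholds are hypothetical illustrations, not real detection rules.
from dataclasses import dataclass

@dataclass
class UploaderStats:
    uploader_id: str
    tracks_uploaded_last_30d: int  # catalog growth rate
    total_streams: int
    unique_listeners: int          # distinct accounts streaming the catalog
    median_play_seconds: float     # how long listeners actually stay

def red_flags(s: UploaderStats) -> list[str]:
    """Return the anomaly flags an uploader trips, if any."""
    flags = []
    # Thousands of new tracks a month is machine-scale output.
    if s.tracks_uploaded_last_30d > 1_000:
        flags.append("machine-scale upload volume")
    # Massive stream counts from a tiny listener pool suggest looping bots.
    if s.unique_listeners and s.total_streams / s.unique_listeners > 500:
        flags.append("streams concentrated in very few accounts")
    # Plays hovering just past the payout threshold are a classic tell.
    if 30 <= s.median_play_seconds <= 35:
        flags.append("plays clustered at the count threshold")
    return flags

# A farm account: 4,200 uploads in a month, 2M streams from 900 listeners.
print(red_flags(UploaderStats("u-123", 4_200, 2_000_000, 900, 31.0)))
```

Real systems layer far more signal on top (device fingerprints, playlist graphs, payment trails), but even crude ratios like these make the Velvet Sundown pattern of massive volume plus suspiciously uniform engagement stand out.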
Some are starting to respond. Deezer recently launched the world’s first AI music tagging system, aiming to identify and label AI-generated tracks. It’s a step in the right direction, but it also raises a critical point: just detecting AI-generated content isn’t enough. The system has to be able to differentiate between legitimate use (AI-assisted music from real artists) and fraudulent intent (tracks created solely to game payouts).
Spotify, for its part, has started removing fraudulent content, such as the suspicious Blaze Foley covers that were found to be AI-generated fakes. But these efforts often seem reactive, not proactive.
Why This Isn’t Just a Tech Problem
It’s tempting to think of this as a purely technological arms race: AI-generated fraud vs. AI-powered detection systems. But that oversimplifies it. This is also a policy and ethical issue.
Platforms need to go beyond detection tools and start answering tough questions:
- Will they penalize users who upload large volumes of AI tracks with no transparency?
- Should AI-generated content be labeled clearly to listeners?
- What kind of verification should be required for artist accounts?
Right now, the lack of industry standards around AI-generated music leaves each platform to define its approach, creating loopholes and inconsistencies that fraudsters can exploit.
The Case for Shared Responsibility
Of course, it’s not just on platforms. Rights holders, distributors, and even listeners all have roles to play in surfacing fraudulent behavior. But the reality is, platforms are the first line of defense. They have the data. They have the infrastructure. They control the supply chain.
It’s no longer enough to simply be a neutral delivery system.
Music streaming services are the gatekeepers of a digital ecosystem that’s becoming increasingly vulnerable to manipulation. And as the tools of manipulation get more sophisticated (thanks to AI), the bar for platform responsibility has to rise.
Where Do We Go from Here?
The solution isn’t simple, but it starts with transparency and investment:
- Transparency in how AI-generated content is tagged, labeled, and surfaced on platforms (a sketch of what that could look like follows this list).
- Investment in detection tools, human moderation, and internal policies that put ethics on par with scale.
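What might transparent tagging look like in practice? Here’s one hypothetical shape for it, sketched as provenance metadata attached to a track delivery; the field names are illustrative only, not any platform’s or standards body’s actual spec:

```python
# Hypothetical provenance metadata for a track delivery. Field names are
# illustrative only, not an actual DSP or industry specification.
track_delivery = {
    "title": "Example Track",
    "artist_account_verified": True,             # ties to artist verification
    "ai_involvement": "assisted",                # "none" | "assisted" | "fully_generated"
    "ai_tools_disclosed": ["vocal synthesis"],   # what the uploader declares
    "listener_label": "Made with AI assistance"  # what listeners would see
}
```

A shared schema along these lines would let Deezer-style tagging, Spotify’s takedowns, and distributor checks all speak the same language, instead of each platform defining its own.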
It also means embracing regulation where appropriate. As WIPO pointed out, the legal and ethical frameworks for AI-generated music are still in their infancy. But waiting for legislation to catch up isn’t a viable strategy, especially when billions in streaming royalties are at stake.
AI isn’t the enemy. Used well, it’s a creative tool that can empower musicians and expand genres. But when it becomes a vehicle for fraud, enabled by weak oversight and profit-driven apathy, it threatens the credibility of the entire streaming economy.
It’s time for platforms to stop treating AI-generated fraud as a fringe issue. It’s here. It’s growing. And it’s their responsibility to lead the charge in keeping music real.
At Reprtoir, we can help with catalog management, release building, royalty accounting, music sharing, and more. Contact us for more details.