Velvet Sundown sparked a major controversy after it emerged that the band and its music were AI-generated. The "band" at first insisted they were real, but an associate later called the project an "art hoax" and a marketing stunt.
The music was streamed hundreds of thousands of times, raising concerns about fairness as a fake band gained traction over real artists.
This isn’t new. Computer-generated music dates back to the 1950s, when a chemistry professor used a computer to write compositions. In the 1980s, David Cope’s AI fooled trained musicians with pieces mimicking Chopin and Bach.
Artists such as Holly Herndon and Grimes have pushed for ethical use and licensing of AI voices. Meanwhile, "Deepfake Drake" alarmed record labels. Big names including Warner, Capitol, and Timbaland have signed AI-generated music projects.
AI tools from iZotope, LANDR, and Apple have long supported music mixing and mastering, and streaming recommendations rely heavily on machine learning.
The New Zealand government calls this a “pivotal” moment for AI. But the key battle is over the use of copyrighted music to train AI models. Sony, Universal, and Warner sued two AI startups last year, including one used by Velvet Sundown, for training on unlicensed recordings.
Musicians may find their work in AI training datasets without permission, and tech companies aren’t required to disclose their training data. New Zealand copyright law is unclear on AI-created works, and artists have no easy way to opt out.
Māori artists and APRA AMCOS have warned about the risk of cultural appropriation and misuse. Research shows AI-generated art displacing human creators, a worry for already struggling local musicians.
AI-generated music fraud has already hit Australia, with bots impersonating artists. French streaming service Deezer says around 20,000 AI-generated tracks are uploaded to its platform every day.
A world-first criminal case last year charged a musician with using bots to generate millions of streams for AI-created tracks. Social media is flooded with AI-generated “slop” that drowns out real artists, and New Zealand law lags on deepfakes and non-consensual imagery that harm artists’ reputations.
The government favors innovation over strict rules, but calls for regulation are growing worldwide. The EU has passed laws forcing AI services to be transparent about their training data, alongside a push for music licensing around AI works. Australia’s Senate has backed AI guardrails, and Denmark plans to give citizens copyright over their own facial features, voice, and body, protecting performers.
Ten years after music was dubbed the "canary in a coalmine" for tech disruptions, the way AI is handled in music will echo across culture and commerce.
Velvet Sundown initially insisted they were real, tweeting:
"We are Velvet Sundown. We’ve been making music together for years and we’re thrilled so many of you are discovering us now."
An associate later admitted:
"It’s an art hoax, a marketing stunt."
Sony, Universal and Warner, suing AI startups over unlicensed use of recordings, argued:
"There’s nothing fair about stealing an artist’s life’s work."
The New Zealand AI strategy says:
"We are at a pivotal moment as the AI-powered future approaches."
Māori writers raised concerns about:
"Potential cultural appropriation and misuse due to GenAI."
The streaming-fraud prosecution was described as a:
"World-first criminal case brought against musician who used bots to generate millions of streams for AI-created tracks."
The European Union’s legislation requires:
"AI services to be transparent about what they have trained their models on."
Denmark’s new approach gives:
"Every citizen copyright of their own facial features, voice and body."
The music industry remains the bellwether for AI’s broader cultural impact.