In a world where generative AI music services are ripping off artists faster than you can say "intellectual property theft," Benn Jordan's latest video offers a glimmer of hope, or at least a compelling counterattack. His proposal is still at the proof-of-concept stage, but it takes you deep into the strange realm of adversarial noise poisoning attacks. And yes, it's as bizarre as it sounds.
If you've been hanging out with the cool kids (or, you know, browsing the internet), you've likely heard the buzz: "Did you check out Benn Jordan's new video?" Maybe you've even subjected your eyeballs to it. For those who haven't, I present you with this:
Now, to the nitty-gritty. Benn's ideas are promising for two reasons. First, these techniques operate on the audio itself, slipping through the "analog loophole": because the perturbation is baked into the sound, it survives playback, re-recording, and format conversion. Second, the methods can be mixed and matched; layering several attacks on one track makes them that much harder to strip out.
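To make the core idea concrete, here is a minimal sketch of the gradient-sign trick this family of attacks is built on, using a made-up linear "classifier" over raw samples. This is emphatically not Benn's actual method; real attacks target deep spectrogram models, but the logic (nudge every sample a tiny step in the direction that most confuses the model) is the same.

```python
import numpy as np

# Toy poisoning sketch: flip a (hypothetical) linear classifier's decision
# with a perturbation that is tiny relative to the audio itself.
rng = np.random.default_rng(0)
w = rng.normal(size=1024)                 # made-up model weights

def classify(audio):
    return int(audio @ w > 0)             # 1 = "class A", 0 = "class B"

audio = rng.normal(size=1024)             # stand-in for a short clip
score = audio @ w

# For a linear model, the loss gradient w.r.t. the input is just w, so the
# smallest sign-step that flips the decision has a closed form.
eps = 1.1 * abs(score) / np.abs(w).sum()  # small vs. unit-variance audio
poisoned = audio - eps * np.sign(w) * np.sign(score)

print("decision flipped:", classify(poisoned) != classify(audio))
print(f"per-sample perturbation amplitude: {eps:.4f}")
```

Deep models have no such closed form, so real attacks iterate gradient steps instead, but the per-sample budget `eps` plays the same role: keep the noise small enough that a listener shrugs while the model's decision flips.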
It's funny: when I had the pleasure of discussing AI initiatives with Roland's Paul McCabe, I suggested a whimsical design, a button that stops a live performance from being used as AI training data. Benn, however, went straight to the source, the data scientists, to find out how this could actually be achieved, even during a live performance. So yes, this is a real thing. And for the select few who enjoy a good audio attack, there's something utterly delightful about the targeted pressure wave at the 22-minute mark. Just imagine how your cat would react to that!
The real catch, however, is that these methods demand high-end GPUs and a lot of electricity; the math doesn't magically become eco-friendly, especially with chip supply already tight. But now that the seed is planted, the hunt is on for a more efficient approach. Consider this a proof of concept.
In short, I'm all in. I can foresee a reality where fear of AI training keeps some artists from uploading their music to streaming platforms at all. Benn paints a vivid picture of distributors licensing this technology, giving producers a little of that sweet peace of mind. Remember the early 2000s, when we were busy safeguarding our music from fans? Now it's about shielding it from generative AI. What a time to be alive!
Beyond the video itself, the peculiar universe of adversarial noise is not just fascinating but a creative outlet for imagining how to hack back at the relentless machinery of AI and social surveillance. The discussion goes well beyond the "Poisonify" concept, essential as that is.
Into the Data Science
Now, onto the HarmonyCloak tools developed at the University of Tennessee, Knoxville, where data science meets avant-garde creativity:
Apparently you can't hear it, but the University of Tennessee's tool "cloaks" songs to shield them from AI's insatiable appetite. Who knew college would come with such high-stakes espionage?
The site and accompanying paper delve into:
HarmonyCloak: Making Music Unlearnable for Generative AI
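As a toy illustration of the "you can't hear it" part, here is a sketch that shapes random noise so every frequency bin sits 20 dB below the song's own spectrum, a crude stand-in for psychoacoustic masking. HarmonyCloak's actual approach (error-minimizing, "unlearnable" perturbations optimized against a model) is far more sophisticated; this only shows why hiding noise under a signal's spectrum is cheap to do.

```python
import numpy as np

# Hide a perturbation under the song's own spectrum (crude masking sketch).
sr, n = 44100, 4096
t = np.arange(n) / sr
song = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

rng = np.random.default_rng(1)
noise = rng.normal(size=n)

S = np.fft.rfft(song)
N = np.fft.rfft(noise)

# Cap each noise bin 20 dB below the corresponding signal bin.
ceiling = np.abs(S) * 10 ** (-20 / 20)
N_shaped = N * np.minimum(1.0, ceiling / np.maximum(np.abs(N), 1e-12))
cloak = np.fft.irfft(N_shaped, n=n)

cloaked = song + cloak
ratio = np.sum(cloak ** 2) / np.sum(song ** 2)
print(f"cloak energy is {10 * np.log10(ratio):.1f} dB below the song")
```

The bin-by-bin cap guarantees the added energy stays at least 20 dB down overall; the hard part, which the paper tackles, is making that quiet residue actually disrupt training rather than just sit there politely.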
Even if this never revolutionizes digital distribution, Benn points to compelling research on AI detection algorithms. Intriguingly, he built an algorithm capable of identifying whether a piece of music was generated by AI. Definitely something to brag about at dinner parties.
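Benn's detector is a trained model whose details aren't spelled out here, but a single hand-crafted feature shows the flavor of feature-based scoring. This hypothetical sketch ranks clips by spectral flatness (geometric over arithmetic mean of the magnitude spectrum), which separates tonal material from noise-like material; a real detector would feed many such features, or raw spectrograms, into a trained classifier.

```python
import numpy as np

# Spectral flatness: 1.0 for perfectly flat (noise-like) spectra,
# near 0 for peaky (tonal) spectra. Purely an illustrative feature;
# not Benn's actual detection algorithm.
def spectral_flatness(audio):
    mag = np.abs(np.fft.rfft(audio)) + 1e-12
    return np.exp(np.mean(np.log(mag))) / np.mean(mag)

sr, n = 44100, 2048
t = np.arange(n) / sr
tone = np.sin(2 * np.pi * 440 * t)                # peaky spectrum
noise = np.random.default_rng(2).normal(size=n)   # flat spectrum

print("tone: ", spectral_flatness(tone))
print("noise:", spectral_flatness(noise))
```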
Adversarial noise is a hot topic across various contexts; it serves as both a potential training aid for neural network classifiers and a tool for undermining them. Picture it as your friendly (but slightly mischievous) neighborhood trickster challenging you to a game of chess, rather than your typical villain scheming for world domination.
And speaking of rapid developments, Los Alamos National Laboratory (a name that almost sounds like a superhero secret lair) recently introduced a method that shields models from such adversarial attacks. Surely there's a superhero in there somewhere, right?
The targeted pressure wave attacks Benn explores may also raise eyebrows, as similar techniques are used in sonic weapons against humans. Who knew music could double as a potential neuro-weapon? It's almost poetic, really: Mozart, but with added trauma.
But as for developing sonic countermeasures against machines, here are some resources:
Neural Adversarial Attacks with Random Noises
And the mechanisms that can be employed for attacks can also feature in training:
Modeling Adversarial Noise for Adversarial Training
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking
Now, I'm no data scientist (more of a data enthusiast, really), but this is enough to keep me entertained for hours. My personal brand of fun? Composing music while conjuring up unheard soundscapes by rewiring machine listening. And let's be real: we might need a compilation of audible adversarial noise attacks, because who doesn't want to get lost in that chaos? Just make sure you use a secure method to distribute it; we wouldn't want to summon any rogue AI overlords.
And if my musings have lulled you into a stupor, let's wake you up with a dash of rage, shall we? Here's an intriguing clip featuring Suno.ai founder Mikey Shulman:
Ah, the sweet taste of AI-generated nostalgia! Just like a reboot of a classic film, it somehow remains eerily relatable, often punctuated by existential dread.
