A nonstop stream of AI-generated death metal, this live stream on YouTube has been broadcasting since March 24. I’ve tuned in a few times and it’s been consistently on point with the genre’s distinct sounds: once or twice it leaned towards epic instrumental vibes, while today it was a heavier vocal sound.
This early example of neural synthesis is a proof-of-concept for how machine learning can drive new types of music software. Creating music can be as simple as specifying a set of music influences on which a model trains. We demonstrate a method for generating albums that imitate bands in experimental music genres previously unrealized by traditional synthesis techniques (e.g. additive, subtractive, FM, granular, concatenative). Raw audio is generated autoregressively in the time-domain using an unconditional SampleRNN. We create six albums this way. Artwork and song titles are also generated using materials from the original artists’ back catalog as training data. We try a fully-automated method and a human-curated method. We discuss its potential for machine-assisted production.
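The abstract’s core idea, generating raw audio one sample at a time with an unconditional recurrent model, can be sketched in miniature. This is a toy with untrained random weights and illustrative sizes, not the authors’ actual SampleRNN (which uses a trained, multi-tier architecture); it only shows the autoregressive loop: each new quantized sample is drawn from a distribution conditioned on the samples before it.

```python
# Toy sketch of unconditional autoregressive audio generation in the
# time domain. Each step predicts a distribution over the next 8-bit
# sample from the hidden state, samples from it, and feeds the result
# back in. Weights are random (untrained), so the output is noise --
# the point is the generation loop, not the audio quality.
import numpy as np

rng = np.random.default_rng(0)

Q = 256       # 8-bit quantization levels (illustrative)
HIDDEN = 64   # hidden state size (illustrative)

# Random weights for a single RNN cell plus an output projection.
W_in = rng.normal(0, 0.1, (Q, HIDDEN))      # embedding of previous sample
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # recurrent weights
W_out = rng.normal(0, 0.1, (HIDDEN, Q))     # hidden state -> sample logits

def generate(n_samples, seed_sample=Q // 2):
    """Generate n_samples quantized audio samples, one at a time."""
    h = np.zeros(HIDDEN)
    sample = seed_sample
    out = []
    for _ in range(n_samples):
        h = np.tanh(W_in[sample] + h @ W_h)   # update hidden state
        logits = h @ W_out
        probs = np.exp(logits - logits.max()) # softmax over next sample
        probs /= probs.sum()
        sample = int(rng.choice(Q, p=probs))  # draw next sample
        out.append(sample)
    return np.array(out)

audio = generate(16000)  # roughly one second at a 16 kHz sample rate
```

In the real system this loop runs for hours at a time, which is why the livestream can broadcast continuously: there is no fixed-length song, just an endless chain of next-sample predictions.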
There’s a bit of a contradiction in their research statement about “eliminating humans from metal,” since the AI livestream will be opening for Archspire, an actual band currently on tour. Still, it fits neatly with running theories about how music and media have been steadily dumbed down over the decades for mass appeal. Even our underground can now be replicated by algorithms and bots, churning out content designed to maximize enjoyment, without any organic nuance or authenticity.