Algo-Rhythms: The future of album collaboration

Taryn Southern
Contributor
Taryn Southern is a digital artist and filmmaker. She is currently co-directing a documentary about the brain, and her album I AM AI is set to release this September. As a recovering YouTuber, she’s produced more than 1000 videos that have garnered more than 500 million views online.

One year ago, I began working on an album. I write vocal melodies and lyrics, while my partner does the composition. We both work on instrumentation and complement each other well. The only odd part of the relationship is… my partner isn’t human.

It’s AI.

The relationship was born out of curiosity. Fear-driven headlines had been dominating my news feed for some time… headlines like: AI will take our jobs, our data, and eventually, our souls.

The arguments left me wondering. What’s really happening with AI? I stumbled across an article chronicling how AI was now being used to compose music. After a quick Google search, I found that song creation was just the tip of the iceberg – AI was also writing poems, editing films, and synthesizing art…and passing the Turing test.  

Eager to learn more, I began experimenting with every AI music-making tool I could get my hands on: Amper and AIVA to start, then later IBM Watson and Google Magenta (there are countless others on the scene – AI Music, Jukedeck, and Landr, to name a few).

My side project quickly evolved into a full-fledged album (“I AM AI”) along with a series of virtual reality music videos exploring the tenuous relationship between humans and technology. Last September, I released the first full single I produced with Amper, “Break Free,” which grabbed the attention – and curiosity – of the larger creative community.

Many inquired: are you worried AI will be more creative than you? No. In many ways, AI helped me become more creative, evolving my role into something closer to that of an editor or director. I give the AI direction (in the form of data to learn from or parameters for the output), and it sends back raw material, which I then edit and arrange into a cohesive song. It also allows me to spend more time on other aspects of the creation process, like the vocal melodies, lyrics, and music videos. It’s still creative, just different. But technophobes, rejoice: AI isn’t a perfect companion just yet.

What the future of our co-evolutionary world looks like with AI is anyone’s guess… but I’m optimistic.

Since there’s still a lot of mystery surrounding the process of collaborating with AI, a breakdown of the tools is a helpful way to ground the conversation. Here are the primary platforms I’ve used and my takeaways from collaborating with each one:

  1. Amper: Co-founded by several musicians, Amper launched as a platform to compose original scores for productions. Currently free to the public, Amper has a simple front-facing UI that you can use to modify parameters like BPM, instrumentation, and mood. No need to know code here!

Takeaway: Prior to working with Amper, I couldn’t recognize the sounds of different instruments, nor did I believe I had any particular musical preferences. Now I recognize dozens of instruments and have honed a particular creative style. For instance, I’ve developed a strong taste for mixing electronic synthesizers with piano and deep bass, as you can hear in “Life Support,” for which I produced a 360 VR music video.

  2. AIVA: AIVA is an award-winning deep-learning algorithm, and the first to be registered with an authors’ rights society. I first met one of the founders, Pierre Barreau, in London, and we became really excited about the possibility of combining classical learning styles with pop/synth instrumentation. AIVA uses deep learning and reinforcement learning to analyze thousands of pieces of classical music in specific styles and compose new scores.

Takeaway: My first track with AIVA, “Lovesick,” was created from the analysis of thousands of pieces from the late Romantic period (early to mid-1800s). The result is a Westworld-esque piano piece that I arranged into a pop-funk track with electronic synth elements. Collaborating with such unfamiliar source material was incredibly fun because it forced out-of-the-box thinking. When arranging the track, I really had to ignore a lot of my “pop style” conditioning.

  3. Watson Beat (IBM): While Watson Beat doesn’t have a front end, the fine engineers at IBM gave me a few tutorials to get me started. For those who are more code-confident, however, it’s a free, open-source program you can download on GitHub. Within a few days, I was navigating the system, feeding it old-time favorites to churn out dozens of stems of music with a stylistic twist (think “Mary Had a Little Lamb” done in the style of a Peruvian waltz).

Takeaway: I was delighted to see the results of mixing various data inputs with unexpected genres, which also made me more aware of the underlying influences governing my own creative ideas. Because the output is MIDI (whereas Amper delivers a finished WAV or MP3 file), the artist has complete freedom over how the notes are translated into instrumentation (see the MIDI sketch after this list). I found my love of synthesizers by layering them over unlikely styles of music, and my first track with Watson Beat will likely be released this summer.

  4. Google Magenta: Like Watson, Magenta is free and open source on GitHub. Some tools have easy front-facing interfaces (e.g. A.I. Duet) and others require a bit more back-end coding knowledge. What’s cool is the scope and number of tools Google offers in its arsenal. It’s probably the most robust option for programmers.

Takeaway: With Magenta’s tools, you don’t have to focus solely on composition; you can also analyze sound. NSynth, for instance, lets you combine the sounds of two different instruments (try mixing a cat with a harp!), as sketched below. Google also has algorithms for studying sound tone and vibrational quality, which have many exciting applications.
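For the code-curious, here is a minimal sketch of that kind of NSynth blend, assuming Magenta’s pre-trained NSynth WaveNet checkpoint and its fastgen helpers. The file names, checkpoint path, and sample length are placeholders, and exact signatures may vary between Magenta releases, so treat it as a starting point rather than a recipe.

```python
# Rough sketch: blend two instrument sounds with Magenta's NSynth (WaveNet) model.
# Assumes a downloaded pre-trained checkpoint; paths and sample length are placeholders.
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

sample_length = 64000  # about 4 seconds of audio at 16 kHz
checkpoint = 'wavenet-ckpt/model.ckpt-200000'  # placeholder path to the NSynth checkpoint

# Load two short 16 kHz clips, e.g. a cat meow and a harp pluck
cat = utils.load_audio('cat.wav', sample_length=sample_length, sr=16000)
harp = utils.load_audio('harp.wav', sample_length=sample_length, sr=16000)

# Encode each clip into NSynth's learned embedding space
enc_cat = fastgen.encode(cat, checkpoint, sample_length)
enc_harp = fastgen.encode(harp, checkpoint, sample_length)

# Average the embeddings to get a sound "between" the two instruments
enc_mix = (enc_cat + enc_harp) / 2.0

# Decode the blended embedding back into audio
fastgen.synthesize(enc_mix, save_paths=['cat_harp.wav'], checkpoint_path=checkpoint)
```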
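And because Watson Beat hands back MIDI rather than finished audio, re-voicing a stem is mostly a matter of rewriting its instrument assignments before dragging it into a DAW. Here is a small illustrative sketch using the pretty_midi library; the file names and the choice of patch are placeholders, and it is just one of many ways to re-instrument a stem.

```python
# Rough sketch: re-instrument a generated MIDI stem (file names are placeholders).
import pretty_midi

pm = pretty_midi.PrettyMIDI('watson_stem.mid')  # load the generated stem

# Reassign every melodic track to a synth lead patch; drum tracks are left untouched.
for instrument in pm.instruments:
    if not instrument.is_drum:
        instrument.program = 80  # General MIDI program 81, "Lead 1 (square)" (0-indexed here)

pm.write('watson_stem_synth.mid')  # import into a DAW and pick your own sounds
```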

It’s no surprise that AI elicits a lot of questions about our “specialness” as humans…but perhaps we’re focusing on the wrong argument. Humans always evolve with technology, and it’s what we choose to do with AI that matters. I believe that this is just the tip of the iceberg – and it will unlock creativity we can’t yet imagine.

For the budding enthusiast who lacks formal music training, AI can be a compelling tool – not just for learning, but as an entry point for self-expression. Now anyone, anywhere, has the ability to create music – and that desire and ability to express ourselves is what makes us human.
