
Over the past few years, the use of artificial intelligence in the creative field has exploded. A new generation of image and text generators is delivering impressive results. Now artificial intelligence is also finding applications in music.

Last week, a team of researchers at Google released MusicLM — an AI-based music generator that converts text prompts into audio snippets. It is the latest example of how rapidly creative AI has advanced over the past few years.

As the music industry is still adapting to the disruptions caused by the internet and streaming services, there is a lot of interest in how artificial intelligence can change the way we create and experience music.

Automating Music Creation

Many artificial intelligence tools now allow users to automatically generate musical sequences or audio clips. Many are free and open source, such as Google’s Magenta toolkit.

The two most common approaches in AI music generation are continuation, where the AI continues a sequence of notes or waveform data, and harmony or accompaniment, where the AI generates something to complement the input, such as chords and melodies.
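
As a rough illustration of the continuation approach, here is a toy sketch (not Magenta's actual method, just a first-order Markov model over MIDI pitch numbers) that learns note-to-note transitions from a tiny corpus and extends a seed melody:

```python
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def continue_melody(seed, transitions, length=8):
    """Extend a seed melody by sampling the learned transitions."""
    melody = list(seed)
    for _ in range(length):
        choices = transitions.get(melody[-1])
        if not choices:  # unseen pitch: stop rather than guess
            break
        melody.append(random.choice(choices))
    return melody

# A tiny corpus of C-major phrases as MIDI pitches (60 = middle C).
corpus = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60], [60, 64, 67, 72]]
model = train_transitions(corpus)
print(continue_melody([60, 62], model, length=6))
```

Systems like Magenta's Melody RNN replace the transition table with a recurrent neural network trained on large MIDI corpora, but the interaction model is the same: seed in, continuation out.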

Similar to text and image generation AI, music AI systems can be trained on many different datasets. For example, you can extend Chopin’s melodies with a system trained in the style of Bon Jovi — as beautifully demonstrated in OpenAI’s MuseNet.

These tools can provide great inspiration for artists suffering from “blank page syndrome,” even if the artists themselves provide the final push. Creative stimulation is one of the immediate applications of today’s creative AI tools.

But where these tools may one day be more useful is in expanding musical expertise. Many people can write a tune, but few know how to skillfully manipulate chords to evoke emotion, or how to compose music in a variety of styles.


While music AI tools cannot yet reliably do the work of talented musicians, a handful of companies are developing AI platforms for music generation.

Boomy takes the minimalist path: Users with no musical experience can create a song and then rearrange it with just a few clicks. Aiva has a similar approach, but allows finer control; artists can edit the resulting music on a note-by-note basis in a custom editor.

However, there is a catch. Machine learning techniques are notoriously opaque, and generating music with AI is currently a bit hit-and-miss; you may occasionally strike gold when using these tools, but you may not know why.

An ongoing challenge for those creating these AI tools is to allow more precise and deliberate control over what the generative algorithms produce.

New Ways to Manipulate Style and Sound

Music AI tools also allow users to transform musical sequences or audio clips. For example, Google Magenta’s Differentiable Digital Signal Processing (DDSP) library performs timbre transfer.

Timbre is the technical term for the texture of a sound – the difference between an engine and a siren. Using timbre transfer, you can replace the timbre of one recording with that of another – making a voice sound like a violin, for example.
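
To give a sense of what timbre transfer works with under the hood: DDSP-style models condition their synthesis on the pitch and loudness curves of the input audio. Below is a minimal sketch of that feature-extraction step, assuming the librosa library and a hypothetical input_voice.wav file:

```python
import librosa
import numpy as np

# Load a (hypothetical) vocal recording at the 16 kHz rate DDSP models commonly use.
y, sr = librosa.load("input_voice.wav", sr=16000)

# Track the fundamental frequency (pitch) with probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Use RMS energy in decibels as a simple loudness curve.
loudness_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0])

# A synthesis model trained on a target instrument (say, violin) would then
# resynthesize audio from these two curves, giving the melody a new timbre.
print(f"mean pitch {np.nanmean(f0):.0f} Hz over "
      f"{np.count_nonzero(voiced_flag)} voiced frames; "
      f"loudness {loudness_db.min():.1f} to {loudness_db.max():.1f} dB")
```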

These tools are a great example of how artificial intelligence can help musicians create rich orchestral compositions and achieve entirely new sounds.

For the first AI Song Contest in 2020, Sydney music studio Uncanny Valley (which I work with) used timbre transfer to incorporate singing koalas into the mix. Timbre transfer joins a long history of synthesis techniques that have become musical instruments in their own right.


Breaking Down Music

The generation and transformation of music is only part of the equation. A longstanding problem in audio work is “source separation”: being able to break a recording down into its individual instruments.

While it’s not perfect, AI-powered source separation has come a long way. Its widespread use could be a big deal for artists; some won’t like that other people can “pick the lock” on their work.

At the same time, DJs and mashup artists will gain unprecedented control over how they mix and remix tracks. Source-separation startup Audioshake claims this will open up new revenue streams for artists whose music can then be adapted more easily for uses such as television and film.
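
As a concrete illustration of how accessible source separation has become, Deezer's open-source Spleeter (a separate tool, unrelated to Audioshake) can split a track into stems in a few lines of Python; the filename here is hypothetical:

```python
# Minimal sketch using Deezer's open-source Spleeter (pip install spleeter).
from spleeter.separator import Separator

# "spleeter:2stems" downloads a pretrained vocals/accompaniment model;
# 4-stem and 5-stem variants also isolate drums, bass, and piano.
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav under output/song/.
separator.separate_to_file("song.mp3", "output/")
```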

Artists may have to accept that this Pandora’s box has been opened, as was the case when synthesizers and drum machines first came along and, in some cases, displaced the need for musicians.

But watch this space, because copyright law does offer artists protection from the unauthorized manipulation of their work. This is likely to become another gray area in the music industry that regulation may struggle to keep up with.

New Music Experiences

The popularity of playlists shows how much we enjoy listening to music for some “functional” purpose, such as focusing, relaxing, falling asleep, or exercising.

Startup Endel has made AI-powered functional music its business model, creating infinite streams that help maximize certain cognitive states.

Endel’s music can be linked to physiological data such as the listener’s heart rate. Its manifesto draws heavily on mindfulness practices and boldly suggests we can use “new technologies to help our bodies and brains adapt to our new world,” with its hectic and anxiety-inducing pace.
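
What might that physiological link look like? Here is a purely hypothetical sketch, not Endel's actual system, of mapping a listener's heart rate to a couple of generation parameters:

```python
# Hypothetical illustration of physiologically driven functional music:
# steer tempo and brightness from heart rate (this is NOT Endel's system).
def music_params_from_heart_rate(heart_rate_bpm, target_bpm=60):
    """Map a listener's heart rate to simple generation parameters."""
    # Nudge the musical tempo toward a calming target rather than
    # matching the heart rate directly.
    tempo = heart_rate_bpm + 0.5 * (target_bpm - heart_rate_bpm)
    # Higher arousal -> darker, more filtered sound (cutoff in Hz).
    filter_cutoff = max(200.0, 8000.0 - 60.0 * (heart_rate_bpm - 60))
    return {"tempo_bpm": tempo, "filter_cutoff_hz": filter_cutoff}

print(music_params_from_heart_rate(90))  # e.g. an anxious listener
print(music_params_from_heart_rate(55))  # e.g. a resting listener
```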


Other startups are also exploring functional music. Aimi is examining how individual electronic music producers can turn their music into infinite, interactive streams.

Aimi’s Listener app invites fans to manipulate the system’s generation parameters, such as “Intensity” or “Texture,” or decide when drops occur. The listener interacts with the music, rather than passively listening.
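
Conceptually, such controls are sliders feeding a generative algorithm. The toy sketch below is hypothetical (not Aimi's engine); it treats "intensity" as note density and "texture" as pitch spread:

```python
# Hypothetical parameter-driven generation: user-facing sliders (0.0-1.0)
# shape how a bar of notes is filled in.
import random

def generate_bar(intensity, texture, steps=16, root=36):
    """Return (step, MIDI pitch) events for one 16-step bar."""
    bar = []
    for step in range(steps):
        if random.random() < 0.2 + 0.7 * intensity:  # denser when intense
            spread = int(12 * texture)               # wider pitch range
            bar.append((step, root + random.randint(0, max(spread, 1))))
    return bar

print(generate_bar(intensity=0.8, texture=0.3))
```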

It’s hard to say how much of the heavy lifting AI is doing in these applications — probably very little. Even so, such advances are guiding these companies’ visions of how the music experience might evolve in the future.

The Future of Music

The initiatives above conflict with several long-established conventions, laws, and cultural values about how we create and share music.

Will copyright laws be tightened to ensure that companies training AI systems on artists’ work compensate those artists? What would that compensation be for? Will the new rules apply to source separation? Will musicians using AI spend less time making music, or make more music than ever? If one thing is certain, it is change.

As a new generation of musicians grows up on the creative possibilities of AI, they will find new ways to use these tools.

This kind of turmoil is nothing new in the history of music technology, and neither powerful technologies nor entrenched conventions should dictate our creative future.



By Rebecca French

Rebecca French writes books about technology and smartwatches. Her books have received starred reviews in Technology Shout, Publishers Weekly, Library Journal, and Booklist. She is a New York Times and USA Today bestselling author...