From Mash-ups to Memes: the AI Music Landscape in 2023 and Beyond

“This is the first Drake song I’ve enjoyed in almost 3 years. And it’s from a fckin robot 😭😂😂” – YouTube Commenter, Heart on My Sleeve

Are AI robots taking over the airwaves?

Earlier this year, an American friend of mine couldn’t stop talking about this great new song she’d just heard – a single that was equal parts incredible and frightening to those of us working in the media industry. That song was Ghostwriter977’s ‘Heart on My Sleeve’, which combined AI-generated vocals mimicking Drake and The Weeknd to create a ‘new’ – and begrudgingly quite catchy – tune.

Since then, the conversation about AI’s place in music has exploded: media outlets are running weekly think-pieces; YouTube and the AI Hits list are flooded with computer-assisted mash-ups; and tools like Google’s MusicLM and Meta’s MusicGen can generate musical compositions from text prompts such as “punchy double-bass and distorted guitar riff” or “ambient, soft sounding music I can study to.”
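
For anyone curious to try this themselves, Meta has open-sourced MusicGen through its audiocraft library. The sketch below is purely illustrative – it assumes you have audiocraft installed and a machine with enough memory to load the small MusicGen checkpoint; the prompts are the ones quoted above, and the output filenames are just examples.

    # A minimal sketch of text-to-music generation with Meta's open-source
    # audiocraft library (illustrative; assumes audiocraft is installed and
    # the small MusicGen checkpoint fits in memory).
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained('facebook/musicgen-small')
    model.set_generation_params(duration=10)  # generate ten seconds of audio

    prompts = [
        'punchy double-bass and distorted guitar riff',
        'ambient, soft sounding music I can study to',
    ]
    wavs = model.generate(prompts)  # returns one waveform tensor per prompt

    for idx, wav in enumerate(wavs):
        # writes clip_0.wav, clip_1.wav – loudness-normalised audio files
        audio_write(f'clip_{idx}', wav.cpu(), model.sample_rate, strategy='loudness')

Each prompt comes back as raw audio you could drop straight into a DAW – which gives a sense of how low the barrier to ‘making a track’ has already become.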

One AI music tool even estimates that it has been used to create 14.5m songs – or “13.92% of the world’s recorded music” – which is, if true, a frankly mind-blowing amount of content.

It’s clear that AI music is a rapidly growing space – as exciting as it is problematic. As I’ve started to piece together my own thinking around it, I thought it would be worth logging some of the talking points I’m seeing emerge from key players in this space about how the technology will impact the music industry in 2023 and beyond…

AI music will unlock creativity and may be an important equaliser when it comes to accessibility

“So much of the industry has often been about privilege, and actually this gives people a chance who haven't got that privilege to get involved” – Matt Griffiths, CEO, Youth Music

One of the loudest arguments for the use of AI in the content space is that it can potentially unlock creativity in a range of different people.

In a recent interview with Wired, Michael Sayman – an engineer with a hefty resume that covers Facebook, Google, Twitter, and Roblox – points to how social media disrupted distribution channels, and suggests that AI will have similar ramifications for creation and production:

“The record labels used to hold all the power—they were in charge of distribution, resources, production quality. We’ve seen social media replace distribution and discovery of music. Now we’re seeing AI expand production quality, so there’s opportunities for more people to get involved in the music creation process”

Just as social media changed the way music is shared, AI now stands to change the way music is made. These tools could unlock new ways of creating music among a whole new audience of audio makers – many of whom might not be able to carry a tune or write a lyric.

However, Sayman isn’t just talking about individuals (like me) who lack a shred of musical talent to begin with – there are other fascinating ways that these tools could help unlock creativity as well.

For instance, AI could have big implications for breaking down financial and accessibility barriers in the music industry – as detailed in recent research by Youth Music. Reading this report, we get a clear sense of how AI tools can help new artists deliver professional-sounding content at a fraction of the budget – for example by providing an accessible mastering solution for audio tracks, or by supporting individual creators who can’t afford to hire teams to help them produce quality sounds.

Beyond financial barriers, proponents of AI tools claim that they could also unlock powerful creative experiences for those who lack the physical dexterity to play an instrument or control a soundboard – as touched on in this article by Dazed. While primarily discussing how AI could mitigate physical limitations for creators, the piece also opens up some interesting questions about the unique art that could be produced in the process: for instance, what kind of music could be created by a text-to-music AI tool when the description is coming from a deaf individual?

How will AI unlock creativity in those who have found it harder to create in the musical space - such as those who are physically less able, or even deaf?

Giving people the ability to create where they otherwise couldn’t? Ensuring that anyone has the ability to share their creative vision, no matter their background or technical ability? What could people possibly dislike about all of this?!

Turns out, a few things…

AI music will ultimately replace artists and bands… and maybe create a load of awful music in the process

“Why is this so good wtf? No wonder why music [industries] are spooked, if AI is this good we don't need celebrities and artists anymore” – YouTube Comment, Heart on My Sleeve

There’s a deep existential dread that creeps into many conversations about AI in the creative space.

It’s a sentiment captured in some of the emotionally charged commentary around the subject: we see it in Sting’s call to raise pitchforks and “defend our human capital against AI”, or in Ice Cube’s criticism of AI’s unlicensed use of artists’ voices as “demonic”.

A large part of the strong reaction against AI in music circles around copyright – and who should get paid for helping these tools develop and learn. There’s a whole host of problematic ethical and legal questions about how AI tools are ‘creating’ the music that is delivered to the end-user – how songs are used to train AI, which artists’ works are plagiarised, where copyright is violated, and so on. We currently seem miles away from securing a model of AI training that respects the work and rights of artists (not just in music, but in all creative spaces) – but Meta’s clear articulation that its MusicGen tool was trained on licensed music might begin to point to a path forwards.

Beyond the big questions of ethics and ownership, another part of the anxiety around AI comes from a deep-seated worry that we are about to enter a white-collar industrial revolution – a period of rapid AI evolution that could precipitate waves of job losses. And there are acute worries that the music industry wouldn’t be spared in these cuts.

This fear was manifest in a recent BPB survey, where 73% of the 1,533 music producers polled were worried about AI taking their jobs in the future. More than half (57%) believed that AI tools would be able to replicate the ‘unique touch of a human producer’ at least to some extent. And these worries aren’t limited to the production back-end – they extend to artists and bands as well.

Rick Beato, in a video sharing his predictions on how the new technology is going to shake up the music industry, anticipates that AI-musicians may become new sources of revenue. As a consumer, you’ll be able to listen to old music by The Beatles, for instance, but you may also see new albums by The AI-Beatles fighting for space on your Spotify homepage.

Will we soon see AI versions of big bands fighting for attention on our Spotify homepages?

It all seems very sci-fi – and with the above picture painted, you can see why some creators and producers might be anxious about AI’s impact on the music space.

However, there are some arguing that the idea of AI musicians taking over the charts could be more fiction than fact.

There’s a major strand of thinking that the most likely outcome of AI creeping into the audio space will not be artists being supplanted by robots, but just a massive influx of boring music. In a recent Wired article, writer and sound artist Marc Weidenbaum shared his thoughts – and his comments were less than glowing:

“Most people don’t make good art by copying and pasting. What makes pop work is that it’s always changing and always responding. [AI music] is just a feedback loop between systems”

In this strand of thinking, proponents argue that while some workflows could be impacted by AI, a human will always need to sit at the centre of the process. It’s a similar line to that taken by the Human Artistry Campaign – a group that recognises the transformational potential of AI in this space, but which feels strongly that humans should always have a core role in the production of art:

“Developments in artificial intelligence are exciting and could advance the world farther than we ever thought possible. But AI can never replace human expression and artistry”

So let’s build on this a little more – if AI isn’t necessarily coming for musicians’ jobs, how could it help improve them? Creators already have some thoughts…

AI will supercharge musical creativity beyond human limits

“There’s a narrative around a lot of this stuff that it’s scary dystopian… I’m trying to present another side: This is an opportunity” – Holly Herndon, Computer Musician

In a recent Rock Feed interview, Avenged Sevenfold frontman M. Shadows sat down to chat about the band’s career and his impression of the state of the music industry today. As with many media conversations nowadays, the focus eventually shifted to AI.

“I believe there are a bunch of little, funny use cases. It can help artists be more creative… if you’re writing a story, if you’re writing lyrics… and you prompt AI at this point, in the right way…give me twenty options here…  and I go, oh that’s interesting, how can I take that and go somewhere else with it? That’s using creativity 20X”

Shadows sees AI (in part) as an extension of the process that all artists go through – a ‘20X’ powering-up of the way a creator is informed and inspired by the myriad sources of content they are exposed to daily, with those references presented back to them in a more logical, comprehensive way.

It’s a similar story from American Hip Hop artist Curtiss King, whose album DIY2 – made with the help of LyricStudio, an AI-powered lyric generation tool – reached number one on the iTunes Hip Hop chart in 2022. In an interview with Forbes, he explained that using the tool was like “having someone in the studio that instantly fed me fresh imagery, words, and rhymes that would have normally taken me hours to research and recall”.

King and Shadows both describe AI tools not in terms of replacing creativity but of amplifying it – allowing them to supercharge the creative process they would normally undertake by speeding up and structuring their assimilation of references.

It’s this sentiment that the composer of the above piece, Oded Ben-Tal, taps into in a recent interview with Wired:

“Creativity is not a unified thing… It includes a lot of different aspects. It includes inspiration and innovation and craft and technique and graft. And there is no reason why computers cannot be involved in that situation in a way that is helpful.”

It’s this focus on creativity – AI as a stepping-stone to artistic improvement – that many of the early dabblers in the AI art world brought to the space.

Take Google’s early experiments with ML-Jam, for instance – a tool designed to integrate AI generative composition with live musical performance – where the engineer responsible for the programme saw it as “using machine learning models to push expert musicians out of their comfort zone… [and create] exciting rhythms and melodies.” We can see it again in the Flow Machines project – a piece of work aimed at “expanding the creativity of creators in music”, which was used to create the ‘first music album’ produced with AI tech (you can have a listen to the album ‘Hello World’ by SKYGGE here).

It is in this spirit that we’ve seen some more modern artists lean into the creative potential of AI – Grimes is perhaps the most famous artist to embrace this technology, creating an AI-powered voiceprint and inviting artists to license her voice (as long as they split the profits 50/50).

Looking at AI tools in this way positions them as instruments, of sorts – something used by the artist to more fully express themselves, or to create art that wouldn’t have been possible previously.

*Bonus* - AI will be used to flood our feeds with hyper-personal and weird musical memes

“People are going to stream this music not because it’s better than what’s coming out of labels or traditional musicians. That would be silly, to try to compete in that world. We want to make music that’s meaningful to a person” - Alex Jae Mitchell, Boomy CEO

OK - this last one’s just a bit of fun.

Some in this space point out that generative AI platforms will enable audiences to make music that isn’t necessarily ‘good’ in the classic sense, but music that is “meaningful” for them (as Mitchell points out in the quote from the top of this section).

What I find fascinating about this is that Mitchell is potentially touching on the memeification of music.

It’s the ability to use established formats and genres to create songs that are hyper-specific to audiences’ moods and moments: Whether that’s a personalised diss-track in the style of Eminem that you can deliver to your opponent after smashing them in a game of CoD, or a Valentine’s Day track that lovingly mashes together your partner’s favourite artist with your favourite lyrics.

Turning music into memes: it’s not exactly a new thing (as the greatest musician of our time discussed in his latest documentary), but AI stands to supercharge this space massively.

So where does AI music go from here?

I think it’s pretty clear that AI in the music space stands to be as transformational as it is disruptive – from unlocking new skills, to powering-up existing artistic talent, to enabling music to be created in more intricate, novel, and complex ways.

Or, you know, just helping us make a bunch more memes.

There are also some pretty frightening potential outcomes that could lead to major job losses or artists getting taken advantage of. I, like many others, believe there is a huge need for the industry to come together and figure out a way forward that unlocks the creative power of these tools while also making sure that artists and rights-holders are fairly compensated.

It's easy to look at this space in isolation and feel that we won’t recognise the music landscape 5, 10, or 15 years from now – dominated as it’s sure to be by AI-fuelled singles and novel audio chimeras (the digital ghost of Bowie covering Carly Rae Jepsen, anyone?). But will it really be all that stark?

I referenced Sting’s take on all of this above – quick recap: the Englishman in New York is not a fan of AI in the music industry – but he draws an interesting comparison between this new tech in the audio biz and the way that CGI is used in the film space:

“It's similar to the way I watch a movie with CGI. It doesn't impress me at all… I get immediately bored when I see a computer-generated image. I imagine I will feel the same way about AI making music”

It’s an interesting take. Because, while CGI is often lambasted in media commentary for being over-the-top (as Sting suggests here), it’s often subtly present in many films in ways we don’t realise – from amping up the background vistas in Brokeback Mountain to removing Superman’s impressive moustache in Justice League.

And (while I don’t want to stretch the comparison too far) I wouldn’t be too surprised if that’s where we end up with AI as well – a hugely powerful tool that can be seen overtly in some places, but will generally be used to quietly power-up the back-end of many of the songs from human artists that we enjoy listening to on a day-to-day basis.

Or maybe I’m entirely wrong and ‘Binary Solos’ will be all the rage in a few years’ time.

We’ll just have to see what the future of AI music has in store…  


Enjoy what you’ve read here today?

Why not sign up for our newsletter – and make sure you never miss out on any of our articles and interviews!
