Artificial intelligence is making its way into various aspects of daily life, including music composition. Universal Music is now seeking to take a stand against this trend, as AI-generated music based on existing works is increasingly surfacing on music streaming platforms. The music giant has reportedly reached out to major streaming services like Spotify and Apple, urging them to address the dissemination of AI-generated music. According to internal emails obtained by the Financial Times, Universal Music is determined to protect the rights of its artists and is prepared to take action if necessary.
The concern revolves around AI bots using existing songs by popular artists on streaming platforms to learn how to compose new music, often resulting in compositions that sound similar to the original artists. Universal Music stressed its moral and commercial obligation to prevent unauthorized use of its artists’ music and to ensure that platforms do not feature content that violates the rights of artists and other creators.
Universal Music represents well-known artists such as Sarah Connor, Rammstein, Eminem, and Billie Eilish, and is determined to safeguard their rights. The surge in AI programs capable of generating music pieces, including Google’s MusicLM, has led to growing concern within the music industry. MusicLM, for example, can create music based on text descriptions, showcasing its advancements in both audio quality and adherence to the provided description.
Additionally, there have been significant achievements in the AI-generated music realm, such as the completion and premiere of Beethoven’s 10th Symphony in 2021, brought to life by an AI program. Despite this progress, there is skepticism from individuals within the music industry regarding AI’s ability to create truly original works of art.
A study from the Humboldt University of Berlin (HU) and the University of Essex revealed that AI is nearly on par with humans when it comes to creativity. This has raised concerns within the music industry, as there is fear that AI-generated music could potentially harm artists.
While experts like Antonio Krüger, director of the German Research Center for Artificial Intelligence, believe that AI may not be able to venture into entirely new creative territories, the music industry remains vigilant. The industry anticipates that platform partners will take measures to prevent their services from being used in ways that could potentially harm artists. As of now, the streaming services have not provided any statements regarding their stance on AI-generated music or the actions they plan to take.
Grimes, the musician, made a daring prediction on Sean Carroll’s Mindscape podcast. She expressed her belief that we are approaching the conclusion of human art with the arrival of Artificial General Intelligence (AGI). Grimes stated that once AGI is realized, it will surpass human artistry.
Her comments incited strong reactions on social media. Zola Jesus, another musician, labeled Grimes as the “voice of silicon fascist privilege,” while Devon Welsh, the frontman of Majical Cloudz, accused her of having a “bird’s-eye view of billionaires.”
Some musicians, however, disagree with Grimes and believe that the emergence of AI will not bring an end to human art, but rather inspire a new era of creativity. Artists like Arca, Holly Herndon, and Toro y Moi have embraced AI to explore innovative musical directions in recent years.
Furthermore, musicians and researchers worldwide are actively developing tools to make AI more accessible to artists. Despite existing obstacles such as copyright complexities, those working with AI in music hope that the technology will become a democratizing force and an integral part of everyday musical creation.
Arca, a producer renowned for collaborating with Kanye West and Björk on groundbreaking albums, expressed relief and excitement about the vast potential AI offers. He highlighted the feeling of possibility and the wide-open creative horizon that AI has provided him.
Artificial intelligence has been closely connected with music for a long time. In 1951, Alan Turing, a pioneer in computer science, constructed a machine that generated three simple melodies. In the 90s, David Bowie experimented with a digital lyric randomizer for inspiration. During the same period, a music theory professor trained a computer program to compose new pieces in the style of Bach; when the audience compared its work to a real Bach piece, they couldn’t tell the difference.
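The style-imitation programs of that era generally worked by learning statistical patterns from a corpus and sampling new material from them. As a rough illustration of the underlying idea (not the professor’s actual system), a Markov chain can learn which note tends to follow which and then improvise a new melody — the note names and corpus below are invented for the example:

```python
import random

def train_markov(melody):
    """Count which note tends to follow which in a corpus melody."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return out

corpus = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]
model = train_markov(corpus)
print(generate(model, "C", 8, seed=1))
```

A melody generated this way only ever uses note-to-note moves heard in the corpus, which is why such output can sound convincingly “in the style of” its training data while still being new.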
The field of AI music has seen rapid advancement in recent years, thanks to dedicated research teams at universities, investments from major tech companies, and machine learning conferences like NeurIPS. In 2018, Francois Pachet, a longstanding AI music innovator, led the creation of the first pop album composed with artificial intelligence, Hello, World. Last year, the experimental singer-songwriter Holly Herndon garnered praise for Proto, an album in which she collaborated with an AI version of herself.
Despite the considerable progress, many believe that AI still has a long way to go before it can create hit songs on its own. Oleg Stavitsky, the CEO and co-founder of Endel, an app that generates sound environments, remarked, “AI music is simply not advanced enough to produce a song that you would prefer over a track by Drake.” For example, “Daddy’s Car,” a song created by AI in 2016 to mimic the Beatles, is a confusing mix of psychedelic rock elements that fails to cohesively come together.
Due to these limitations, very few mainstream pop songs are being created by AI. Instead, more exciting progress is being made in two seemingly opposing branches of music: the practical and the experimental.
Addressing Needs
On one end of the spectrum, AI music is meeting a simple demand: there is a greater need for music than ever before, due to the growing number of content creators on streaming and social media platforms. In the early 2010s, composers Drew Silverstein, Sam Estes, and Michael Hobe, while working on music for Hollywood films like The Dark Knight, were inundated with requests for simple background music for film, TV, or video games. “Many of our colleagues wanted music that they couldn’t afford or didn’t have time for — and they didn’t want to use stock music,” explained Silverstein.
To address this, the trio created Amper, which enables non-musicians to create music by specifying parameters such as genre, mood, and tempo. Amper’s music is now used in podcasts, commercials, and videos for companies like Reuters. According to Silverstein, “Previously, a video editor would search stock music and settle for something sufficient. Now, with Amper, they can say, ‘I know what I want, and in a matter of minutes, I can make it.’” In a recent test similar to the Turing test, the company found that consumers couldn’t differentiate between music composed by humans and that composed by Amper’s AI.
Similarly, Endel was created to fulfill a modern need: personalized soundscapes. Stavitsky realized that as people increasingly turn to headphones to navigate through the day, “there’s no playlist or song that can adapt to the context of whatever’s happening around you,” he says. The app takes several real-time factors into account — including the weather, the listener’s heart rate, physical activity rate, and circadian rhythms — to generate gentle music designed to aid sleep, study, or relaxation.
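Adaptive soundscape apps of this kind map real-time signals onto generative parameters. Endel’s actual algorithm is proprietary; the mapping below is a hypothetical sketch, with every rule invented purely for illustration:

```python
# Hypothetical mapping from real-time context to generative music
# parameters, in the spirit of adaptive-soundscape apps. None of these
# rules come from Endel; they are invented for this example.

def soundscape_params(heart_rate_bpm, hour_of_day, is_raining):
    # Slower tempo when the listener is calm; clamp to a gentle range.
    tempo = max(50, min(90, heart_rate_bpm - 10))
    # Late evening and early morning get a darker, lower register.
    register = "low" if hour_of_day >= 21 or hour_of_day < 6 else "mid"
    # Rain suggests softer, pad-like timbres.
    timbre = "soft_pads" if is_raining else "plucks"
    return {"tempo": tempo, "register": register, "timbre": timbre}

print(soundscape_params(heart_rate_bpm=62, hour_of_day=23, is_raining=True))
# → {'tempo': 52, 'register': 'low', 'timbre': 'soft_pads'}
```

The point of such a design is that the output is never a fixed recording: as the inputs drift through the day, the parameters — and therefore the music — drift with them.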
Stavitsky mentions that users have effectively used Endel to address ADHD, insomnia, and tinnitus; a company representative reported that the app reached one million downloads by the end of January. Both Amper and Endel empower non-musicians to become involved in a process they may have been excluded from due to a lack of training or background. Silverstein mentioned that Amper will introduce a user-friendly interface this year so that anyone, not just companies, can use it to create songs. “Billions of individuals who may not have been part of the creative class can now be,” he says.
Advancing Music
Of course, creating simple tunes or enhanced background noise is vastly distinct from creating exceptional music. This represents a major concern that many have about AI in music: that it could reduce music to functional and generic sounds until every song sounds more or less the same. What if major labels use AI and algorithms to inundate us with simple catchy tunes indefinitely?
However, musician Claire Evans of the Los Angeles-based electropop band YACHT points out that such opportunistic optimization already lies at the core of the music industry: “That algorithm exists, and it’s called Dr. Luke,” she says, referring to the once exceedingly prevalent producer who creates pop hits based on specific formulas. Thus, it falls upon forward-thinking musicians to leverage the technology for the opposite purpose: to resist standardization and explore uncharted territories that they couldn’t have otherwise.
The band YACHT used a machine learning system to create their latest album, Chain Tripping. They fed their entire music catalog into the system and then selected the most interesting melodies and lyrics from the output to use in their songs. The resulting dance pop album was unconventional and challenging to both listen to and perform.
YACHT’s member Evans pointed out that musicians often underestimate how much their playing is influenced by their physical experiences and habits. Learning the new AI-generated music was difficult for the band, as it deviated slightly from their familiar patterns. This venture led to YACHT’s first Grammy nomination after two decades, for best immersive audio album.
Musician Ash Koosha’s work with AI led to an unexpected emotional breakthrough. He created an AI pop star named Yona, which generates songs using software. Some of Yona’s lyrics were surprisingly vulnerable, which Koosha found astounding. He noted that expressing such raw emotion is something most humans struggle to do unless triggered.
In Berlin, the hacker duo Dadabots is using AI to create musical disorientation and chaos. They are experimenting with AI-generated death metal livestreams and collaborating with avant-garde songwriters to develop new tools. Co-founder CJ Carr views AI as both a trainer for musicians and a creative force that produces unprecedented sounds and emotions.
For other artists, AI serves as a gateway to revive pre-recorded music. A new version of the 2012 cult classic “Jasmine” by Jai Paul appeared online last summer. This AI-generated track evolves continuously, deviating from the original, and offers an infinite, infectious jam session experience.
The London-based company Bronze created this AI-generated track, aiming to liberate music from the static nature of recordings. They wanted to present music as a constantly evolving form, just as it exists in their hands.
Bronze’s project caught the attention of Arca, known for her work on albums by Kanye West, Björk, and FKA Twigs. She saw potential in the technology to bridge the gap between live and recorded music. Collaborating with Bronze, she worked on an installation by the French artist Philippe Parreno at New York’s Museum of Modern Art.
Arca found that experiencing the music she had ostensibly composed was both unusual and captivating. She mentioned the freedom in creating an ecosystem where things happen organically, rather than making every microdecision. She also revealed plans for new music projects using Bronze’s technology.
Many express concerns about the potential displacement of musicians by AI technology, which is being used by creators like Arca to foster innovation. However, Ash Koosha points out that similar fears have arisen with every major technological advancement of the past century. This fear is likened to that of guitarists in the 1970s, who rejected synthesizers. Despite some individuals being replaced, this resistance led to the emergence of a new generation of home producers and the rise of hip-hop and house music.
Francois Pachet, director of Spotify’s Creator Technology Research Lab, asserts that we are still at the initial stages of experimenting with AI-generated music. He notes that the quantity of music produced by AI is minimal compared to the amount of research being conducted in this field.
Legal battles are expected to arise once more AI-created music is released to the public. The existing copyright laws do not account for AI-generated music, leaving ambiguity regarding ownership rights. Questions about whether the rights belong to the programmer, the original musician whose work was used to train the AI, or even the AI itself remain unanswered. This poses concerns that musicians could potentially have no legal recourse if a company used AI to replicate their work without permission.
Despite these pending issues, musicians worldwide are diligently working to make their tools accessible to aspiring music-makers. The goal is to inspire young producers to create innovative music that transcends current imagination.
AI is revolutionizing the music industry by transforming the creation and consumption of music. Many artists have shifted from traditional production methods to utilizing AI in various stages of music production. From composing and mastering to identifying songs and curating personalized playlists, AI is reshaping the music landscape.
Before we delve deeper into this topic, let’s clarify what we mean by artificial intelligence (AI). Some people are startled by the term “artificial intelligence” as they believe that machines cannot possess intelligence. Philosophically, a machine’s intelligence is limited to the information it receives from humans and the evaluations made by humans. There’s an ongoing debate about whether AI can have its own consciousness. Nevertheless, if intelligence is defined as the ability to solve problems through thought, then AI certainly possesses intelligence.
AI has diverse applications, including composing new music, creating unique mashups, and even developing robotic musicians. These applications are seemingly limitless, but they are constrained by programming and the information provided by humans. AI can also construct lyrics with specific emotions, explore new musical genres, and push the boundaries of music. AI-supported songwriting can help overcome writer’s blocks, offering unusual suggestions that may unlock creativity. Music based on self-learning algorithms leads us into uncharted digital territory, where the future of music remains a deeply hidden secret waiting to be unlocked.
AI’s impact on the music industry is not a novel subject but a longstanding theme. For instance, AI-generated mindfulness ambient music, royalty-free music for content creators, and automated mixing and mastering have become substantial industries over the past five years. Additionally, streaming services leverage AI to provide personalized music recommendations based on the analysis of specific musical qualities. AI and machine learning have significantly transformed the music industry, making it easier than ever before to create and enjoy delightful music.
Concerns are reasonable, but fears are often baseless.
Certainly, there are potential dangers. One of the primary worries is that AI-generated music could make human musicians and songwriters obsolete, displacing them and leading to unemployment. However, these concerns should be taken with a grain of salt. Ultimately, there is one thing AI cannot replicate: the creativity of a musician. The fear that AI music could result in an oversaturation among listeners due to repetitive sounds or styles also seems unfounded. After all, individuals still make their own decisions about their musical preferences. If a genre is at risk of becoming monotonous, consumers naturally turn away, rather than rejecting music altogether. In this context, AI music might at most lead to an oversaturation of itself.
As with any new development since the invention of sliced bread, it is crucial to use artificial intelligence ethically, morally, and within the bounds of the law. A copyright violation by AI remains a copyright violation; a song created by artificial intelligence remains an artificial creation. These scenarios do not originate from AI itself. The existing legal framework remains unchanged.
AI: Attempting to decode Mozart’s genetic makeup
In recent times, various noteworthy projects have been carried out using artificial intelligence. For instance, in 2021, Mozart’s music was explored in several projects for the 100th Mozart Festival, aiming to uncover the musical essence of the genius. A research team from the University of Würzburg created an AI named “Mozart Jukebox” as well as an augmented reality (AR) app. It was demonstrated that there is not just one AI, but that it evolves based on user interactions. Thus, humans are far from being excluded from the process.
Artificial intelligence brings musicians back to life
Also in 2021, “The Lost Tapes of the 27 Club” were released, featuring vocals as the only “real” element of the recordings. However, the vocals did not originate from the original artists but from musicians in cover bands who specialized in emulating their idols. Using the Google AI Magenta, songs by Kurt Cobain with Nirvana, Jim Morrison with the Doors, Amy Winehouse, and Jimi Hendrix were (re)composed. Subsequently, the music was created using digital instruments controlled by computers. This was not the first AI music project, as similar projects had previously produced music in the style of the Beatles, Bach, or Beethoven.
AI: A unique form of human-machine collaboration
The fact that the compositions of contemporary artists are not solely the result of the “human factor” is often imperceptible in many productions, as long as AI is utilized tastefully. In contrast, some deliberately emphasize the role of digital technology. For example, in 2018, Taryn Southern released an album titled “I am AI,” which was composed and produced using four music programs: AIVA, Google Magenta, Watson Beat, and Amper Music.
Songs featuring data-driven voices and sounds
Holly Herndon, along with her partner Mat Dryhurst, developed “baby AI Spawn,” primarily fueled by data-driven voices and sounds. Prior to this, she had already released AI-generated songs and eventually the full album “Proto.” Some even refer to Holly as the Godmother of AI music. Undoubtedly, there are numerous musicians who could claim this title for themselves. How about Kraftwerk, for example?
Stylistic imitation by AI
It is noteworthy that researchers have recurrently strived to analyze and replicate the distinctive stylistic nuances of musicians. For instance, scientists at the SONY CSL Research Lab wrote the first complete songs using AI, created on FlowMachines, a system that learns musical styles from an extensive database. The song “Daddy’s Car” is not by the Beatles, but it is composed in their style – as interpreted by the scientists.
We can see that AI music presents forward-thinking and equally creative opportunities for the future of music. The quintessentially human characteristic – emotional creativity – is unlikely to be hindered. Ultimately, it remains the driving force of humanity.
Last November, at the Stockholm University of the Arts, a human and an AI collaboratively created music. The performance commenced with musician David Dolan playing a grand piano into a microphone. As he played, a computer system, designed and supervised by composer and Kingston University researcher Oded Ben-Tal, “listened” to the piece, extracting data on pitch, rhythm, and timbre. Subsequently, it added its own accompaniment, improvising just like a human would. Some sounds were transformations of Dolan’s piano, while others were new sounds synthesized in real-time. The performance was chilling, ambient, and textured.
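Stripped of the real-time audio analysis, the call-and-response logic of such a system can be sketched in a few lines: extract simple features from an incoming phrase, then improvise an answer shaped by them. This is an invented toy, not Oded Ben-Tal’s actual software, and the feature set and response rule are assumptions made for the example:

```python
# Toy sketch of machine "listening": summarize an incoming phrase,
# then improvise an accompanying answer. Invented for illustration;
# not the actual Dolan/Ben-Tal system, which works on live audio.

def extract_features(phrase):
    """phrase: list of (midi_pitch, duration_in_beats) tuples."""
    pitches = [p for p, _ in phrase]
    return {
        "mean_pitch": sum(pitches) / len(pitches),
        "total_beats": sum(d for _, d in phrase),
    }

def respond(phrase):
    feats = extract_features(phrase)
    # Answer below a high phrase and above a low one (middle C = 60),
    # echoing the human player's rhythm exactly.
    shift = -7 if feats["mean_pitch"] >= 60 else 7
    return [(p + shift, d) for p, d in phrase]

human = [(60, 1.0), (64, 0.5), (67, 1.5)]  # C, E, G
print(respond(human))  # → [(53, 1.0), (57, 0.5), (60, 1.5)]
```

Real systems like the one at the Stockholm performance do this continuously on an audio stream, and synthesize or transform sound rather than emit note lists — but the shape of the interaction, analyze-then-answer, is the same.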
This situation, where a machine and a person work together peacefully, seems incompatible with the ongoing debate about artists versus machines. You may have heard that AI is taking over journalism, producing error-filled SEO copy. Or that AI is taking from illustrators, leading to lawsuits against Stability AI, DeviantArt, and Midjourney for copyright infringement. Or that computers are attempting to rap: Capitol Records dropped the “robot rapper” FN Meka following criticism that the character was “an amalgamation of gross stereotypes.” Most recently, Noam Chomsky claimed that ChatGPT demonstrates the “banality of evil.”
These concerns fit neatly with worries about automation, that machines will replace people—or, more accurately, that those in control of these machines will use them to replace everyone else. However, some artists, especially musicians, are quietly interested in how these models might complement human creativity, and not just in a “hey, this AI plays Nirvana” way. They are exploring how AI and humans might collaborate rather than compete.
“Creativity is not a singular thing,” says Ben-Tal, speaking over Zoom. “It encompasses many different aspects, including inspiration, innovation, craft, technique, and hard work. And there is no reason why computers cannot be involved in that process in a helpful way.”
The idea that computers might compose music has been around as long as the computer itself. Mathematician and writer Ada Lovelace once suggested that Charles Babbage’s steam-powered Analytical Engine, considered the first computer, could be used for purposes other than numbers. In her view, if the “science of harmony and of musical composition” could be adapted for use with Babbage’s machine, “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
The earliest book on the topic, “Experimental Music: Composition with an Electronic Computer,” written by American composer and professor Lejaren Hiller Jr. and mathematician Leonard Isaacson, was published in 1959. In popular music, artists such as Ash Koosha, Arca, and most notably Holly Herndon have utilized AI to enhance their work. When Herndon talked to WIRED last year about her free-to-use, “AI-powered vocal clone,” Holly+, she succinctly explained the tension between technology and music. “There’s a narrative surrounding a lot of this stuff, that it’s a scary dystopia,” she said. “I’m trying to present another perspective: This is an opportunity.”
Musicians have also responded to the general unease created by ChatGPT and Bing’s AI chatbot. Bogdan Raczynski, after reading transcripts of the chatbots’ viral conversations with humans, expressed, via email, that he sensed “fear, confusion, regret, caution, backpedaling, and so on” in the model’s responses. It’s not that he believes the chatbot has feelings, but rather that “the emotions it evokes in humans are very real,” he explains. “And for me, those emotions have been concern and sympathy.” In reaction, he has released a “series of comforting live performances for AI” (emphasis mine).
Ben-Tal says his work offers an alternative to “the human-versus-machine narrative.” He acknowledges that generative AI can be unsettling because, to some extent, it demonstrates a type of creativity usually attributed to humans, but he adds that it is also simply another technology, another instrument, in a tradition that goes back to the bone flute. For him, generative AI is akin to turntables: When artists discovered they could use them to scratch records and sample their sounds, they created entirely new genres.
In this regard, copyright may require a significant reconsideration: Google has refrained from releasing its MusicLM model, which converts text into music, due to “the risks associated with music generation, in particular, the potential misappropriation of creative content.” In a 2019 paper, Ben-Tal and other researchers urged readers to envision a musician holodeck, an endpoint for music AI, which has archived all recorded music and can generate or retrieve any conceivable sound upon request.
Where do songwriters fit into this future? And before that, can songwriters protect themselves against plagiarism? Should audiences be informed, as WIRED does in its articles, when AI is used?
Yet these models still offer appealing creative capabilities. In the short term, Ben-Tal explains, musicians can use an AI, as he did, to improvise with a pianist beyond their skill level. Or they can draw inspiration from an AI’s compositions, perhaps in a genre with which they are not familiar, such as Irish folk music.
And in the long run, AI might realize a more audacious (though controversial) fantasy: It could effortlessly bring an artist’s vision to life. “Composers, you know, we come up with ideas of what music we would like to create, but then translating these into sounds or scores, realizing those ideas, is quite a laborious task,” he says. “If there was a wire that we could plug in and get this out, that could be very fantastic and wonderful.”
There are already algorithms disrupting the music industry. Author Cory Doctorow has discussed Spotify’s impact, highlighting how playlists encourage artists to prioritize music that fits into specific categories, and how this influences what audiences listen to. With the introduction of AI into this landscape, musicians may face even more challenges. For example, what if Spotify uses AI to create its own artists and promotes them over human musicians?
Raczynski is hopeful that he can adapt to these changes and not be overshadowed by them. He acknowledges that he’ll need to engage with AI in some way in order to survive in this industry. However, he aims to develop a mutually beneficial relationship with AI, rather than solely focusing on his own interests.
AI music capabilities have been quietly present in the music industry for many years. It was not until ChatGPT was released in 2022 that the broader conversation about artificial intelligence began to spread in mainstream media. Currently, some musicians and music industry professionals are excited about the potential of AI music, while others are cautious, especially due to the early stage of regulation in this area. According to a study by the music distribution company Ditto, almost 60 percent of surveyed artists use AI in their music projects, while 28 percent wouldn’t use AI for music purposes.
Christopher Wares, Assistant Chair of Music Business/Management at Berklee College of Music, is a supporter of AI music technology. He wrote his master’s thesis in 2016 on why Warner Music should invest in artificial intelligence (spoiler alert: they did, along with other major labels). Wares has incorporated AI into his Berklee courses and has observed varied responses among students.
“Some of my students are enthusiastic about AI and are already utilizing it in different ways, while others are not interested,” says Wares. “There are intense debates, and I encourage my students to embrace this technology and explore new ways to enhance their creative processes.”
Another proponent of AI music technology is Ben Camp, Associate Professor of Songwriting at Berklee College of Music and author of Songs Unmasked: Techniques and Tips for Songwriting Success. Camp became interested in AI music technology in 2016 after hearing “Daddy’s Car,” one of the first AI-generated pop songs based on music by the Beatles.
Camp also allows their students to explore AI in the classroom, with the condition that they verify any information obtained from ChatGPT or similar large language models.
“I believe everyone should make their own decision about it,” says Camp. “I mean, I have friends who still use flip phones because they are uncomfortable with having all their information on their phone. I also have friends who still have landlines. So I’m not saying, ‘Hey everyone, you need to do this.’ But it’s definitely here to stay. It’s not going away. It’s only going to improve.”
Whether you are actively using AI in your music or have reservations, it is increasingly evident that AI will play a significant role in the music industry. We will discuss the current state of AI in the music industry, including the available tools, with insights from Wares and Camp.
What is AI Music?
Before explaining what AI music involves, let’s first define artificial intelligence. Here is Wares’ definition:
“Artificial intelligence is the computational brainpower that enables machines to imitate human thinking or behavior, such as problem-solving, learning, or recognizing patterns.”
In the context of music, AI technology has advanced to the point where it can create, compose, and improve musical content previously performed by humans. AI music can take various forms and offer different types of assistance, from composing an entire song to enhancing specific aspects of a composition, to mixing and mastering a production, to voice cloning, and more. We will also outline specific AI music tools capable of performing these tasks, which have raised concerns about copyright issues.
Copyright and AI Music
One of the most debated issues concerning AI in the music industry revolves around who profits from a work created using AI, particularly if the algorithm is trained using existing copyrighted material. In March 2023, the U.S. Copyright Office initiated an investigation into copyright issues related to artificial intelligence. Camp is optimistic that regulators will intervene to address this, but is worried that finding a solution is not straightforward due to the outdated nature of the US copyright system within which artists work.
“The laws and precedents that have shaped our modern copyright system do not align with the current state of music,” says Camp. “I believe creators should receive attribution, credit, and compensation. However, the system through which we are addressing this is severely outdated.”
The legality of AI-generated music remains uncertain, prompting discussion about how to ensure artists are appropriately recognized, compensated, and willing participants in the use of their work or image for AI, while still allowing for creative use of AI technology in music. At present, it’s unclear where the line between inspiration and infringement lies, as some record labels are beginning to push back.
In May 2023, Universal Music Group called on streaming services to block AI-generated music, alleging unauthorized use of their artists’ music to train AI algorithms and threatening legal action. In response, Spotify removed 7% of AI-generated music from its platform, amounting to tens of thousands of songs.
By July 2023, UMG had appealed to Congress for nationwide policies safeguarding creators from AI copyright violations. The record label is among 40 participants supporting the Human Artistry Campaign, an organization advocating for responsible AI use.
Regarding voice cloning, while there is limited legal precedent, for public figures, it may implicate their right to control the use of their likeness, name, and voice. Notably, a TikToker known as Ghostwriter used AI to create a simulated duet between Drake and The Weeknd titled “Heart on My Sleeve,” which was subsequently taken down, though unauthorized versions persist online.
The replication of artists’ names and likenesses using AI raises concerns within the music and entertainment industries. Protecting writers from having their work used to train AI systems and actors from unauthorized replication of their image and voice without consent is a key demand of the current SAG-AFTRA strike.
AI’s ethical considerations extend beyond copyright, with issues such as biased data set training posing immediate challenges. For instance, AI rapper FN Meka, signed by Capitol Music Group in 2022, was dropped for perpetuating racial stereotypes.
One ethical concern is the training process known as “reinforcement learning,” involving human feedback on potentially disturbing content. A recent episode of The Journal podcast from the Wall Street Journal highlighted the mental health toll on data workers tasked with evaluating such content for AI training.
Lastly, there are a number of AI music tools worth exploring. At the Berklee Onsite 2023 music conference, Wares introduced several that are available now and highlighted others currently in development.
BandLab SongStarter
The SongStarter app by BandLab is an AI-powered song generator that lets you select a music genre and input lyrics (including emojis), then produces royalty-free song ideas. You can transfer these ideas to BandLab’s studio feature to personalize them. This is an excellent way to kickstart a song if you need some initial inspiration.
Midjourney
Midjourney, a popular AI image generator, can be utilized to create artwork for albums, songs, posters, Spotify loops, merchandise, and more. What distinguishes it from other AI image generators is its surreal, dream-like style, which is well-suited for musical projects. The software is user-friendly, but it does have a learning curve. As with many new tech programs, it’s advisable to watch some tutorials before getting started.
Mix Monolith
The Mix Monolith plug-in is an automated mixing system from AYAIC designed to balance your mix. According to the developer in an article from Mix Online, “its purpose is not to automatically create a finished mix, but to establish the fundamental gain relationships between tracks and ensure proper gain staging.”
LANDR AI Mastering
LANDR’s AI mastering tool enables you to drag and drop your track into the program, which will then analyze it and offer straightforward choices for style and loudness. After making these selections, the program will master your track and provide additional options for file type and distribution method. LANDR boasts having mastered over 20 million tracks through their program.
AIVA
AIVA is an AI composition program trained on more than 30,000 iconic scores from history. You can choose from various preset music styles, ranging from modern cinematic to twentieth-century cinematic, and tango to jazz. You also have the option to input the key signature, time signature, pacing, instrumentation, duration, and more; if you’re unsure, AIVA can choose these settings for you. Finally, you can generate a track, adjust the instrumentation, and download various file types. As a subscriber, you hold the full copyright license to anything you create.
ChatGPT for Musicians
ChatGPT from OpenAI is one of the most widely used AI tools and has numerous applications for musicians. The company is currently under investigation by the Federal Trade Commission, so it’s important to be cautious about the information you share with ChatGPT and to verify any facts you retrieve from it.
Having said that, the program has the potential to reduce the time spent on tasks that divert you from actually creating music. Wares and Camp have been experimenting with ChatGPT since its release and have some specific prompts that could be useful for musicians and music professionals.
Social Media Strategy
Managing social media can be time-consuming for a DIY musician, and ChatGPT can help ease the burden. Wares suggests that you can start by prompting ChatGPT with details about the type of artist you are, the music genre you play, and your passions and interests. Then, you can request 30 pieces of content for the next 30 days for platforms like TikTok, Instagram, Facebook, or any other social media platform you use. Not only can you ask for social media content ideas, but you can also ask ChatGPT to generate optimized captions and hashtags. Find some ChatGPT social media tips here.
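The prompting approach Wares describes (give ChatGPT context about who you are, then ask for a month of platform-specific content) can be sketched as a simple prompt builder. This is only an illustrative example: the function name and its fields are hypothetical, not part of ChatGPT or any real tool.

```python
# Illustrative sketch of assembling a 30-day content prompt from artist details.
# build_content_prompt and its parameters are hypothetical names for this example.

def build_content_prompt(artist_type, genre, interests, platform, days=30):
    """Combine artist context and the request into one prompt string."""
    interest_list = ", ".join(interests)
    return (
        f"I am a {artist_type} who plays {genre}. "
        f"My passions and interests include {interest_list}. "
        f"Give me {days} pieces of content for the next {days} days on {platform}, "
        "each with an optimized caption and hashtags."
    )

prompt = build_content_prompt(
    artist_type="DIY singer-songwriter",
    genre="indie folk",
    interests=["hiking", "vintage gear", "home recording"],
    platform="TikTok",
)
print(prompt)
```

The resulting string could then be pasted into ChatGPT directly, or sent programmatically (for example via OpenAI’s Python SDK) if you want to automate the workflow.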
Tech Riders for Touring
When embarking on a tour, musicians often enlist someone to create a technical rider, which outlines all the specific requirements for their show. This could include equipment, stage setup, sound engineering, lighting, hospitality considerations, performance contracts, tour routes, venue options, ticket prices, and more. Wares says that ChatGPT can be used to draft this technical rider and recently collaborated with a band to plan their tour using this technology.
“We began by creating their technical rider, which included backline requirements, a detailed input list, and specific microphone recommendations, all based on a few simple prompts,” says Wares. “Then we requested tour routing suggestions in the Northeast, ticket pricing advice, as well as ideas for merchandise tailored to the unique interests and demographics of the band’s fanbase. What would have taken days to complete was done in less than an hour.”
Lyric Writing
If you need help kickstarting song lyrics, finding inspiration, or coming up with word suggestions, ChatGPT can be a valuable songwriting tool. Camp gives an example of collaborating with Berklee alum Julia Perry (who interviewed them for a Berklee Now article about AI and music) to generate song ideas using ChatGPT.
“We were discussing the magic of the universe and how she wanted to convey this profound, unknowable truth about the universe,” says Camp. “I provided ChatGPT with a detailed explanation of everything she said in two or three paragraphs and asked it to give me 20 opening lines for this song.”
They ended up using one of the 20 options as a starting point for a new song.
ChatGPT can also assist with a range of content and copywriting tasks, including drafting a press release, creating bios of various lengths, developing an album release strategy, composing blog posts, crafting website copy, and writing email pitches.
In an ideal scenario, having a lawyer to create and review agreements and contracts would be the best option. However, this may not always be practical or affordable. In such cases, ChatGPT could help in drafting agreements, providing an alternative to having no agreement at all. This could be useful for creating management agreements, band agreements, split sheets, performance agreements, and more. Nonetheless, engaging an entertainment lawyer is always the preferred choice whenever feasible.
When it comes to AI and other emerging technologies, one recurring theme is that they are expected to play a significant role in the music industry (and most industries) in the future. Ignoring these technologies is unlikely to benefit the industry’s future leaders.
Wares believes that AI can enhance productivity and support students’ creative process, allowing them to focus on their primary interests, such as creating and playing music or exploring new business ideas. As an educator, however, Wares works to ensure that students don’t rely too heavily on these tools, and constantly looks for ways to use AI to help develop their critical thinking skills.
Camp agrees and advises individuals to do what feels comfortable for them as AI continues to advance. While encouraging the adoption of technology to stay current and relevant, Camp acknowledges that not everyone needs to use AI, drawing a comparison to people who still use landlines or prefer buying vinyl records. AI is making a significant impact, but it’s a choice whether to embrace it.
According to a survey from Tracklib, a platform that provides licensed samples and stems for music production, a quarter of music producers are currently utilizing AI in their craft. However, the survey also revealed a significant level of resistance to the technology, primarily due to concerns about losing creative control.
Of the producers using AI, a majority (73.9%) employ it mainly for stem separation. Fewer use it for mastering and EQ plugins (45.5%), generating elements for songs (21.2%), or creating entire songs (3%). Among those not using AI, the majority (82.2%) cite artistic and creative reasons for their resistance, with smaller percentages mentioning concerns about quality (34.5%), cost (14.3%), and copyright (10.2%).
The survey also found a significant disparity in perceptions of “assistive AI,” which aids in the music creation process, and “generative AI,” which directly creates elements of songs or entire songs. While most respondents hold a negative view of generative AI, there is a more positive perception of assistive AI, although it falls short of majority support.
Notably, the youngest respondents were most strongly opposed to generative AI, while the oldest respondents exhibited the strongest opposition to assistive AI.
Willingness to pay for AI technology was generally low, as nearly three-quarters of AI tool users utilized only free tools. Among “beginner” producers, some expressed a willingness to pay, but very few were prepared to pay $25 or more per month.
Overall, 70% of respondents anticipate that AI will have a “large” or “massive” impact on music production in the future, while 29% expect it to have “some” impact. Only 1% foresee no impact from AI.
Tracklib conducted a survey with 1,107 music producers, with only 10% being classified as full-time professionals. Among the respondents, 58% were described as “ambitious” and aspiring to pursue music production as a career. The remaining producers were categorized as “beginners” or “hobbyists.”
The survey respondents were geographically distributed as follows: 54% from the European Union or United Kingdom, 34% from North America, and 12% from the rest of the world.
Despite the majority of producers showing resistance to AI technology, Tracklib foresees continued adoption of the technology, placing music AI in the “early majority” phase of adoption based on a model of technology adoption that divides the uptake of new technologies into five phases.
In a survey by DIY distributor TuneCore and its parent company, Believe, it was found that 27% of indie music artists had utilized AI in some capacity. Among the artists who used AI tools, 57% had used it for creating artwork, 37% for promotional assets, and 20% for engaging with fans.
Approximately half of the survey respondents expressed willingness to license their music for machine learning, while a third expressed consent for their music, voice, or artwork to be used in generative AI.
Established in 2018, Stockholm-based Tracklib offers a library of over 100,000 songs from 400 labels and publishers. Earlier this year, it introduced Sounds, expanding its platform to include a library of royalty-free loops and one-shots for paying subscribers.
In 2021, Tracklib disclosed that it had secured USD $21.2 million in funding from investors including Sony Innovation Fund, WndrCo, former NBA player and producer Baron Davis, and Spinnin Records co-founder Eelko van Kooten.
Earlier this year, Bad Bunny denied rumors of a new song with Justin Bieber, but a song featuring what seemed like their voices circulated on TikTok, generating millions of likes. The song was created with AI by an artist named FlowGPT, imitating the voices of Bad Bunny, Bieber, and Daddy Yankee in a reggaeton anthem. Bad Bunny disapproved of the song, calling it a “poor song” in Spanish, and discouraged his fans from listening. Many fans of all three megastars enjoyed it nonetheless.
The song and the conflicting reactions to it exemplify the complex impact of AI in the music industry. Advances in machine learning have enabled individuals to replicate the sound of their musical idols from their homes. Some argue that these advances will democratize music creation, while others express concern about the co-opting and commodification of artists’ voices and styles for others’ benefit. The tension between safeguarding artists, driving innovation, and defining the collaborative roles of humans and machines in music creation will be explored for years to come.
Lex Dromgoole, a musician and AI technologist, raises thought-provoking questions: “If there’s a surge in music created at an immense scale and speed, how does that challenge our understanding of human creativity? Where does imagination fit into this? How do we infuse our creations with character?”
AI is currently being utilized by music producers to handle routine tasks. Vocal pitch correction and expedited mixing and mastering of recordings are a few areas where AI can assist. Recently, The Beatles utilized AI to isolate John Lennon’s voice from a 1978 demo, removing other instruments and background noises to create a new, well-produced song. Additionally, AI plays a significant role in personalized music experiences on streaming platforms like Spotify and Apple Music, using algorithms to recommend songs based on user listening habits.
The creation of music using AI has sparked both enthusiasm and concern. Tools like BandLab offer unique musical loops based on prompts to help musicians overcome writer’s block. The AI app Endel generates customized soundtracks for focusing, relaxing, or sleeping based on user preferences and biometric data. Furthermore, other AI tools produce complete recordings based on text prompts.
A new YouTube tool powered by Google DeepMind’s large language model Lyria enables users to input a phrase like “A ballad about how opposites attract, upbeat acoustic,” resulting in an instant song snippet resembling Charlie Puth’s style.
These advancements raise various concerns. For instance, the instantaneous creation of a “Charlie Puth song” using AI prompts questions about the impact on musicians like Charlie Puth and aspiring artists who fear being replaced. Additionally, there are ethical considerations regarding AI companies training their large language models on songs without creators’ consent. AI is even capable of resurrecting the voices of deceased individuals, as demonstrated in a new Edith Piaf biopic featuring an AI-created version of her voice. This raises questions about the implications for memory and legacy if any historical voice can be revived.
Even proponents of the technology have expressed apprehension. Edward Newton-Rex, the former vice president of audio at AI company Stability AI, resigned out of concern that he was contributing to job displacement for musicians. He highlighted the issue of AI models being trained on creators’ works without permission, resulting in the creation of new content that competes with the original works.
These issues are likely to be addressed in the legal system in the years to come. Major labels, such as Universal Music Group, have filed lawsuits against startups like Anthropic for AI models producing copyrighted lyrics verbatim. In addition, Sony Music has issued thousands of takedown requests for unauthorized vocal deepfakes. While artists seek to opt out of AI usage entirely, AI companies argue that their use of copyrighted songs falls under “fair use” and is akin to homages, parodies, or cover songs.
Artist Holly Herndon is proactively navigating these transformative changes. In 2021, she created a vocal deepfake of her own voice, named Holly+, allowing others to transform their voices into hers. Her intention is not to compel other artists to surrender their voices, but to encourage them to actively participate in these discussions and claim autonomy in an industry increasingly influenced by tech giants.
Musician Dromgoole, co-founder of the AI company Bronze, envisions AI music evolving beyond mimicking singers’ voices and instantly generating music. Bronze has collaborated with artists like Disclosure and Jai Paul to create ever-evolving AI versions of their music, ensuring that no playback sounds the same. Their goal is not to use AI to create a perfect, marketable static song, but to challenge conventional notions of music. Dromgoole emphasizes that the tech industry’s belief that everyone desires a shortcut or a creative solution does not align with the creative process, as creativity and imagination cannot be expedited.
AI-powered tools for generating text, images, and music have been available for some time. Recently, there has been a surge in the availability of apps that generate AI-made music for consumers.
Like other AI-based tools, products such as Suno and Udio (and potential future ones) function by transforming a user’s input into an output. For instance, inputting “create a rock punk song about my dog eating my homework” on Suno will result in an audio file (see below) that includes instruments and vocals. The output can be saved as an MP3 file.
The underlying AI relies on undisclosed datasets to produce the music. Users have the choice to request AI-generated lyrics or write their own, although some apps recommend that the AI works best when generating both.
The question of who owns the resulting music is important for users of these apps. However, the answer is not simple.
What are the terms of the apps?
Suno offers a free version and a paid service. For users of the free version, Suno retains ownership of the created music. Nevertheless, users are allowed to use the sound recording for lawful, non-commercial purposes, provided they credit Suno.
Paying Suno subscribers are allowed to possess the sound recording as long as they adhere to the terms of service.
Udio does not assert ownership of the content generated by its users and indicates that users are free to use it for any purpose, “as long as the content does not include copyrighted material that [they] do not own or have explicit permission to use”.
How does Australian copyright law come into play?
Although Suno is based in the United States, its terms of service state that users are responsible for adhering to the laws of their own jurisdiction.
For Australian users, despite Suno granting ownership to paid subscribers, the application of Australian copyright law isn’t straightforward. Can an AI-generated sound recording be subject to “ownership” under the law? For this to occur, copyright must be established, and a human author must be identified. Would a user be considered an “author,” or would the sound recording be considered authorless for copyright purposes?
Similar to how this would apply to ChatGPT content, Australian case law stipulates that each work must originate from a human author’s “creative spark” and “independent intellectual effort”.
This is where the issue becomes contentious. A court would likely examine how the sound recording was produced in detail. If the user’s input demonstrated sufficient “creative spark” and “independent intellectual effort,” then authorship might be established.
However, if the input was deemed too distant from the AI’s creation of the sound recording, authorship might not be established. If authorless, there is no copyright, and the sound recording cannot be owned by a user in Australia.
Does the training data violate copyright?
The answer is currently uncertain. Across the globe, there are ongoing legal cases evaluating whether other AI technology (like ChatGPT) has infringed on copyright through the datasets used for training.
The same question applies to AI music generation apps. This is a challenging question to answer due to the secrecy surrounding the datasets used to train these apps. More transparency is necessary, and in the future, licensing structures might be established.
Even if there was a copyright infringement, an exception to copyright known as fair dealing might be relevant in Australia. This allows the reproduction of copyrighted material for specific uses without permission or payment to the owner. One such use is for research or study.
In the US, there is a similar exception called fair use.
What about imitating a known artist?
A concern in the music industry is the use of AI to create new songs that imitate famous singers. For example, other AI technology (not Suno or Udio) can now make Johnny Cash sing Taylor Swift’s “Blank Space.”
Hollywood writers went on strike last year partly to demand guidelines on how AI can be used in their profession. There is now a similar worry about a threat to jobs in the music industry due to the unauthorized use of vocal profiles through AI technology.
In the US, there exists a right of publicity, which applies to any individual but is mainly utilized by celebrities. It gives them the right to sue for the commercial use of their identity or performance.
If someone commercially used an AI-generated voice profile of a US singer without permission in a song, the singer could sue for misappropriation of their voice and likeness.
In Australia, however, there is no such right of publicity. This potentially leaves Australians open to exploitation through new forms of AI, considering the abundance of voices and other materials available on the internet.
AI voice scams are also on the rise, where scammers use AI to impersonate the voice of a loved one in an attempt to extort money.
The swift advancement of this technology prompts the discussion of whether Australia should consider implementing a comparable right of publicity. If such a right were established, it could serve to protect the identity and performance rights of all Australians, as well as provide defense against possible AI voice-related offenses.