
AI in Music: Creative Threat or Artistic Ally

  • Akshat Dhotre
  • Apr 15
  • 4 min read




“Artificial Intelligence is on the path to taking up everyone’s jobs” is a common phrase uttered by technologically conservative people these days. You know, the ones who aren’t willing to upskill with the progressing times. In my view, it is the people too self-limited to see the wider possibilities who complain about technology getting ‘too advanced’.


People scared of AI taking over our lives don’t realise that we are already becoming dependent on it. Our lives have been simplified, and of late we have started treating AI as our main source of information. Before we even knew what ‘AI’ meant, it had begun interweaving itself into our lives. From image recognition (your Google Lens, launched in 2017, by the way) to Siri and Alexa (launched in 2010 and 2014 respectively), these are all examples of AI in daily use. Today, Artificial Intelligence is far more visible. For example, when we search something on Google, the first result is often a concise, Gemini-generated summary of the internet’s top results. For musicians, Ableton has come up with machine-learning-based melody generation and stem splitting; the latter has also been developed to a good extent by Apple in its Logic Pro software. The point is that we are already becoming symbiotic with AI, knowingly or not, willingly or unwillingly.


Keeping Artificial Intelligence aside for a moment, let’s look at its more archaic form: Machine Learning, the branch of technology that paved the way for AI. Machine Learning involves writing programs that improve at a task as they are fed more and more data. Essentially, the idea is to provide information so the machine becomes more knowledgeable, and thus more capable: quite literally the concept of education.


Now you might ask: how is it revolutionising Music as a space? To truly understand its usability and implementation, we must look not only at Music itself, but also at its supporting (not merely auxiliary) fields.


Let’s start with the most interesting implementation: interactive visuals. To put it in the most layman terms, this is a field where music is the main parameter driving the visuals. Machine Learning is used to turn musical parameters into data points; for example, a certain frequency band crossing a certain amplitude can trigger shape or colour changes in the visuals.
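The core mapping is simpler than it sounds. As an illustrative sketch (no particular visuals software’s API, just the underlying idea), here is how the energy of one frequency band can be measured and turned into a colour intensity:

```python
import math

def band_energy(samples, sample_rate, freq):
    """Energy at one frequency via a naive single-bin DFT."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    return (re * re + im * im) / n

def energy_to_colour(energy, max_energy):
    """Map band energy to an 8-bit colour intensity (0-255)."""
    return min(255, int(255 * energy / max_energy))

# One second of a 440 Hz sine at an 8 kHz sample rate: the 440 Hz band
# lights up, while an unrelated band (1000 Hz) stays dark.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(rate)]
loud = band_energy(tone, rate, 440)
quiet = band_energy(tone, rate, 1000)
print(energy_to_colour(loud, loud))   # 255: full brightness
print(energy_to_colour(quiet, loud))  # 0: that band is silent
```

In a live rig this measurement runs continuously on small chunks of the incoming audio, and the resulting number drives colour, shape, or any other visual parameter in real time.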


Now you might ask how Machine Learning is changing this field. People have used it to fully automate the visuals. Generally, one person would handle the visuals while another performed the music, but advancements in technology have enabled artists to take complete control of both and gain freedom. This has cultivated so much more expression in the field of Music. Software like Max has brought a paradigm shift for the entire Music scene. In my personal experience, a close friend and mentor used Max to map colour intensity to the kinds of sounds that would trigger it, and used that to create live visuals.


With the onset of the Artificial Intelligence race, the music technology nerds have adapted too, resulting in the rise of startups like Suno.ai and VocalRemover. These are some of the most groundbreaking companies: they have not only helped make music, but also nudged artists to push their boundaries so that their careers are not consumed by a computer. Musicians are using AI-based software to find inspiration, getting a bare-bones idea and then building on it. Websites like VocalRemover use AI to separate the vocals from the backing track, making it easier for musicians to figure out the elements in a song. On the flip side, though, internet pirates, and even worse, trashy remix artists, can use those separated elements to destroy what was once a beautiful creation.
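Separating elements of a mix is not itself a new ambition; what AI changed is the quality. The classic pre-AI trick, a sketch of which is below, simply exploits the fact that lead vocals are usually panned dead centre, so subtracting one stereo channel from the other cancels them. (To be clear, this is not how tools like VocalRemover work; they use trained models that go far beyond this.)

```python
def remove_centre(left, right):
    """The classic 'karaoke' trick: centre-panned material (often the
    lead vocal) is identical in both channels, so left minus right
    cancels it, leaving only the side-panned instruments."""
    return [l - r for l, r in zip(left, right)]

# Toy stereo mix: a 'vocal' panned centre plus an 'instrument' only on the left.
vocal = [0.5, -0.5, 0.5, -0.5]
instrument = [0.25, 0.25, -0.25, -0.25]
left = [v + i for v, i in zip(vocal, instrument)]
right = vocal[:]  # the vocal appears equally in both channels
print(remove_centre(left, right))  # the vocal is gone: [0.25, 0.25, -0.25, -0.25]
```

The crude subtraction also destroys anything else panned centre (bass, kick drum), which is exactly the limitation that model-based separation overcomes.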


The usage of the so-called ‘Artificial Intelligence’ is already widespread, but under-utilised in the Music field. Companies generally push their most widely acceptable, biggest-relief product to consumers. For example, OpenAI’s best-known product is ChatGPT, but the company also builds models for coding, image and video generation, and more, which get overlooked because OpenAI markets its LLM chatbot the most aggressively. That is where we as users should look beyond the marketing and utilise what isn’t being pushed. For example, the people in my class have mostly used VocalRemover to get acapellas of songs for assignments. With so many resources available at their fingertips, they are still suspicious of the technology’s pedigree and capabilities. They are limiting themselves by not utilising data from across the globe to craft songs that will have the maximum reach to their target audiences.


As the corporate people say, those who can give the best prompts are the ones who will excel alongside AI in their respective fields. It is high time musicians, too, adapted to the technology and learnt ‘Prompt Engineering’ [for those who don’t know, Prompt Engineering is a glorified term for being able to give an AI the right prompts to obtain the desired outputs].


With such rapid development in technology, we, the common people, have been left behind in obscurity. The demands of the world as a whole are burdening us with more responsibilities. Learning and keeping up with new developments is no longer just a thing for big companies, but for the regular person too. Our own desire to make our lives easier has landed us in jeopardy of losing our livelihoods. So the only option is to upskill, upgrade yourself, and co-exist with the technology, or else get lost in the sea of the pre-existing.



