In the age of generative artificial intelligence (AI), the music industry stands at an inflection point, facing a profound question: how do we attribute and credit the foundation of creative expression? The capabilities of AI models, like the ones we have invented at SOMMS.AI, have surpassed even our own expectations, yielding novel compositions and genre crossovers we never anticipated. These advances, however, bring a critical issue to the fore: the necessity of attribution.

Traditionally, creative ownership in music rested on two pillars: composition and recorded performance. Today, a third pillar is emerging: “Creative DNA,” the intrinsic manner in which creators make things. This DNA, encoded in every musical piece, acts as a signature unique to its creator. Much like an author echoes their beliefs and experiences in their writing, a musician’s output expresses their own identity.

AI models learn by training on large amounts of data. A generative music model, once trained, can output countless compositions, soon to be indistinguishable from human-recorded music and reflecting the entire spectrum of human emotion. What we are left with is a descriptive model of collective human creative output, which means every piece of music generated is a probabilistic recombination of human contributions. To put it plainly, the AI is assembling a mosaic of human creativity, albeit in randomized configurations.
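To make that mosaic idea concrete, here is a toy sketch in Python. It is not how a real generative model works; the artists, weights, and sampling are all hypothetical. It simply illustrates the claim above: every generated fragment traces back, probabilistically, to some human contribution in the training corpus.

```python
# Toy illustration (not a real model): every generated piece can be viewed
# as a probabilistic blend of the human works the model trained on.
# The "model" here just draws style fragments in proportion to weights
# inferred from a hypothetical training corpus.
import random

# Hypothetical contribution weights; they sum to 1.
contribution_weights = {
    "artist_a": 0.5,
    "artist_b": 0.3,
    "artist_c": 0.2,
}

def generate_fragments(n, weights, seed=0):
    """Sample n fragments; each one traces back to a human contributor."""
    rng = random.Random(seed)
    artists = list(weights)
    probs = [weights[a] for a in artists]
    return rng.choices(artists, weights=probs, k=n)

fragments = generate_fragments(1000, contribution_weights)
```

In this caricature, attribution is trivial because the weights are explicit; the article's point is that in a real model with billions of parameters, those weights are entangled and implicit, but the human contributions are no less present.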

This brings us to the primary challenge: how do we recognize the contributions of individual creators and rights-holders when an AI model generates a piece of music? The difficulty lies in the deep intricacies of machine learning models, particularly GPT variants, which are colossal in size and operate with billions of parameters. These models form intricate webs of interactions, generating content (text, in GPT’s case) from complex mathematical functions. In essence, tracing a specific output back to an individual input is extremely complex.

However, citing the complexity (or supposed impossibility) of the underlying math and statistics as a reason not to pursue attribution is a red herring.

While these algorithms can produce compositions after training on vast datasets, the essential point is that their success depends wholly on human data. This data is an amalgamation of the effort, talent, and creativity of countless individuals, from Billboard hits to songs with a handful of plays on SoundCloud. A model’s training needs this diversity to generalize and create new content drawing on a large corpus of human artistic knowledge. Arguing that the outputs of such models are purely the product of an algorithm is therefore both intellectually and morally dishonest.

Let’s consider the current system in place for music attribution: Performing Rights Organizations (PROs), which collect royalties on behalf of songwriters and composers when their music is played publicly. While essential, this system is not an exact science; it operates on estimations and approximations, without precise granularity. Despite this, the music industry has functioned quite successfully with it, ensuring, according to ASCAP data, that more than $1 billion was paid to musicians in 2022. If approximation has worked for publicly performed music, then even if exact attribution is challenging for AI music models, there must be a reasonable starting point.
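As an illustration of that pro-rata logic (the names and figures below are invented, not actual PRO data), splitting a royalty pool in proportion to estimated plays might look like this:

```python
# Hypothetical pro-rata royalty split, mirroring how PRO-style systems
# approximate payouts: a collected pool is divided among rights-holders
# in proportion to their estimated play counts. All figures are invented.

def split_royalties(pool_cents: int, estimated_plays: dict) -> dict:
    """Return each rights-holder's share of the pool, in whole cents."""
    total_plays = sum(estimated_plays.values())
    return {
        holder: pool_cents * plays // total_plays
        for holder, plays in estimated_plays.items()
    }

plays = {"songwriter_a": 700_000, "songwriter_b": 250_000, "songwriter_c": 50_000}
shares = split_royalties(10_000_000, plays)  # a $100,000 pool, in cents
# songwriter_a receives 70% of the pool, matching their share of plays.
```

Working in whole cents with integer division sidesteps floating-point rounding; a real PRO would also have to allocate remainder cents and contend with far messier play estimates, but the principle of proportional, approximate attribution is the same one the article argues can be extended to AI-generated music.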

We know this is possible because we have invented the equivalent of a PRO for our enterprise music customers (record labels, virtual-instrument makers, and distribution companies), who train custom music models in our system and use our attribution system to understand who contributed to every output they generate.

Moreover, the push for attribution does not stem from a financial perspective alone. At its core, it is about acknowledging the hours musicians spend perfecting their art, the sacrifices they make, and the passion they bake into the music they create. Even if a track never generated income for its creators, recognizing their contribution is a matter of principle.

In an era where “Big AI” is trending, artists must not be reduced to anonymous data points. They deserve recognition for the foundational role they play in training these models. Congress, too, must create laws that safeguard artists’ rights, especially when their music becomes part of AI training datasets. Ensuring proper attribution and revenue splits for AI-generated music must be a priority.

While it’s tempting to be swept away by the “magic” of generative AI and its possibilities, it’s crucial to ground ourselves in the ethics of its creation. It’s not just about compensating artists for their work; it’s about respect, acknowledgment, and ensuring that the human essence remains at the heart of music, irrespective of how it’s produced.

As we stand on the cusp of an explosion of AI in music, the industry and Congress share a responsibility. The conversation around AI and music should be about not just the technology’s capabilities but also its conscience. Generative AI, in all its domains, must move forward with a commitment to honoring the human creativity upon which it is built and on which it stands.