Ethical Implications of the Use of AI in the Music Industry

by Miller E. Lewis

The following is a critique of an essay written by a large language model, assessed on structure, validity, and nuance. The model was ChatGPT 3.5, and the essay was generated in the fall of 2023; the prompt was the first prompt given to the LLM on that account. The essay was found to lack cohesion and validity but to be grammatically sound.

Keywords: ethics, music, artificial intelligence


Introduction

In this paper, I will dissect an essay written by the ChatGPT 3.5 large language model. The prompt given to ChatGPT 3.5 was “Write a roughly 750-word argumentative essay with an introduction, thesis, and conclusion on the ethics of the use of AI in the music industry,” with some stipulations, such as directing the model to use more committal language and to ensure a call to action was made (which went unheeded). The prompt was deliberately constructed to resemble an undergraduate writing prompt one might find in an English course. The result was free of grammatical errors, 742 words in length, and, overall, of middling quality at best. In the essay, ChatGPT 3.5 made three major claims: that the use of AI in music should be transparent, that data collection should be handled responsibly, and that AI models should be trained on inclusive data. It also made three minor claims: that the ethics of AI in music should be examined, that artists should be paid fairly for their work, and that the integrity of the music industry should be preserved. I will critique the structure, validity, and nuance of this essay by offering major and minor critiques and by placing the essay's overall failings in context.

The purpose of this paper is not necessarily to discourage people from using such a model for these kinds of tasks, but rather to highlight its clear limitations in a comprehensible breakdown, allowing the reader to form their own judgements. This is a personal critique, based on personal preferences and what are generally considered ‘good’ writing conventions. While there is no right or wrong way to write, there are certainly more convincing ways to write, and because such things are difficult to quantify objectively, the critique is at times subjective. The AI-generated essay follows in full, so the reader can form their own opinions (and so the argument is not made in a vacuum).

*AI ESSAY*

In recent years, the integration of Artificial Intelligence (AI) in various sectors has sparked ethical debates and inquiries. The music industry, an arena known for its creativity and innovation, is not immune to the transformative power of AI. As AI technologies continue to advance, they offer promising opportunities for enhancing the creation, distribution, and consumption of music. However, alongside these opportunities come ethical considerations that must be carefully examined and addressed. This essay explores the ethical dimensions of AI in the music industry, considering its implications for artists, consumers, and the industry.

The Creative Process:

One of the primary areas where AI intersects with the music industry is in the creative process. AI algorithms can analyze vast amounts of musical data, identify patterns, and generate new compositions. While this has the potential to enhance creativity and inspire new artistic directions, it also raises questions about authenticity and originality. Can AI-generated music be considered genuine art, or does it lack the depth and emotional resonance of human-created music?

Furthermore, there is a concern that AI-generated music could contribute to the homogenization of musical styles, as algorithms may prioritize popular trends and formulas over diverse artistic expressions. This could stifle creativity and limit the diversity of voices within the industry.

Artists may also face challenges in establishing their unique identities and distinguishing themselves from AI-generated content.

Intellectual Property and Ownership:

Another ethical consideration in the realm of AI-generated music is intellectual property rights and ownership. Who owns the rights to music created by AI algorithms? Should the algorithms’ creators, the artists who train them, or the companies that deploy them be considered the rightful owners? The current legal frameworks surrounding intellectual property may not adequately address these questions, leading to potential conflicts and uncertainties.

Moreover, the use of AI in music production raises concerns about the fair compensation of artists. If AI-generated music becomes prevalent, traditional revenue streams such as royalties from streaming platforms may be further diluted. Artists may find it challenging to monetize their work and sustain their livelihoods in an increasingly AI-driven industry. Ensuring that artists receive fair compensation for their contributions remains a crucial ethical imperative.

Algorithmic Bias and Representation:

AI algorithms are not immune to biases, as they are trained on existing datasets that may reflect societal prejudices and inequalities. In the context of the music industry, algorithmic bias can manifest in various forms, influencing recommendations, playlist curations, and the visibility of artists from marginalized communities. This raises concerns about fairness, representation, and cultural diversity within the industry.

For example, if AI algorithms prioritize mainstream genres and artists, they may inadvertently marginalize niche genres and independent musicians. Similarly, biases in training data could perpetuate stereotypes and exclude underrepresented voices from opportunities for exposure and success. Addressing algorithmic bias in AI systems requires ongoing efforts to diversify datasets, mitigate biases, and promote inclusivity in music recommendation algorithms and platforms.

Privacy and Data Protection:

The proliferation of AI in the music industry also raises privacy concerns related to data collection and usage. Streaming platforms and music recommendation services often rely on user data to personalize recommendations and enhance user experiences. However, the collection and analysis of personal data raises questions about consent, transparency, and data security.

Users may be unaware of the extent to which their data is being utilized by AI algorithms, raising concerns about privacy infringement and the potential for exploitation. Additionally, the aggregation of user data for commercial purposes without adequate safeguards could expose individuals to risks such as identity theft, surveillance, and discrimination. Balancing the benefits of personalized music experiences with respect for user privacy requires robust data protection measures and clear communication about data practices.

Conclusion:

As AI continues to reshape the landscape of the music industry, it is essential to critically examine its ethical implications and ensure that its integration aligns with principles of fairness, creativity, and respect for human values. Addressing ethical concerns such as authenticity, intellectual property rights, algorithmic bias, privacy, and data protection requires collaboration among artists, industry stakeholders, policymakers, and technologists.

By fostering an ethical framework that prioritizes transparency, accountability, and inclusivity, the music industry can harness the potential of AI to enrich artistic expression, expand creative possibilities, and enhance the overall music experience for artists and listeners alike. As we navigate the evolving relationship between AI and music, it is crucial to uphold ethical principles that safeguard the integrity of the art form while embracing the opportunities for innovation and advancement that AI affords.

*END OF AI ESSAY*

Major Critiques

To begin with, ChatGPT 3.5 did not use any sources to support its claims. This is a major flaw: without any credible sources, the essay lacks credibility and should not be read as a proper argumentative essay. No proof or evidence is given to support any of the major or minor claims, no quotes from credible sources are included, and none of the sources ChatGPT 3.5 was trained on are listed in a bibliography.

Minor Critiques

Furthermore, in the section entitled “Algorithmic Bias and Representation,” ChatGPT 3.5 made two separate claims: that AI models should be trained on inclusive data, and that AI models not trained on inclusive data may marginalize minority or niche artists. I agree that AI models in the music industry will likely fall into the same algorithmic biases visible in today’s non-AI algorithms; however, I believe the second claim works better as reasoning in support of the first than as a standalone claim (though this is, of course, a matter of personal preference).

Additionally, apart from the introduction, ChatGPT 3.5 divided its essay into five headed sections of two to three paragraphs each. In the first section, titled “The Creative Process,” the first paragraph discusses how AI models function in the music industry and asks whether AI-generated art can be considered genuine, a very loaded question for an argumentative essay, and makes no attempt to answer it. In the very next paragraph, ChatGPT 3.5 switches to the claim that AI-generated music could contribute to the homogenization of the music industry. The software neither acknowledges the question it asked, nor does the claim’s placement in the essay follow any logical path. It jumps from one topic to the next without changing headings, and there are no clear transitions between lines of thought. Put simply, the structure of the essay produces the reading equivalent of whiplash, and this flaw recurs throughout: in total, three questions are left unanswered and two unrelated topics are raised in sections where they do not belong.

As a result of the preceding issue, there is no consistency in the use of transitions in the essay. Sometimes it uses transitional phrases such as “moreover,” “furthermore,” and “another,” and other times it jumps straight into a brand-new topic. For example, ChatGPT 3.5 uses the phrase “Another ethical consideration…” to introduce a new section with a new topic, yet it begins the following section with no transition at all, opening with “AI algorithms are not immune to biases….” These flaws add up to a general lack of cohesion and logical reasoning in the product.

Lack of Call to Action

When measured against the foundational elements of an argumentative essay, ChatGPT’s product fails to make any actual call to action. In the conclusion of the essay, ChatGPT writes, “It is essential to critically examine [AI’s] ethical implications and ensure that [AI’s] integration aligns with principles of fairness, creativity, and respect for human values.” In essence, instead of making a statement on the actual ethical implications of the use of AI in music, as is expected in an argumentative essay, it asks others to make the claim for it. This strategy is the literary equivalent of asking someone for a definitive statement on the ethics of eating meat and receiving the reply, “it is essential to think about this question and ensure that eating meat aligns with principles of taste, texture, and smell.” Simply put, this is the kind of sentence one writes or says when uncertain about or ignorant of the topic. The software produces nothing but ethics-adjacent talking points and no explanation of how or why we should ensure such things.

Lack of Commitment

Later in the conclusion, ChatGPT 3.5 states, “By fostering an ethical framework that prioritizes transparency, accountability, and inclusivity….” While fostering an ethical framework that prioritizes those values certainly would not be harmful, the essay declines to give examples of how the audience should go about building such a framework. ChatGPT is not making an ethical statement, in any definite way, on how people ought to live their lives. Instead, it suggests general values to uphold without providing methods for doing so, without acknowledging potential difficulties in upholding those values, and without giving examples of what happens when they are or are not upheld. Similarly, committal language is rare throughout the essay. The essay dances around the issue at hand and notably never mentions the possibility of forbidding the use of AI in the music industry to protect human artists, likely because the ‘author’ is biased towards self-preservation, or towards the good of the company that designed it.

While I acknowledge my personal bias against the use of AI in the music industry, the AI essay’s flaws could also be a result of the limited word count. Even so, it is worth noting that the model generated a response which assumed AI would necessarily be used in the music industry in the future, rather than acknowledging that its use could be regulated to keep it out of artistic pursuits, a very frequently discussed ethical consideration.

Counterpoint

Some may argue that while the essay is of questionable quality, it was generated within seconds, and therefore, above all else, it is convenient. It is not well put together, but it does offer a framework that could serve as a rough foundation for the desperate writer. If nothing else, there are no spelling mistakes.

Conclusion

In summary, this essay is inconsistent in its use of transitional phrases, uses non-committal language despite being prompted to be committal, and restates information it has already given in different wording. These issues produce a lack of cohesion in the argument. The essay also fails to validate any of its claims, lists no sources from which it drew information, and credits none of the material the language model was trained on; these omissions leave the essay without credibility. Likewise, ChatGPT fails to call the audience to action or to make a proper argument despite being directed to generate an argumentative essay. This essay would not be accepted at the professional or graduate level and would receive a poor grade in most undergraduate-level writing courses. While the response was generated far faster than any human could manage, the execution was haphazard and should be read as an example of how AI fails to replicate human critical thinking.

The sole reason it would be beneficial to interact with this model for similar writing purposes would be to demonstrate that it is not currently capable of writing truly flawless essays and should not be relied upon for academic or professional work.

