The Ethics of AI and Creatives—Whose Rights, Whose Influence, and Where’s the Line?

By Eric Anders

If you’ve been reading the latest headlines, you’ll know that the question of AI ethics—particularly around copyright and creative works—is picking up steam. A recent Guardian editorial argues that “Big Tech Must Pay” when AI systems are trained on copyrighted materials. But what about independent artists, those who have long struggled to see meaningful returns from their work—even when it is directly used and broadcast? For many of us, copyright law has never really helped protect our work from misuse or under-compensation.


Yet, as new AI tools emerge, so too does the concern that our creative output could be leveraged in ways that neither infringe legal boundaries nor reward us for our contributions. So, are we better or worse off as AI grows ever more sophisticated?


In one sense, the very nature of generative AI—its capacity to sample, iterate, and remix vast amounts of data—makes it easy for developers or end users to create something “new” that, strictly speaking, isn’t infringing on existing works. AI can generate fresh melodies, chord progressions, or lyrical ideas that resemble pieces of the original but remain distinct enough to evade copyright protections. As a result, the line between legitimate artistic influence and unauthorized copying becomes blurrier. An algorithm doesn’t need to lift a direct sample of your track to borrow its essence; it can mimic the mood, style, or melodic contours in ways that remain just shy of legally actionable. From an artist’s standpoint, this feels eerily like a “perfect crime”: your influence is there, but you have no direct evidence to claim your rights or seek compensation.


On the flip side, AI can arguably help broaden the cultural conversation: it synthesizes myriad influences, possibly introducing your style or themes to new audiences in roundabout ways. Some might say this is beneficial because all artistic expression exists in conversation with the past—why not embrace an even more fluid exchange of ideas facilitated by technology? Others respond that while cultural borrowing has always existed, the speed and scale at which AI can replicate or approximate certain styles vastly outstrips anything we’ve seen before, raising urgent questions about fairness and compensation.


Then there is the pragmatic reality that copyright law often only kicks in once a conflict escalates into a legal challenge. Independent artists rarely have the resources to litigate nuanced cases where an AI-generated piece feels “too close” to one’s own work. As AI evolves, it can easily occupy that nebulous space where it doesn’t precisely violate the letter of the law, yet effectively drains value from the original creator’s efforts. Taken together, this creates a situation in which we could be “worse off” because the existing legal frameworks—already weak for smaller artists—may become even less effective against ethically questionable but legally permissible AI-driven imitations.


Still, some argue we might be “better off” in a different sense: AI-based platforms could, in theory, democratize access to new audiences or new forms of collaboration—if done right. Suppose, for instance, a platform clearly tags which artists’ styles contributed to a new AI-generated piece and offers micro-payments or royalties. This might expand an artist’s exposure and revenue streams. The problem is that none of these potential systems are widely in place, and the larger companies driving AI innovation haven’t prioritized small creator compensation. So the promise of “better off” remains speculative, while the immediate reality for many of us is the risk that our creative fingerprints wind up scattered across countless AI outputs with no recognition or reward.


(See “AI as a Supplement to Humanity: Art, Care, and the Ethics of Humanness in The Authors of Silence and Beyond” for my more academic and in-depth discussion of related topics, such as the core differences between how humans and AI think, create, and exist. In addition to being a singer-songwriter, I am an academic who writes about the Digital Humanities, among many other things. For a 30-year-old piece on Digital Humanities, written before it was called “Digital Humanities,” see my “Enabling Cyborg Repair” from 1995.)


Small Artists and the Toothless Bite of Copyright

Let me start with a personal example. My song Big World Abide was used in a Dutch mega-soap opera called Goede Tijden, Slechte Tijden (“Good Times, Bad Times”). Now, if copyright law worked as advertised, I would have received a licensing fee or royalty. Instead, they used my recording—and I never got a dime. BMI—the performing rights organization—said they couldn’t help me. It was BMI’s job to make sure my rights as an artist were not violated, but it had no interest in helping me get money from the money-behemoth behind Goede Tijden, Slechte Tijden. BMI is a business, and so it follows the money. This was not a grey-area case of “inspiration” or “influence” but the direct use of my actual track. And still, legal recourse wasn’t realistically available to me.


To be fair, I have previously been compensated for sync placements: Big World Abide garnered $4,500 when it was featured in the Courteney Cox show Dirt, and So Wrong brought in $4,500 when it appeared in the film Man in the Chair with Academy Award winner Christopher Plummer. These examples show that, in theory, the system can pay an independent artist—but only if the user puts the right structure in place. In practice, though, the system fails more often than not, especially when a small artist’s work is being used. Let’s face it: the legal frameworks around fair compensation and licensing are labyrinthine, slow-moving, and heavily stacked against the “little guys.”


Influence vs. Copying: The Gray Zone in Creativity

Existing copyright law already struggles with the balance between guarding creators’ rights and allowing organic cultural exchange. Popular music is built on homage, shared chord progressions, repeated melodic ideas, and references that evoke certain eras or styles. Nothing stops one artist from mimicking a chord progression or “the vibe” of another artist’s song; indeed, this is part of how music evolves.


When the law steps in, it’s usually because a threshold of substantial similarity has been crossed. But how much of an existing work needs to appear in a new one before we call it theft? And how is that measured in a digital age, especially once AI enters the scene? If the use of my work had happened today, Goede Tijden, Slechte Tijden could have hired an AI tool that analyzed my composition and produced something similar for their scene—though it probably would have sucked as much as most AI does—but not so similar that it would count as a legal infringement. I would effectively get the same level of credit: zero.


One difference is that an AI might be able to replicate that “close enough” vibe more precisely and more quickly than a human composer under time and budget constraints. I’m confident that losing the human element would yield something of lesser quality; then again, a soap opera might not need that extra quality boost. But that doesn’t matter: if the soap opera wants human quality, it should pay the human who provides it. (AI, by the way, could actually work much better than BMI at tracking such usage.)


But if the soap opera is fine with AI quality, is it unethical for the soap opera to use its machine art? Is it the training or below-legal-level sampling that is unethical? Most importantly, what separates the human art from the machine art? Does machine art always suck? This was one question I attempted to explore when I collaborated with AI on my play-course, The Authors of Silence.


Aren’t Human Artists Also “Trained” on Copyrighted Materials?

Most human artists are themselves “trained” on copyrighted materials. So why is AI training so much more suspicious and threatening than human training? In principle, there’s little difference between me internalizing Paul McCartney’s bass lines or Elton John’s chord progressions over years of listening and an algorithm sifting through their entire discography. Yet the industrial scale—and the commercial profits—of AI learning introduce new complexities. My personal “training” might spark intangible inspiration for one new song. An AI model’s training could feed into thousands of derivative works or platform features, potentially generating huge revenues that don’t trickle back to the original creators.


One complicating factor is how to measure what has been taken or used. If a machine-learning model is ingesting an entire catalog, it’s essentially building an internal statistical representation of style, melody, or chord progressions—subtle, fragmented “learned” bits from a massive pool of data. Unlike a direct sample, you can’t point to a specific 30-second clip and say: “There! That’s the part they stole.” By the time the AI generates something “new,” the influence is spread across countless micro-patterns.


Even if we decided Big Tech should pay for that “learning,” how would we assign a dollar value to it? Should each artist whose work is in the training dataset receive a fraction of a cent per minute of playback? What if the AI’s output is only vaguely reminiscent of certain harmonic patterns but draws more directly from another artist’s phrasing? This lack of direct traceability creates a dilemma. Should we demand that AI developers compensate those whose creations effectively train these models, just as established artists can demand royalties when someone literally copies their hooks or melodies? Even that question is hard to answer, because it presumes we can identify precisely whose hook or chord progression is being copied. The “learning” is so diffuse that it’s nearly impossible to pin down and monetize the way a direct sample would be.


The Guardian’s Call: Does It Help Independent Creators?

The Guardian piece underscores that the big tech companies behind AI systems should pay rights-holders if their copyrighted works are used in large-scale model training. In principle, this is a compelling argument—AI does need “fuel,” and that fuel includes huge datasets of texts, images, and audio created by real human beings. But it’s debatable whether the benefits of any new legal frameworks or negotiated settlements will trickle down to small creators who already struggle to enforce their copyrights.


If the past is any indicator, major corporations and big-name artists might benefit from new AI-licensing revenues while smaller players continue to watch our work get aggregated, chopped, and repurposed. Remember, I’m someone who saw a major show from the Netherlands use my actual track—“Big World Abide”—without paying a penny. So if these AI deals rely on robust copyright enforcement and well-resourced lawyers, it’s fair to predict that smaller creators won’t see much benefit.


Is Being an Influence So Bad?

Let’s consider a broader perspective. Creators, from Elton John to Paul McCartney to an unknown busker on a side street, inevitably influence one another in mysterious and often invisible ways. There’s a case to be made that the collective stew of cultural production should be partially “public” in the sense that we all draw from a shared well of references and traditions. That’s the lifeblood of art—one person’s chord progression might shape someone else’s melodic choices down the line.


So if a new AI system trains itself on millions of songs, including mine, and synthesizes them into a fresh composition—one that does not directly sample my waveform but is “inspired” by the underlying patterns—am I truly harmed? Or is it akin to a hundred other composers hearing my music and unconsciously incorporating elements into their own works?


Yet this raises a deeper question: isn’t AI also being “creative” when it learns from the same wide pool of cultural materials that humans draw upon? If creativity is, in part, the capacity to absorb, reinterpret, and recombine influences, then to what extent does an AI’s “learning” process differ from the ways we internalize countless songs, films, and works of art over the years? AI, too, might be said to have a taste of sorts—albeit a taste guided by algorithms that rank popularity, relevance, or stylistic coherence. Some might argue that this “taste” tends to be inferior, especially if it’s based on the most common denominators of popularity. But it still raises the provocative idea that AI is not merely a passive aggregator—it might be engaged in a process that mirrors (or parodies) human creativity itself.


When Does It Become Unethical?

Perhaps the key isn’t if creators are influenced by each other, but how. We generally accept that influences swirl around us, but there’s a moral dimension when a company or platform automates and monetizes that influence at scale, all the while not sharing the profits or even credit with the original creators.


Big Tech profiting handsomely from AI tools built on training sets derived from countless independent works—works the rights-holders don’t get paid for—seems ethically problematic. At a certain point, we’re dealing with more than just creative cross-pollination; we’re dealing with an economic system that systematically disadvantages smaller creators. AI companies might say they’re simply doing what the brain does—process influence from myriad sources—but the difference is that billions of dollars are at stake, and it’s unclear how we define or enforce “fair compensation.”


Still, the question lingers: is AI’s brand of creativity fundamentally different from ours, or does it merely accelerate and scale up the same processes of borrowing, remixing, and reimagining that we humans have always used? If it’s essentially the same, then are we merely terrified that AI could do it faster, cheaper, and maybe, in some rare cases, better? Do we fear that these cases will get fewer and farther between as AI evolves to be more powerful?


Or is there something uniquely human in the subjective dimension of taste, style, and personal experience that an AI simply can’t replicate?


Perhaps what truly threatens us is the unsettling notion that if AI can “do” art—often a defining feature of our humanity—then it may be inching closer to being human than we’d like to believe. After all, if an algorithm can compose music, paint pictures, or write sonnets that feel meaningful to us, doesn’t that blur the line between maker and machine? On one hand, it challenges our sense of artistic pride and originality; on the other, it raises urgent questions about whether AI has begun to claim a piece of our most cherished human territory (territory often associated with our gods): the act of creative invention itself.


Why I’m Still Writing a Play with AI

Mark O’Bitz and I would never use AI to write music—why would we? Frankly, it never even occurred to me before drafting this paragraph. To my mind, enlisting a machine to compose my songs would remove one of the greatest pleasures of being an artist: the very act of creation. After all, would I ask AI to hug my children or go on a date with my wife? Some experiences feel so intimately human that outsourcing them to an algorithm seems absurd.


Yet my play, The Authors of Silence, does take up the question of AI and humanity in a more deliberate way. It centers on what philosopher Jacques Derrida calls “archive fever”—the fascination, both digital and otherwise, with collecting and organizing knowledge—and explores how this “fever” manifests in AI itself. I view AI as just another tool that reshuffles and recontextualizes bits of cultural memory—dialogues, phrases, influences—that have always informed human creativity in subtler, less algorithmic forms.


A more nuanced definition of “archive fever,” informed by Derrida’s dialogue with Freudian theory and its contemporary resonances, goes beyond a simple fascination with collecting knowledge. It describes a paradoxical compulsion—rooted in both the desire to preserve and the drive toward destruction—that compels us to gather, sort, and endlessly reinterpret traces of our past. In the digital age, this feverish need to archive intensifies, as we multiply and virtualize our records even while recognizing the ethical implications of what—and who—gets preserved or forgotten. By highlighting how each new layer of documentation can obscure as much as it reveals, archive fever also forces us to confront the deeper stakes of how power, care, and memory intertwine in our cyborgian, data-saturated world.


As a “strictly human” playwright, every line of dialogue I write is already shaped by the decades of TV shows, films, and plays I’ve consumed, often without noticing how deeply they imprint my work. AI performs a similar function with code, except it does so overtly and at massive scale. But in my role as an “intentionally cyborgian” playwright, I welcome AI’s output—not as a replacement for my craft, but as evidence of this archive fever I plan to highlight in my play-course. By letting AI generate fragments of text, I can show my students how these machines surface the echoes of our shared cultural memory—echoes that shape us even when we think we’re creating something entirely new.


From an ethical standpoint, I don’t see how tapping an AI’s generative capabilities to brainstorm or refine lines is theft, any more than drawing on the rhythms of Sorkin or the comedic timing of a Woody Allen screenplay might be. Provided I’m not plagiarizing full passages verbatim from someone else’s script, no one is “injured.” Those influences are the preexisting conditions of culture. And if I have no illusions that my writing is purely original in a cosmic sense, I can’t see how AI’s “inspiration” is fundamentally different.

In a recent blog post on The Undecidable Unconscious, I argue that AI can operate as a supplement to human creativity rather than a threat to it—a supplement in the Derridean sense of both completing and complicating what it means to be human. Indeed, another introduction to The Authors of Silence explores how the so-called “digital archive fever” might integrate with our innate drive to create and share stories, rather than overshadow it. For me, the tension AI brings to the creative process is far more generative than destructive.

I have yet to hear an AI-generated song that competes with my personal taste or with what I consider truly “good” popular music. If the Dutch soap opera used an AI to produce something that mimicked “Big World Abide,” I’m sure it could function well enough for the show’s needs. But do I feel artistically threatened by that possibility? No, because I’m confident AI cannot yet capture the subjective nuances and personal depth that I bring to my work. I’d be remiss to release art that I myself don’t find compelling—so I certainly don’t see an AI creation as surpassing the emotional resonance or aesthetic standards I set for my music.

Thus, from my vantage point, the real anxiety among some creatives might stem from a worry that once AI tools become sufficiently advanced, our own methods of self-expression could be “replaceable.” But I don’t see it that way. At least for now, AI’s “taste” is an emergent property of data about what’s already popular or stylistically coherent. It doesn’t come from the messy, lived experiences that shape how humans decide, “Yes, this chord progression hits me in the gut,” or “That melody is heartbreak incarnate.” In that gap between algorithmic patterning and the complexity of human subjectivity lies the fertile ground where art truly flourishes.

Toward a Fairer Future

To me, the problem is less about AI’s very existence and more about how the economics of AI use might further entrench power imbalances. Creators like me can take legal steps to protect our works, but if the existing copyright laws didn’t stop a major show from the Netherlands from using my track without payment, how likely are they to keep an AI system from munching on my chords and spitting them out in a brand-new arrangement? And if big AI companies cut deals with major labels, how does that help the rest of us?

The ethical path forward might involve:

  • Greater Transparency: Clear documentation of how AI systems gather, train, and use copyrighted materials.

  • Fairer Compensation Structures: New licensing regimes that aim to include smaller creators, not just giant catalogs.

  • Creative Credit: Some form of attribution or recognition for the broad swath of influences, even if direct copying doesn’t happen.

  • Public Engagement: Ongoing dialogue about the cultural and economic ramifications of generative AI, so we don’t default to the old story of “small creatives left behind.”

Conclusion: Influence, Inspiration, or Infringement?

Much of human creativity is about building upon existing foundations. Artists have always borrowed from other artists, whether by humming a catchy melody in the shower that transforms into a new chord progression, or by lifting cinematic stylistics. But AI amplifies and accelerates that process to unprecedented degrees. If we care about ethical fairness, we need to look at how power and profits are distributed—and whether the use of our works in training and generation crosses the line from healthy, fair-minded borrowing into ethically (and financially) exploitative territory.

For now, the law seems slow to address the experiences of smaller artists, including those who’ve watched their actual recordings get used without a penny in return. And that’s to say nothing of how intangible “influence” is. At the end of the day, the biggest unanswered question might be: how do we ensure that the unstoppable force of AI-enabled creativity doesn’t turn into yet another tool that crushes smaller voices underfoot? Yet despite the uncertainties, I remain optimistic that AI—handled ethically and used thoughtfully—can become a potent supplement to the human creative spirit, not a replacement. After all, just as I draw on countless influences from the greats who came before me, so too might AI. And as long as my own songwriting continues to move me—and hopefully others—I’ll welcome AI’s noisy company in the bustling global conversation we call art.


Written by: Eric Anders, Singer-Songwriter, with the help of ChatGPT.


Eric Anders, Ph.D., Psy.D., Psychoanalyst, Digital and Health Humanities Scholar

