Creative Work and Copyright

Who owns AI-generated art, whose work trained the models, and who is paying the price

Learning Objectives

By the end of this module you will be able to:

  • Explain the current legal status of AI-generated creative works under US copyright law.
  • Describe the contested legal territory around using copyrighted works to train AI models.
  • Identify which creative labor market segments face the highest displacement risk and why.
  • Evaluate the democratization argument for AI creative tools — what evidence supports it and what limits it.
  • Distinguish augmentation from automation in creative contexts and explain where the ratio is shifting.

Core Concepts

Under US law, copyright protection attaches only to works created by human beings. Works generated autonomously by AI — without meaningful human creative input — cannot be registered for copyright protection. The DC Circuit Court of Appeals confirmed this in Thaler v. Perlmutter (March 2025) and denied rehearing en banc in May 2025, leaving the precedent settled for now.

The test for human-AI collaborative works is one of degree: protection is available if the human exercised sufficient creative control over the final expression. The dividing line is not whether AI was involved, but whether a human determined the creative elements.

Prompts alone are not enough

The US Copyright Office has determined that typing a prompt does not constitute sufficient authorship. Because AI systems are non-deterministic — the same prompt produces different outputs — the prompter cannot be said to have determined the creative expression in advance. Human authorship requires more than instruction; it requires control over the resulting expression.
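
The non-determinism point is mechanical: generative models sample each step of their output from a probability distribution, so the same prompt yields different results on different runs. A minimal sketch of that sampling step (the toy distribution and function name here are illustrative, not any real model's API):

```python
# Minimal sketch: why identical prompts produce different outputs.
# A generative model picks each token by sampling from a probability
# distribution, not by deterministic lookup. All values are hypothetical.
import random

def sample_next_token(distribution: dict[str, float]) -> str:
    """Draw one token at random, weighted by the model's probabilities."""
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights)[0]

# Hypothetical next-token probabilities for one and the same prompt
next_token_probs = {"sunset": 0.40, "storm": 0.35, "harbor": 0.25}

# Five "runs" of the same prompt: the continuation varies each time
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Because the prompter cannot know in advance which continuation the system will pick, the Office's view is that the prompter has not determined the expression.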

The training data problem

Generative AI models are trained on vast corpora of creative works — visual art, music, written text. In most cases this happened without explicit permission from or compensation to the copyright holders. The legal status of this practice is genuinely unsettled.

AI developers have invoked the fair use doctrine as their defense. The US Copyright Office's May 2025 report takes a nuanced position: fair use is context-dependent. Where AI outputs are substantially similar to training data, there is a "strong argument" for infringement. Where AI-generated content competes with and diminishes licensing opportunities for original creators — in illustration, voice acting, journalism — the fourth fair use factor (market harm) weighs against a fair use defense.

The EU has taken a transparency-first regulatory approach: the EU AI Act, in force since August 2024, requires providers of general-purpose AI models to publicly disclose a sufficiently detailed summary of training data, including what copyrighted works were used.

The authorship identity paradox

Even when humans do exercise significant influence over AI-generated content, they often do not feel like the author. Research on what has been called the "AI Ghostwriter Effect" finds that increased personalization of AI outputs does not increase a user's sense of ownership. This creates an unusual gap between operational authorship (you shaped the output) and perceived authorship (you don't feel you made it).

Major academic publishers have responded by universally prohibiting attribution of authorship to AI tools while requiring explicit disclosure of AI use — on the grounds that authorship carries legal and ethical accountability that only humans can bear.

Narrative Arc

The lawsuits arrive

The abstract legal debate became concrete in January 2023, when artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action against Stability AI, Midjourney, and DeviantArt. Their claim: their work was used to train image generators without "the three Cs — credit, consent, and compensation." A federal judge allowed core copyright, trademark, and inducement claims to proceed to discovery in August 2024.

Getty Images followed with its own suit against Stability AI, submitting evidence that Stable Diffusion outputs contained distorted Getty watermarks — direct proof that protected images were reproduced, not merely learned from.

In December 2023, the New York Times sued OpenAI and Microsoft, alleging that ChatGPT could produce verbatim or near-verbatim reproductions of NYT articles. The case remains in discovery and motion practice as of April 2025.

In the music industry, the German performing rights society GEMA sued OpenAI in November 2024 and Suno in January 2025. The RIAA and music labels had sued Suno AI and Udio for unauthorized use of copyrighted music in June 2024. Most music copyright holders remain unaware of how their works contributed to training data and have received nothing for that use.

The same systems that generate new creative output were built, at scale, on creative work that was taken without permission. The lawsuits are the industry's attempt to price that debt.

The market responds before the courts do

While litigation moves slowly, the labor market is already adjusting — not in favor of creative workers. Within eight months of ChatGPT's release, demand for automation-prone freelance jobs on platforms like Upwork had declined by 21%, measured across 1.3 million job postings. Freelancer earnings fell by roughly 5% on average.

The music industry faces the largest projected losses. A December 2024 CISAC global study representing over 5 million creators projects that music creators risk losing 24% of their revenues by 2028, with cumulative losses of $12.7 billion over five years. An independent Australian study of 4,274 songwriters and composers arrives at a nearly identical figure: 23% potential damage.

Democratization: a real effect with real limits

The countervailing case is genuine. For novice musicians without formal training, AI music tools measurably lower technical barriers to entry, making composition and production accessible without years of practice or expensive equipment. Digital distribution platforms have eliminated traditional gatekeeping: independent filmmakers no longer need festival selection or studio deals to reach global audiences. New AI-enabled art genres — neural impressionism, algorithmic surrealism, latent-space expressionism — are genuinely new forms that could not exist without computational tools.

But academic research cautions that "democratization" is often more rhetoric than reality. The framing conflates technical accessibility with democratic outcomes: lowered barriers to entry do not automatically produce equitable access, diverse creative voices, or sustainable careers. Without deliberate accessibility efforts, benefits concentrate among groups that already have resources and technical literacy. And the same AI tools that enable solo creators are projected to channel $4 billion in streaming and library revenues to technology companies rather than human creators by 2028.

Compare & Contrast

Augmentation vs. automation in creative work

The terms "augmentation" and "automation" carry significant weight in how AI's impact on creative work is framed — and they can mislead.

Current data suggests that 57% of AI adoption in creative work is framed as augmentation and 43% as automation. The augmentation framing — AI as a tool that helps human creators work faster or better — sounds benign. The critical nuance is that augmentation does not prevent displacement. An AI-assisted workflow that lets one person do what three people previously did still eliminates two jobs. Augmentation often reduces total labor time needed even when no individual job title disappears.
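
The arithmetic behind that caveat can be made explicit. A toy calculation, with the 3-to-1 productivity ratio taken from the example above and the hours and team size assumed for illustration:

```python
# Toy illustration of augmentation reducing total labor hours.
# Numbers are hypothetical; the 3x productivity ratio echoes the
# one-person-does-the-work-of-three example in the text.

team_size_before = 3            # copywriters needed pre-AI (assumed)
hours_per_worker = 40           # weekly hours per worker (assumed)
productivity_multiplier = 3.0   # AI-assisted worker matches the old team

hours_before = team_size_before * hours_per_worker    # 120 hours/week
hours_after = hours_before / productivity_multiplier  # 40 hours/week

# Same output, same job title, but two-thirds of the labor demand is gone.
workers_needed_after = hours_after / hours_per_worker
print(f"weekly hours: {hours_before} -> {hours_after:.0f}")
print(f"workers needed: {team_size_before} -> {workers_needed_after:.0f}")
```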

The skill-based divergence is sharp. Workers with advanced digital capabilities and strategic thinking skills see productivity gains and stable or growing earnings. Those in automation-prone roles see demand for AI-exposed skills fall by 24% per quarter. The gap amplifies pre-existing inequalities.

Fig 1. How AI affects creative labor market segments differently.
[Figure: two-panel comparison. Augmentation (57% of AI adoption): senior copywriters, brand strategists, UX designers, film directors, concept artists; earnings stable or rising; skill adoption +15%. Automation (43% of AI adoption): entry-level illustration, SEO copywriting, logo/postcard design, music library tracks, stock image creation; earnings –5% average; demand –21% to –24%. Callout: augmentation still reduces total labor hours required.]

The democratization claim vs. its limits

| What the evidence supports | What the evidence complicates |
| --- | --- |
| AI lowers technical barriers for novice musicians, visual artists, and solo filmmakers | New barriers emerge: prompt engineering requires language proficiency (especially English) and technical literacy |
| Digital distribution platforms eliminate traditional gatekeepers | AI adoption can paradoxically increase platform market concentration, shifting value away from creators |
| AI enables new art genres (neural impressionism, latent-space expressionism) | Writing assistance pushes non-Western authors toward Western stylistic norms — homogenizing rather than diversifying |
| Diverse voices from underrepresented regions can now self-publish and distribute globally | Benefits concentrate among groups with existing resource access unless tools are deliberately made accessible |
| Narrative diversity expands as low-budget filmmakers challenge studio dominance | Projected revenues concentrate in tech platforms, not the creators whose work trained the models |

Boundary Conditions

The current US legal consensus — no copyright for purely AI-generated work, potential copyright for AI-assisted work with sufficient human control — is a snapshot of doctrine in motion. Several conditions could change it:

Pending litigation outcomes. The NYT v. OpenAI, Getty v. Stability AI, and Andersen v. Stability AI cases could produce precedents that either constrain or expand what constitutes permissible training data use. Fair use outcomes are context-dependent: the same training practice that is fair use for one purpose may not be for another.

Jurisdiction matters. The EU AI Act imposes transparency requirements that US law does not. Creators in different jurisdictions operate under substantially different rules, and those rules are diverging rather than converging.

The human control spectrum is blurry. The principle that "sufficient human creative control" enables copyright protection is sound in theory, but the threshold is not defined. A human who selects from 50 AI-generated outputs, edits extensively, and arranges elements into a final composition is in a different position than one who accepted the first output verbatim — but where exactly the line is drawn will be determined case by case.

Where the labor market analysis breaks down

The empirical data on creative labor displacement is real, but it applies most clearly to platform-based freelance markets. In-house creative teams, unionized entertainment workers, and creators in markets with strong collective bargaining may experience different dynamics. The SAG-AFTRA and WGA strikes of 2023 resulted in AI protections in new contracts — an outcome that is not available to unorganized freelancers.

The entry-level pipeline problem

The concentration of AI displacement at entry-level creative jobs is a second-order problem that the immediate earnings data does not capture. Entry-level commissions are how new illustrators, designers, and copywriters build skills and portfolios. If that market disappears, the path to senior creative roles narrows — potentially creating a generation gap in creative expertise even if senior roles remain resilient for now.

Where the democratization argument is strongest and weakest

The democratization case is strongest for genuinely new access — specifically, for creators who previously faced hard technical barriers (learning music theory, mastering drawing fundamentals) and who can now express creative ideas that were previously locked behind skill prerequisites. It is weakest when applied to professional creative markets, where AI's effect is competitive displacement rather than new participation.

The argument is also weakest when the same AI tools producing "democratic access" were built on uncompensated use of the work of professional creators from precisely the communities the democratization narrative claims to serve.

Key Takeaways

  1. AI-generated works cannot be copyrighted under current US law. The human authorship requirement is established doctrine, confirmed by the DC Circuit in 2025. Writing a prompt is not sufficient — human creative control over the final expression is the test.
  2. Training data is the unresolved legal and ethical fault line. AI models were predominantly trained on copyrighted works without consent or compensation. The legal question of whether this constitutes fair use is genuinely contested, with pending litigation and a Copyright Office report that refuses to call it clearly permissible.
  3. Entry-level creative work is being squeezed first and hardest. A 21% drop in demand for automation-prone freelance jobs, a 17% drop in image-generation gigs, and projected 24% music revenue losses by 2028 are measurable, not speculative. The impact is disproportionately concentrated on women, who are overrepresented in affected creative and administrative roles.
  4. Augmentation does not prevent displacement. The majority of AI adoption in creative work is framed as augmentation rather than automation, but augmentation still reduces total labor hours. Skill-based divergence is sharp: those with strategic and digital skills benefit; those competing on commodity output face compressing rates and declining demand.
  5. The democratization case is real but partial. AI genuinely lowers entry barriers for novice creators and enables new art forms and narrative diversity. But technical accessibility is not the same as equitable access, and the rhetoric of democratization can obscure the simultaneous concentration of economic value in platforms and tech companies, built on uncompensated use of professional creators' work.

Further Exploration

Primary legal sources

Labor market research