This article is part of Access Partnership’s series ‘The New Privacy Playbook: Adapting to a Shifting Global Landscape’, which explores the evolving landscape of data governance – highlighting both the obstacles and the innovations emerging across sectors and regions.
When algorithms meet art
Picture yourself at an ABBA concert – not a tribute act, but the band’s younger selves dancing across the stage as holograms. The experience feels magical, almost transcendent. But beneath the surface of this technological marvel lies a complex web of questions: Who owns the rights to these digital recreations? Where is the line between inspiration and theft?
This isn’t just a philosophical debate. Creative industries are projected to account for 10 percent of global GDP by 2030. As generative AI reshapes how content is created, distributed, and monetised, copyright has become the battleground on which the future ownership of creative works will be determined.
In 2023 alone, at least 10 major lawsuits were filed against AI developers by authors, artists, and media companies alleging unauthorised use of copyrighted works on an unprecedented scale. Neither side shows signs of backing down.
The global regulatory patchwork
What’s emerging is not a unified approach but a fragmented landscape that reflects different cultural, economic, and legal traditions:
The European Union’s regulatory approach continues to evolve amid industry pushback. It currently combines text and data mining exceptions with transparency obligations under the AI Act, which requires developers of general-purpose AI models to publish summaries of the content used to train them.
The United States’ legal framework includes the “fair use” doctrine, which AI developers claim permits broad training on copyrighted works – though courts have begun scrutinising these claims. The February 2025 partial summary judgment in Thomson Reuters v. Ross Intelligence held that Ross’s use of copyrighted Westlaw headnotes was not fair use, as it served a similar commercial purpose and was not transformative.
Meanwhile, South Korea treads carefully, balancing the protection of its influential creative sector (where K-pop and K-dramas generate billions in exports) with its ambition to lead in AI development through guidelines that encourage collaboration between rightsholders and technologists.
The Philippines mirrors this cautious stance through guidelines that encourage seeking explicit permission when in doubt. By contrast, Japan and Singapore explicitly permit commercial AI training under broad copyright exceptions, positioning themselves as attractive hubs for AI innovation.
This regulatory fragmentation creates an uneven playing field that threatens both innovation and rights protection. A system where rules change at every border isn’t sustainable in a digital ecosystem where data flows globally and AI models are trained across jurisdictions.
Beyond the false binary
The conversation around AI and copyright has become polarised. On one side stand those who frame any restriction as an impediment to progress; on the other, those with concerns ranging from creative livelihoods to scientific integrity – including fears that AI systems perpetuate retracted or outdated information due to the lack of data provenance and update mechanisms.
However, technology and creativity have always coexisted, often uneasily at first, but eventually symbiotically. The printing press, photography, recorded music, radio, television, and the internet were initially perceived as threats to existing creative industries. Each ultimately expanded the creative landscape rather than diminishing it.
The question isn’t whether AI will change creative expression – it will – but whether these changes benefit the broader ecosystem rather than just a handful of technology companies.
The third path: adaptive copyright frameworks
What’s needed isn’t a winner-takes-all approach but an adaptive framework that acknowledges both the value of original creative works and the potential of AI technologies. This third path would be characterised by:
Transparency and attribution systems: Requiring AI companies to disclose the source material used in training could help address concerns around appropriation, while potentially creating new revenue streams for copyright holders whose work proves particularly valuable in training contexts.
Tiered usage rights: Not all creative works or AI applications are equal. An adaptive framework would distinguish between different types of uses – research vs. commercial, transformative vs. derivative – with corresponding obligations tailored to impact rather than technology.
A robust licensing marketplace: Rather than relying solely on litigation or blanket exceptions, new licensing models could emerge that are specifically designed for AI training, potentially including:
- Collective licensing agreements negotiated between creator guilds and AI developers
- Automated micropayment systems that compensate creators when their work substantially influences AI outputs (a toy sketch of this idea follows the list)
- Training data marketplaces where creators can opt in to compensation
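To make the micropayment idea concrete, the following minimal Python sketch splits a royalty pool across creators, assuming some upstream attribution system has already scored each work’s influence on a given output. The `allocate_micropayments` function, the influence scores, and the 5 percent “substantial influence” threshold are all hypothetical illustrations, not a description of any existing system.

```python
# Hypothetical sketch of an automated micropayment allocator. Assumes an
# upstream attribution system has already scored how much each registered
# work influenced a given AI output (scores in [0, 1]).

from dataclasses import dataclass


@dataclass
class Attribution:
    creator: str   # rights-holder identifier (illustrative)
    work_id: str   # identifier of the influencing work
    score: float   # influence score from the assumed attribution system


def allocate_micropayments(pool_cents: int,
                           attributions: list[Attribution],
                           threshold: float = 0.05) -> dict[str, int]:
    """Split a royalty pool (in cents) among creators whose works
    substantially influenced an output, proportional to their scores."""
    substantial = [a for a in attributions if a.score >= threshold]
    total = sum(a.score for a in substantial)
    if total == 0:
        return {}
    payouts: dict[str, int] = {}
    for a in substantial:
        payouts[a.creator] = payouts.get(a.creator, 0) + round(
            pool_cents * a.score / total)
    return payouts


if __name__ == "__main__":
    scores = [
        Attribution("alice", "song-001", 0.40),
        Attribution("bob", "photo-123", 0.25),
        Attribution("carol", "essay-777", 0.02),  # below threshold, excluded
    ]
    print(allocate_micropayments(pool_cents=1000, attributions=scores))
    # {'alice': 615, 'bob': 385}
```

The hard problem, of course, is the attribution scoring itself; the payout arithmetic is the easy part.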
Technical safeguards: Watermarking and fingerprinting technologies could help identify when copyrighted works have been used in training or replicated in outputs, creating accountability without requiring manual monitoring.
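As one illustration of the fingerprinting idea, the sketch below uses hashed word shingles (a standard near-duplicate detection technique) to estimate how much of a registered work reappears verbatim in an AI output. Real deployments would rely on far more robust perceptual hashing and watermarking; the function names and the five-word shingle size are assumptions for illustration only.

```python
# Minimal sketch of text fingerprinting via hashed word shingles, shown
# only to illustrate how replicated passages could be flagged; production
# systems use far more robust perceptual hashing and watermarking.

import hashlib


def fingerprint(text: str, k: int = 5) -> set[str]:
    """Hash every k-word window ('shingle') of the text."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + k]).encode()).hexdigest()
        for i in range(max(len(words) - k + 1, 1))
    }


def overlap(output_text: str, registered_work: str, k: int = 5) -> float:
    """Fraction of the registered work's shingles reappearing verbatim."""
    work_fp = fingerprint(registered_work, k)
    out_fp = fingerprint(output_text, k)
    return len(work_fp & out_fp) / len(work_fp) if work_fp else 0.0


if __name__ == "__main__":
    work = "the winner takes it all the loser has to fall"
    output = "as the song goes, the winner takes it all the loser has to fall"
    print(f"overlap: {overlap(output, work):.0%}")  # high overlap -> flag
```

A rights registry could run such checks automatically over model outputs, flagging high-overlap matches for human review rather than requiring manual monitoring of everything.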
A blueprint for coexistence
Implementing this vision requires a coordinated approach across stakeholders:
For policymakers, the priority should be creating frameworks that foster dialogue rather than entrenching positions. This means:
- Establishing collaborative governance models: Multi-stakeholder forums where creators, technologists, and public interest representatives jointly develop standards and best practices for responsible AI development – such as the InterNational Committee for Information Technology Standards (INCITS).
- Creating regulatory sandboxes: Safe spaces where new approaches to copyright and AI can be tested without triggering immediate liability, allowing for evidence-based policy development.
- Investing in creator-focused AI: Public funding for AI tools designed specifically to augment human creativity rather than replace it, potentially shifting the narrative from competition to collaboration.
For AI developers, responsible innovation means:
- Transparency and consent mechanisms: Evaluating approaches to dataset transparency (where possible) and creator consent that align with evolving industry standards and legal frameworks.
- Designing for attribution: Developing systems that can trace influences and inspirations in AI outputs, potentially creating new connections between original creators and new audiences (a minimal sketch of one such approach follows this list).
- Exploring fair compensation models: Working with creators and their representatives to develop compensation mechanisms that share the value generated when creative works contribute to AI capabilities.
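As a minimal sketch of the attribution idea in the second bullet above, the code below ranks registered works by the cosine similarity of their embeddings to an output’s embedding, surfacing the nearest works as candidate influences. The embeddings are toy random vectors here; in practice they would come from a separate model, and genuine influence attribution remains an open research problem.

```python
# Hypothetical sketch of one attribution approach: retrieve the training
# works whose embeddings sit closest to an output's embedding and surface
# them as candidate "influences". The embeddings are toy vectors here.

import numpy as np


def nearest_influences(output_vec: np.ndarray,
                       work_vecs: dict[str, np.ndarray],
                       top_n: int = 3) -> list[tuple[str, float]]:
    """Rank registered works by cosine similarity to the output embedding."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(((wid, cosine(output_vec, v))
                     for wid, v in work_vecs.items()),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    registry = {f"work-{i}": rng.normal(size=8) for i in range(5)}
    # Simulate an output that closely echoes one registered work.
    output = registry["work-2"] + 0.1 * rng.normal(size=8)
    for work_id, score in nearest_influences(output, registry):
        print(f"{work_id}: similarity {score:.2f}")
```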
For creative communities, engagement rather than resistance means:
- Establishing common ground: Identifying areas where technology companies and creative industries can align on acceptable uses while more contentious issues proceed through legal channels.
- Experimenting with new business models: Exploring how AI technologies can augment rather than replace human creativity, potentially opening new markets and opportunities.
- Developing community standards: Creating industry-specific guidelines for ethical AI use that reflect the unique concerns of different creative sectors.
The Creative Commons we need
The phrase “creative commons” traditionally refers to openly licensed content, but perhaps it’s time to expand its meaning. What we need is a genuine commons: a shared space where technology and creativity coexist and mutually reinforce one another rather than competing in a zero-sum game.
This isn’t about choosing sides. It’s about recognising that we’re building something entirely new – a creative ecosystem that combines human ingenuity with computational capabilities in ways we’re only beginning to understand.
While ABBA may have famously sung about a world where “the winner takes it all”, that approach would be disastrous for the future of creativity. Technology isn’t a weapon to be wielded; it’s an instrument to be played – one that requires many hands working in harmony to produce its most beautiful melodies.
Shaping the future
Access Partnership’s specialised team works at the intersection of technology policy and creative industries. We help organisations navigate this complex landscape through:
- Multi-stakeholder dialogue facilitation between technologists and creative communities
- Policy monitoring and analysis across key jurisdictions
- Development of responsible AI governance frameworks that respect creative rights
- Strategic advocacy for balanced approaches that promote both innovation and creator interests
For organisations interested in shaping the future of copyright in the AI era, contact Öykü Özfırat and Spencer Smith at [email protected] and [email protected].