
Notes from the FTC's Creative Economy and Generative AI Roundtable


""

On 6 Oct., the U.S. Federal Trade Commission hosted a virtual roundtable for members of various creative industries, including the entertainment industry, to share the impact that generative artificial intelligence has had on their professions. 

After remarks from Chair Lina Khan and Commissioner Rebecca Kelly Slaughter about the importance of developing policy around a quickly growing, largely unregulated market, the 12 industry speakers discussed how generative AI tools have reshaped creative lines of work and how their industries have been responding to these changes.

Recurring themes

In light of the Writers Guild of America and SAG-AFTRA strikes this year, the speakers noted that generative AI has been top of mind, especially as these AI tools rely on large, diverse sets of human-generated content that are collected without consent. The speakers, notably not in the privacy or AI spaces, repeated several themes and highlighted specific harms that mirrored the themes and harms presented at the recent IAPP Privacy.Security.Risk 2023 conference: data collected and used without consent, loss of control over data, no explanation or recourse, and financial damage.

The strikes only brought changes to one part of the creative economy. The National Executive Director and Chief Negotiator at SAG-AFTRA noted that generative AI can be used in innovative and helpful ways in the entertainment industry, but that progress should be achieved by augmenting rather than replacing human creativity. A member of the Negotiating Committee for the WGA-West echoed this, reminding viewers that AI was a key issue in the writers' recently concluded strike. Getting organizations on board with creative control through post-strike negotiations was an immediate solution for only part of the entertainment industry; public policy and legislation are crucial to protect the creative economy as a whole.

Artists typically don't own their works. A digital designer, screenwriter and voice actor all reiterated that neither they nor their respective guilds and trade associations have ownership over scripts written, models designed or audio produced because most of their output is work made for hire. Employment lawyers and frequent contractors may be familiar with the term: a work made for hire is a deliverable whose copyright belongs to the hiring party rather than the creator, unless the parties specifically agree otherwise by contract. Through various negotiations, writers unions have won creators compensation and greater ownership, but even these gains, together with recent developments in copyright law holding that AI-created art cannot be copyrighted, do not mean creators whose art is fed into AI tools have copyright protections or original ownership of their works.

It amounts to an unfair method of competition. The large language models underlying generative AI tools are generally trained on publicly available data and user-inputted data. The former poses a problem for members of the creative economy, especially when publicly available data can include data that should not be public at all. Some roundtable speakers determined their written works were part of certain training datasets even though they had never provided those works to publicly available sources; the AI models had instead gathered the data through web scrapers and sites hosting pirated content.

Allowing users to experiment with generative AI to, for example, write a short story in the style of a famous author, without that author's consent, attribution or compensation, causes an immediate issue when that story later appears on Amazon as a direct competitor to the original author's works, or as literature that may deceive consumers into thinking it was created by the original author.

As one speaker pointed out, this practice and the resulting displacement of human creators fall directly within the FTC's purview of enforcing against unfair and deceptive acts and practices and, as Commissioner Alvaro Bedoya previously noted, could amount to an unfair method of competition.

Emulative works create legal issues. Imagine a digital artist who works for an animation studio and signs a noncompete agreement, a legal agreement specifying that the employee cannot accept employment with the employer's direct competitors for some period of time after the employment ends. If this artist works for studio A, and a contracting artist uses generative AI trained on the digital artist's public portfolio to submit a new work to competing studio B, is the digital artist suddenly in hot water with studio A, even though she did not consent to her works being used and imitated?

A voice actors trade group founder described a not-so-hypothetical plight facing his industry. Client organizations buy the exclusive rights to a voice actor's specific recording and to the "feel" and sound of the voice actor's voice, so synthetic voice outputs created by AI can lead to legal conflict when a voice actor records a commercial for Pepsi but finds their voice being used, via AI tools, by Coke. This burden currently falls on artists even though they never consented to the process that created the problem.

Deepfakes are still a problem. While the previous artists make their living creating art, models and actors make their livelihoods from their image and likeness. One issue both models and actors have faced is the use of 3D body scans. Background actors in movies and TV shows have been made to participate in 3D body scans without being told whether their digital likenesses would appear on screen, what the scans would be used for, or whether they would be compensated for current and future uses of those scans. There is also endless literature on celebrities and models being subjected to nonconsensual deepfakes.

A related issue arose for the modeling industry when brands began using AI-generated models and influencers for gigs, replacing not just the models themselves, but the surrounding team of stylists, photographers, and hair and makeup artists. Earlier this year, clothing company Levi's announced it would test AI-generated clothing models to "increase diversity." Within one week, following backlash over digital blackface and tone deafness, the company released a statement clarifying the program was meant to support, not replace, real photoshoots. Models noted the problem still stood: like authors, screenwriters, actors, artists and others in the creative economy, they now found AI directly competing with them for their jobs.

What do we want? And when do we want it?

Although each speaker represented a different facet of the creative economy – screenwriters, print editors, software creators, authors, voice actors, models, concept artists and musicians – it quickly became evident that, because they faced the same issues with the unregulated state of AI, they shared the same goals. One speaker even stated that he "doesn't hate AI" and understands how beneficial AI innovations can be; in his industry, AI has specifically helped give human-sounding voices to those using speech synthesizers. There was general concurrence on wanting continued innovation in AI, but not at the cost of the livelihoods of those in creative fields. Specifically, these artists called for:

  • Opt-in consent obtained from creators before using their voice, likeness, performance, persona, works or other intellectual property to generate content.
  • Control over whether and how much of creators' works are used in training data.
  • Attribution when creators' works are used and compensation for that use. A few speakers mentioned payment per generation or output.
  • Education and awareness efforts, including labeling AI-generated content.
  • Transparency of what creative content is fed into AI models.
  • Algorithmic disgorgement when AI models generate content "in the style of" artists who have not consented to use of their works.
  • Enforceable legal protections for biometric data like voices and faceprints, for intellectual property created by artists, and against AI-generated content not currently covered by copyright law. There was additional support for third-party verification of audio recordings to determine whether they are ethically sourced, and separately for the AI Labeling Act introduced in the Senate in July.

Some of these proposed solutions may seem familiar to those who have been keeping up with the WGA and SAG-AFTRA strikes and negotiations. The themes that emerged from this roundtable are best summarized by Commissioner Slaughter's statement: "Technology is a tool to be used by humans. Humans are not and should not be used by technology."

