What can the power imbalance in the WGA strike teach us about AI regulation?
Logo and header design by Sara Remi Fields

A lightning-fast take on the WGA strike and AI, with special guest Yair Landau. Yair works at the intersection of business, tech, and entertainment, evolving digital media through storytelling. He was previously Vice-Chairman of Sony Pictures Entertainment, where he did the deal that brought Spider-Man to Sony, launching the modern era of superhero filmmaking, and launched Sony Pictures Digital Entertainment, bringing Hollywood studios into the digital era, including Movielink, the first digital download service to involve a majority of Hollywood studios. Yair has produced Academy Award-winning animation, having launched Sony Pictures Animation and Mass Animation, the first crowdsourced, social media-based animation production studio.

Let’s do this ⚡ These views are my own, not those of where I work.

The (shared) take 

The Writers Guild of America wrote a set of demands that are specific to AI, though the current strike is about a much broader set of issues that aim to protect writing as a sustainable career. Because of the negotiating leverage of the WGA in the entertainment industry, we have an opportunity to see a model for how both employees and employers can understand the implications of AI in the workplace. 

This WGA strike and the last one in 2007 both started at moments of key technological change (first streaming, now generative AI). The deal that ended the last strike did not give the WGA streaming-platform data transparency, which would have allowed writers to keep tying compensation to performance. That costs writers dearly to this day: at the time, they were unable to negotiate a deal that accounted for streaming platforms' emergence (now dominance). The WGA should learn from the last strike by focusing on the foundations of AI transparency from the beginning (even if full transparency isn't possible yet), to ensure there is a platform of shared knowledge to negotiate from, even if that builds over time.

The issues stated above are also intertwined with the evolution of copyright with respect to generative AI. We should be rethinking the definition of derivative work to understand usage with LLMs, and finding ways of creating AI-generated work that does not violate copyright. This may actually become the key leverage for the WGA, as the ability to use copyrighted writing or likeness in concert with generative AI will affect the broader set of employees in the entertainment industry – actors, animators, and anyone else who is part of the creation process. Let's just hope that a model comes out of this moment that is beneficial for all involved.

More info 

  • The WGA strike started on May 2nd, 2023, and is expected to be a long haul for the writers. The WGA is fighting to protect writing as a sustainable career, with demands that range from rules regarding writers' rooms, to streaming residuals, to regulating the use of generative AI. One point at issue is the process of writing with AI – when a writer can be handed AI-generated material, how AI-generated sources are covered under the MBA (the WGA's Minimum Basic Agreement), and who can dictate the process. Understandably, writers are concerned that their work will turn them into an extension of the machine.
  • The last writers' strike, in 2007, came at the advent of streaming, and there was a focus on how to protect compensation for writers in this new world. At the time, however, the WGA did not know the extent of future streaming-platform dominance and did not secure a deal designed around the norms of new media. Instead, the deal focused on tying new media to old-media performance, leaving writers with streaming residuals that are very different from what they would have earned for breakout success in traditional media.
  • Streaming platforms continue to keep performance data close to the chest, claiming that it is difficult to determine which shows drive underlying subscription revenue. This was a huge loss for the WGA that writers continue to suffer from to this day.
  • What the WGA should have done in 2007 is build in demands for data transparency, as transparency is the foundation of any reasonable negotiation. Where there are large power imbalances, an information imbalance makes it difficult to have a shared platform to negotiate from.
  • The current demands from the WGA focus on the ‘how’ of using AI in the production process – writers want full control over whether AI is used in the process. This is difficult starting ground, since studios do not yet know how they are going to use these new generative AI tools as part of production.
  • What the WGA should do, learning from the last strike, is build in transparency and notice from the beginning, even if we don't yet know how this will be enforceable or possible. Building a way to keep getting more information into the agreements will allow the WGA to continue to respond as things change over time, given how much we don't know yet.
  • The leverage the WGA has within the entertainment industry will be important for creating a model for how to handle power imbalances around the use of generative AI in the workplace. This model will be critical for other employees and employers, who are watching carefully how the WGA uses this moment.
  • The success of the WGA in this negotiation depends on how other parties in the entertainment industry, such as actors and directors, band together to support the writers. When thinking about how studios could be using copyrighted material that they own in concert with LLMs, we can see how there would be aligned incentives and concerns among creatives to ensure fair use in derivative work.
  • The studios are likely already thinking about how they will be able to leverage the generations of brilliant writing that exist within their catalogs to be able to create sequels, plays, and TV shows. If we are able to create technology that can bring the benefits of Generative AI while still respecting copyright law by paying for fair use, this will be a powerful tool for all media and entertainment companies looking to create ongoing value from their content libraries.

The Interview 

Keren: Yair – I am so excited to talk to you at this important moment for the entertainment industry. I want to discuss the Writers Guild strike and its implications for broader industries. This isn't the first Writers Guild strike; the last one, back in 2007, happened around the advent of streaming. How does technology, whether streaming or AI, play into the long-term career of writing? How does this technological moment intersect with the legal and contractual boundaries that have governed the entertainment industry? Why do you think the WGA and the studios are so far apart on this issue?

Yair: Sure. I think we can start with the fact that a lot of the questions related to AI and copyright are still TBD, with several cases currently working their way through the legal system. The current official position of the Copyright Office is that AI-generated work is not copyrightable. However, that depends on the submitting party acknowledging that it is AI-generated work. That reliance on self-disclosure is a real fundamental weakness, and we can really start there. Copyright law has to evolve in some meaningful manner.

One of the best analogies I've heard compares generative AI to photography. Prior to photography, portraiture was a meaningful way of making a living for a lot of people. Various types of painting still exist, and photography is an evolution of that – a different form, a separate kind of creative work, with a different set of copyright approaches. Generative AI is a quantum evolutionary tool like the introduction of the camera, and frankly, just like the camera, we are at the infancy of what this tool can really be used for. Now people have high-resolution cameras in their pockets, which nobody could have foreseen when the camera was first introduced.

We are going to evolve from generative AI that is not great to AI producing quality work in a much shorter period of time. The tools will become more refined and have larger-scale impact very quickly. 

Keren: Put yourself in the shoes of the Writers Guild. You see this seismic shift happening, you want to protect the career you have, and it feels like it is being attacked from all angles. I've been digging more into this strike, and there are a number of themes – streaming, contracts looking more like gig-economy work, AI. Adam Conover, who is one of the WGA negotiators, said “the points that really matter to writers are the points that protect writing as a career.” Given the analogy you just used, what do you think the WGA is most concerned about? Where do you think they will be successful, and where do you think they won't?

Yair: There is room to negotiate on the composition of writers' rooms and the role of writers in the production process. There is certainly room to negotiate on gig-economy work versus commitments. It is going to be really hard to force minimum room sizes and minimum runs of hiring, but it's not impossible. There are a lot of physical unions that have mandatory staffing requirements, so it is not impossible to have a run-of-series commitment for a certain number of writers on a show. It is not an irrational request. The challenge is that you're now in an era where there have been mini rooms for a meaningful period of time, so it feels like a clawback to the studios, and there is huge resistance to that. There are obviously incredible efficiencies in what they've done, and the price of those efficiencies has been borne by the writers.

Having a contractual engagement for a minimum number of writers on a show isn't an unreasonable request, but it's more about whether the WGA has enough leverage to force that. The minimum level of compensation associated with being a writer has been in place for a long period of time, and I don't think we're going to head toward a gig economy. Obviously non-signatory productions get made, but major talent isn't going to step in front of the camera on non-signatory productions. WGA leverage is not standalone – it's all of the talent that honors the work of the writer and is going to honor this strike. Brett Goldstein is on strike both as a writer and as Roy Kent, his role in Ted Lasso. Jason Sudeikis is not going to show up and shoot Ted Lasso and cross the picket line. The leverage they have is talent. While the WGA has been vulnerable for a long time, I think they're right to strike, because they are in danger of slipping into a gig economy where being a writer is no different than being an Uber driver, and the vulnerability is enormous. To the extent that they can protect against that, I think they're making a rational choice on behalf of their body of writers.

Now, I think it is going to be a long time before AI can generate a human level of depth in writing, where you get the emotional resonance of a true character study from AI. It's not just processing data – even though the algorithms will become more and more refined and the quality will improve, this is not chess. This is not about picking the right move – it will be a long time before it can reach the level of artistry of the writing we have, even if it's GPT-79.

Keren: I agree with you that it will take some time. But I don't think it's actually about the current state of the technology. I think this is more like the approach to streaming in 2007, where writers are trying to protect against a future state of technology as much as they can right now.

Yair: That's a good example, and I think they didn't do a good job back then. There was a lack of vision in terms of where the world was going from a streaming standpoint. It was new media, and there wasn't really an understanding of SVOD (streaming video on demand) as a primary driver and what that would mean for windowing, for work, and for broadcast and cable. The last writers' agreement did not position them well for the streaming era. A lot of people have written about this, but there is a real imbalance of information.

That is the core of the problem, and I don't know whether or not there is enough leverage to drive transparency and tie compensation to performance. On streaming platforms, a huge fundamental economic shift has taken place that I don't think people adequately envisioned. Everyone has moved on from the days of opening-weekend box office numbers and overnight ratings, when everyone had an understanding of performance and could be compensated based on it. Now you're in an era where nobody knows performance, and every major streaming platform quotes its own obscure, not-quite-real data. Even really established writers don't know how their product is doing and are not being compensated based on how it is doing. This transparency issue has to be addressed, and there is a huge incentive on the streaming platforms' side not to be transparent; they have fought hard to sustain that. Now every piece of talent, not just the Writers Guild, has an incentive to force the streaming platforms to share numbers, and if that happens, compensation can be tied to performance, which addresses a lot of the issues and is more equitable. Before, you knew what a film made over the weekend and could estimate what the talent made based on a negotiated definition. Now you don't see the revenues and there is no connection – streaming services say, ‘hey, your individual show didn't drive the subscription, so it's not actually tied on an individual-show basis.’ That is true sometimes, but you can't tell me that HBO didn't sustain subscriptions because of Game of Thrones.

Keren: Now, I want to take this analogy back to this moment. In the 2007 strike, some things clearly didn't work out well when it came to streaming, and there are now things the WGA is trying to rectify around data transparency. When it comes to AI, we have a new field and a new set of constraints. Ultimately, this is about which party controls when to use AI. The Twitter thread from the WGA about AI talks about AI not being used as source material to create MBA-covered writing, and about writers not being forced to adapt AI-generated writing. This is really about the idea of being handed something generated by a machine and being told that you have to adapt it. I went to a breakfast last week with journalists and writers; everyone there is testing out ChatGPT, and I'm sure screenwriters are doing the same. So what is the issue really about? Is it about control? Is it about copyright? What is the central issue?

Yair: It's certainly partially about control. And meaningfully about compensation. And continued employment. I frankly think that those suggestions on the WGA's part are naive. You are not going to be able to dictate the development process at a studio as part of the deal here. If you are saying that you won't do rewrites on AI-generated material, how are you even going to know? If I give you a script to rewrite and put someone's name on it, how are you going to know that it was AI-generated? I think it's a tough position to defend. I appreciate that writers are saying, ‘look, I don't want to be attached to a machine. I don't want to be a human extension of machine products.’ But I think that attempts to restrict usage of AI are going to be really hard to fight, because nobody knows enough about how they are going to end up using it. And from the studio side, it's impossible to agree to, because I can't constrain my future operations at that level when I don't even know how I'm going to use it. That is part of what happened in the last negotiation as it relates to new media – the studios said, ‘we don't know enough.’

There's an imbalance of data, knowledge, and transparency there. This problem clearly exists with AI usage on both sides, too. A writer could use AI to generate the work and put their name on it, or start with an AI-based draft and refine it. As one of my English professors said in college, “There is no writing. There is only rewriting.” That is the truth of most scripts already, so you are not going to be able to police writers' use of AI, because you don't have the mechanism to detect and enforce it. The approach has to be completely different: assume that everybody is using AI for everything, and create a fair process of compensation associated with that. That is not the approach the WGA is taking here in trying to legislate what you can and can't do.

Keren: Isn’t it interesting how similar this is to the data transparency issue from streaming? I hear you and what you are saying makes sense, but there is a part of me that wants to believe there is a set of demands that are reasonable from the WGA that can act as a model for other industries to follow. For example as a product leader, I need to make sure that product managers are using Generative AI responsibly - making sure we’re thinking about users and customers first. I see this moment as a potential model for other industries where there could be reasonable regulation or agreements. What would be a set of demands that could help us learn that might be more generalizable? 

Yair: I don't believe it's about regulation; I think it's about transparency. Each party should be required to notify the other when it uses AI. Maybe it's a watermarking equivalent – tying this back to copyright, there's a lot of digital watermarking tied to piracy prevention. Is it 100 percent effective? Absolutely not. Is it marginally effective? Yes, and I think that there has to be some attempt to track usage. Now that everyone will use different forms of AI on both sides of the process, we have to figure out what reasonable compensation and expectations are. If I ask you, at a monthly rate, to generate x number of scripts and you can be remarkably more productive because your first draft is AI-generated, there should be a different compensation structure for that.
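To make the notice idea concrete: none of this comes from the interview itself, but one minimal, hypothetical sketch of "notify the other party when AI is used" is a disclosure record that pairs a hash of the delivered draft with a declaration of the model and prompts used. The function names, field names, and model name below are all illustrative assumptions, not any real guild or studio mechanism.

```python
import hashlib


def make_ai_disclosure(script_text: str, model_name: str, prompts: list[str]) -> dict:
    """Build a simple provenance record for an AI-assisted draft.

    The record ties a SHA-256 hash of the delivered text to a declaration
    of which model and prompts were used, so a later reviewer can verify
    that the disclosure refers to this exact draft and not a swapped one.
    """
    return {
        "content_sha256": hashlib.sha256(script_text.encode("utf-8")).hexdigest(),
        "model": model_name,
        "prompts": prompts,
    }


def disclosure_matches(script_text: str, record: dict) -> bool:
    """Check that a disclosure record still refers to the given text."""
    digest = hashlib.sha256(script_text.encode("utf-8")).hexdigest()
    return digest == record["content_sha256"]


draft = "INT. WRITERS' ROOM - DAY\nA first pass at the opening scene."
record = make_ai_disclosure(draft, "hypothetical-model-v1", ["Write an opening scene."])

print(disclosure_matches(draft, record))             # the record matches this draft
print(disclosure_matches(draft + " edited", record))  # any change breaks the match
```

This is deliberately the weakest possible version of the idea: like the watermarking Yair mentions, it is nowhere near 100 percent effective (a party that never files a disclosure defeats it entirely), but it shows how notice could be attached to a specific deliverable rather than policed after the fact.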

There is not enough information here to regulate, and I don't think the studios are providing any information about how they currently use AI. I have friends who are developing feature films in rooms where one of the participants is ChatGPT – it isn't the primary writer, but they are treating it like another writer in the room that gives input, running things by it and refining off of that. That is a reasonable use case, and there are likely a lot of logical use cases for generative AI. Trying to constrain those use cases up front isn't a reasonable objective, because it's unenforceable.

Keren: What I am starting to understand is that, just as in 2007 with streaming, when the WGA should have pre-built in data transparency even without knowing how it would work, we are in the same place now. We don't know how transparency around AI is going to work, but if you can build in some of those protections, maybe that creates a baseline that helps protect compensation structures in the future.

Yair: In the legal system, when there are huge power imbalances, there are notice requirements. If it is a piece of AI-generated work that they are asking you to modify, it's not unreasonable to be told what AI model was used and what prompts were used to generate it. In a creative meeting they would say: here is the script, we'd like to make this character this way, add these settings, and punch it up here. Would that constrain the studio? Yes. And I'm going to go back to this: some level of information sharing is what is missing here. It is missing from the streaming economics and from the AI definitions. Whether or not the studios and streaming platforms are willing to cede that level of power is what is at issue here. It is about compensation, but it's really about power in the process. Attempting to regulate that power without knowing what you are regulating doesn't make a lot of sense.

Asking for information so I can understand the power in the process is a harder thing to say ‘hell no’ to. It is easier to say, ‘hell no, I'm not going to let you regulate my business. Who the hell do you think you are?’ It is harder to say that I won't tell you how AI is being used – that seems like a less reasonable position for the studios, and therefore less defensible. What the WGA should be doing is asking for more information so they can have a reasonable negotiation.

Keren: Taking this broader, I think this approach can be applicable to other industries, or even other parts of media and entertainment such as journalism. If we start by acknowledging that there is a big power imbalance and we need to put some constraints on it, that will make a big difference. A studio using AI has a very different level of impact than a single writer using AI. I think notice makes sense and can be applied to other industries.

Yair: You have to start by acknowledging that this is reality and we all have to deal with it. We need a set of tools to deal with it, as opposed to a set of regulations. Another analogy here is scientific publications versus other types of publications. You have to cite sources and reference the actual underlying materials to be able to publish. We might need to move to a world where we cite underlying sources more frequently, so that we don't end up in an era of mass plagiarism.

How do you know that the model wasn't trained on an illegally procured set of information? If I write new Harry Potter fan fiction with a model trained on the core novels, I will have better romance novels to post on the Leaky Cauldron than others. That is definitely a violation of copyright. If I give the entire Beatles catalog to an AI, it can generate a love song that will sound an awful lot like a Beatles song. The protection of the initial data that goes in is going to become really tricky, and it is clearly a copyright violation. This is obviously a broader copyright issue than what the WGA would take on. What was the algorithm trained on? How much of that information was in copyright violation?

Keren: This is definitely a much bigger issue and because LLMs are already trained on the entire corpus of the internet, it is already happening. Something's got to give. 

Yair: The WGA obviously doesn't have the leverage to change that. Take a super established writer: what if I bought an Aaron Sorkin script, and I control that copyright because I paid for it as a work for hire, and then I feed it into an LLM that I'm using? I didn't negotiate for that when I bought the script. But maybe in the future that will be part of the work-for-hire definition – that I'm allowed to feed my models – and therefore you are not only granting me the copyright for what you wrote, you're granting me the right to try to mimic your abilities. This isn't in the WGA's requests, but what goes into the definition of work for hire and reasonable use? It becomes really tricky. What is my right to do derivative work? This is a totally new form of derivative work that is certainly not contemplated in existing agreements. If you wrote a script, in most cases you would be obligated to write the sequel. But if I own the derivative works, you don't have the right to write the stage play or the TV series unless you negotiated that separately. So if I can feed it to my LLM, then I own a training model for every single writer I have ever hired.

Keren: This would be really powerful leverage for studios if that is how the model works in the future. 

Yair: Well, I can certainly see a use case for that happening right now, right? If I were working at a studio, I would be looking to do that right now. A studio already has access to generations of brilliant writing that it owns as the copyright holder. They could generate Rocky 9 using Sylvester Stallone's original script, which was nominated for best original screenplay.

Keren: This is interesting, and potentially a new model for how copyright might be approached for LLMs with respect to the use of material under copyright and derivative works. Before we wrap up this amazing discussion, is there anything else that you wanted to add? 

Yair: I don't believe we're going to end up in a future where there's no role for human refinement of work. But there's probably a lot of product out there that doesn't require that level of engagement, so it's an interesting dynamic. We're going to end up in some form of balanced dynamic across multiple professions, where people are interacting with AI and that will adversely impact some jobs – in the same way that tools have historically impacted whole categories of jobs, artisanal skills that were done away with by automation.

Keren: We did start this conversation talking about painting and photography. I think that is a good way to end because there are new professions that will be created. There will be ways that creatives will learn how to augment work via AI in the same way that traditional marketers had to learn how to use social media and a new set of winners emerged. I will end this by saying that I hope that we use this moment for something good, to create a model that both employees and employers can use to understand the impact of AI on their jobs. 

Yair: It is too soon to tell, as it's early days, but let's remember that right now this is harder on the striking writers than on the streaming platforms. There is a question of whether this starts to impact the platforms economically to the point where they feel like they must negotiate some of these things. It is not clear today. We survived the stretch of COVID when there was no new content for a long time. People didn't stop their Netflix subscriptions just because there wasn't much fresh content. The question here, as I said in the beginning, is how the other entities feel.

Keren: Yes, it is certainly different than the last strike in that respect. The way that users make decisions around content is different than it was due to the way that streaming platforms are priced and bundled. 

Yair: The question here is whether the other entities feel a comparable need around AI and support that. How much will the talent leverage we discussed before play into this? That is where incentives end up lining up – we have to see if directors and actors and other people in the industry align in their thinking around AI. If an actor's performance can be fed into AI and it can create some kind of digital replica, will I need to hire you as an actor for the show? We'll just have to see.

Keren: That is a great point. Yair, this has been such a pleasure as always to spend time in discussion with you. Thank you again for your time! 
