
AI Report: “Any media company banking on legal intervention to protect copyright might be disappointed”

In this excerpt from our latest report entitled “How Technology Especially AI & Web 3.0 Will Shape The Future of Media”, key presenters at FIPP Congress 2023 share their views on AI and Web 3.0. A key takeaway is that publishers hoping to protect their copyright and intellectual property might be disappointed. Here’s why…

One of the most important elements of FIPP Congress is the way that media companies share best practice in how they have begun to experiment with technology. And the same is true for AI. At Cascais in June, several publishing companies presented case studies of how ChatGPT or older forms of AI had already created new opportunities for them and in some instances transformed their business.

However, there are also threats – a key one being the legal conundrum around AI. In essence, should publishers be compensated for the scraping and use of their content to train large language models belonging to other companies (here’s looking at you, ChatGPT)? According to the News/Media Alliance, the U.S. trade body representing over 2,000 U.S. publishers, the answer is definitively yes.

Emerging technologies such as AI must respect publishers’ intellectual property (IP), brands, reader relationships, and investments made in creating quality journalistic and creative content.

Publishers must be fairly compensated for the tremendous value their content contributes to the development of generative AI technology. It’s a simple exchange of value.

Danielle Coffey, Vice President and General Counsel, News/Media Alliance

However, it might not be as straightforward as publishers hope – the legal path ahead could have more twists than they expect.

Several Years

At Congress 2023, Lexie Kirkconnell-Kawana, Chief Executive of Impress, gave her take on the legal background to AI and cautioned that it might take several years before governments and pan-national bodies like the EU are able to offer legal frameworks for the governance of AI.

She began, though, by stressing that it was probably right for legislators to move slowly at first.

“So, first of all, I’d like to caution against reactionary approaches. I think regulators and governments around the world are starting to see this issue through a hype lens. We need to just hit pause and understand what the issues are here.

“Now, creating new rules is always a challenge. It’s a long process. It requires lots of consultation, and sometimes we throw the baby out with the bath water when we are looking at how best to address the actual harms that have emerged as a result of whatever is new or novel. And I think harm is a really good starting point here. So how do we understand the functions of this technology and what harm they might be causing?

“If we look at generative AI specifically through that functional lens, what is it doing?

“One, it’s scraping huge amounts of content and information available from open sources. It runs machine learning to create new images, audio, visuals, text, etcetera. And then it’s storing compressed publications for training as well. Finally, that recombination process is where it’s deriving those new publications with a little bit of added help from the conditioning through text prompts.

“And so if we look at those functions – the scraping, the storing, the recombination and the conditioning – that’s where we should look at what particular harms are occurring. And there are various interests at play here, obviously, for publishers.

U.S. Civil Litigation

“So I wanna talk a little bit now about how lawyers and regulators are approaching this harm. So far in the US, civil litigation has naturally begun on the issues of sourcing, scraping and the storage of data. The argument is that this tech violates copyright licences – particularly attribution, storage and use – that it violates DMCA requirements, privacy law and passing-off requirements, and that all of these violations are unjustly enriching the tech companies that have benefitted from them.

But it’s really difficult to predict how the courts are going to decide on these issues. Also, these actions won’t come to fruition for many months or even years. And if litigation comes to court, judgements may not radically change either A, how the businesses operate, or B, how the tech functions.

Lexie Kirkconnell-Kawana, Chief Executive, Impress

“And it’s also worth noting that despite these claims being launched, they may not be successful for the claimants.

“So any media company banking on early legal intervention to protect their copyright might be disappointed. Ultimately, companies will have to adopt a wait-and-see approach, and may be best served by working together to engage with tech companies rather than hoping for legal action that forces them to offer attribution.”

To watch Lexie’s presentation on the legal and ethical conundrums surrounding AI, please access it here.

To download the latest Mx3 Leadership report, AI, Technology and the Media – updates from the FIPP World Media Congress 2023, please click here.