Lexie Kirkconnell-Kawana is the Chief Executive of British press regulator Impress whose membership consists of progressive independent publishers including The Canary, The Scottish Beacon, QueerAF, and others. In this exclusive interview, Lexie outlines the AI dilemma facing publishers in light of NYT’s lawsuit against OpenAI. TL;DR: It’s more Eton Mess than crème brûlée.
Under the swaying palm trees of Cascais at last year’s FIPP World Media Congress, Impress’s Chief Executive, Lexie Kirkconnell-Kawana, gave an impassioned speech arguing that publishers who think High Court judges will halt the AI juggernaut barreling towards them could be misguided. And then some.
“If litigation comes to court, judgements may not radically change either how the businesses operate, or how the tech functions. There may be some credible defences that these companies can deploy. So any media company banking on early legal intervention to protect their copyright might be disappointed.”
As a trained lawyer, Lexie’s words carry weight, but her advice clearly fell on deaf ears among the doyens of 620 Eighth Avenue, Manhattan – armed, no doubt, with a formidable war chest, The New York Times duly issued a lawsuit against OpenAI in December. The claim? OpenAI used millions of articles from The New York Times to train chatbots that now compete with it.
For publishers of a lower tier than The New York Times – namely, most of us – the question rests on whether the lawsuit stands a cat’s chance in hell of succeeding, or whether it’s a fig leaf for legions of NYT shareholders: a sign that the company is doing something, anything, to arrest the tidal wave of uncertainty washing over it.
We reached out to Lexie – in advance of her forthcoming appearance at Mx3 Barcelona (March 12/13th, a few tickets left) – to get her latest thoughts on artificial intelligence, and we started by asking her about the implications of NYT’s lawsuit…
“We already have some early indications from the US courts on how they intend to deal with claims from creative industries against AI companies on the grounds of copyright.” The bottom line? “U.S. district court rulings have consistently shown that claims based on derivative works, where the outputs are not substantially similar to the original, are unlikely to succeed.”
As it stands, creators don’t have recourse in copyright law where their publications are used to train these systems (with the companies claiming fair use).
– Lexie Kirkconnell-Kawana, Chief Executive, Impress
She adds, “Now the NYT lawsuit is slightly different in that they argue they have detected ‘verbatim excerpts’ which goes towards demonstrating substantially similar outputs which that court might be sympathetic to.
“However, OpenAI argues these are uncommon bugs addressable in development, meaning a court’s finding of a technical copyright breach here may not resolve broader issues around the use of freely available content in AI training.”
Paywalls as a survival strategy
Whilst Lexie’s words might be as soothing as a choir of screeching brakes, her next thoughts are a crash-test dummy slamming into a brick wall, “If current legal trends persist, and without formal or self-regulation, the outcome may be that all content creators – including media and news publishers – will need to safeguard their work behind paywalls as well as negotiate licensing agreements with AI firms.”
It will basically signal the end of free (at the point of use) content.
– Lexie Kirkconnell-Kawana, Chief Executive, Impress
If that wasn’t enough, Lexie continues, “I can see a whole new market emerge of original content being created explicitly (and exclusively) for AI companies themselves, based on their own use needs, in the way media had to pivot with the introduction of web browsing, and then social.”
This, of course, raises the question of whether media companies can simply ‘opt out’ through regulatory vehicles. Lexie exhibits world-weary cynicism at the mere thought, “I get ‘regulatory deja vu’ when technologists say ‘just opt out’; it was a standard PR line in the 2010s to hear social media companies say ‘just opt out’ while strategically amassing global user bases in closed environments that made participation unavoidable – companies that are now facing regulatory interventions the world over.
My view is that robust regulation is now one of the only ways to redress the existing model or prevent further degradation of news/media product.
– Lexie Kirkconnell-Kawana, Chief Executive, Impress
Cooperation, and more cooperation…
At Cascais, Lexie held the view that cooperation rather than litigation is the optimal route forward for publishers and AI tech vendors alike. It’s an approach favoured by German media giant Axel Springer, which earlier in December announced a ‘first of its kind’ deal with OpenAI giving it permission to use published content to train its LLMs.
However, Lexie feels that these larger publishers wield undue influence, “What Axel Springer and NYT do in the months and years ahead will determine what every small local news publisher around the world will be able to do. Unfortunately, I see the perverse aspects of media incumbency and competition preventing cooperation and leadership at a wider scale.”
She adds, “It might be utopian, but there is still an opportunity for a ‘Little Ships of Dunkirk’ strategy – where news organisations with content and audiences deemed too small or irrelevant come together and are given a seat at the bargaining table – where they can work with larger media and AI companies to enable a better information economy for all.”
“The 200+ news brands that are regulated by Impress are incredibly unique with different business structures, strategies, audiences and worldviews, and individually may not seem relevant to the AI sector. Yet bound together by a collective commitment to ethics, they are vital stakeholders. It’s about the larger players recognising this collective impact and making space for and supporting it.
I’d love to see AI companies come to the table and do strategic work with the industry so we are more certain about the road ahead.
– Lexie Kirkconnell-Kawana, Chief Executive, Impress
With her membership firmly in mind, Lexie remains cautious about the immediate future, “It’s been difficult to provide advice when it’s not clear how these issues are going to work themselves through. I know publishers are concerned about this, and we are involved with other industry groups looking at what might be the best approaches to AI.”
“Top-of-mind issues now are about use. Most newsrooms are using AI in some way, whether it’s administrative, systemic or generative. So a lot of questions surround what tools are available, how we should be overseeing their use, and the ethical implications of the use of generated images, etc.”
However, she concludes with a parting shot aimed at reminding tech companies of the importance of media to cultural and societal well-being, “Media and news, particularly original reporting, serve a vital democratic and public interest function; we need a pluralistic, diverse sector with providers of all shapes and sizes delivering high-quality, ethical content to their audiences to help make sense of the world. Reconciling this with a functioning business model is the number one existential challenge for the industry and has occupied our imaginations for the past 25 years.”
Ominously, she adds, “To date, I haven’t seen any of these AI companies show competency in the purpose and function of media (particularly its democratic value), nor in the impact their models have on it and the public welfare implications, nor have they put forward solutions that will prevent us careering towards a closed, monopolistic information system.”
Plus ça change…
Hear Lexie speak at Mx3 Barcelona on March 12th/13th. Featuring a world-class line-up of international speakers, the event focuses 100% on innovation in the media sector. Tickets can be purchased here.