
Media Moments Report 2023: Artificial Intelligence

In this extract from Media Moments 2023, Chris Sutcliffe of Media Voices dissects the key events at the intersection of AI and media over the past twelve months. TL;DR: It was a busy year.

Generative AI has gone populist this year, with consumer-grade GenAI tools now widely available. While it offers new opportunities – editorial and commercial – for media companies, its widespread adoption has also led to ethical and practical dilemmas.

Generative AI went truly mainstream this year. While media companies have been using artificial intelligence to varying degrees for years, the widespread availability of generative AI tools has opened new avenues of opportunity for publishers. Whether it’s rewriting copy for social, using it for research, or even generating entire articles, publishers now have a wider variety of use-cases for AI than ever before.

But new concerns – legal and ethical – have arisen with those new opportunities. For one thing, the large language models (LLMs) that underpin a number of GenAI tools have been and are being trained on publishers’ and the public’s content – often without permission. That has left some publishers in the uncomfortable position of blocking the scraping of their sites for training data while also seeking to use LLM-trained tools themselves.

On November 9th, for example, News Corp and IAC called out GenAI companies for scraping their content for training and commercial purposes in their earnings call, making that contention official. In an echo of publishers’ relationship with search and social platforms, however, it seems that relationship is less contentious when money changes hands: The AP negotiated a deal with OpenAI in July, under the terms of which OpenAI is paying to licence part of AP’s text archive to train its models.

Some publishers are mitigating that issue by using AI tools trained solely on their own data. In some cases that is being used to develop chatbots and search tools that aid navigation, while in other cases it is evolving publishers’ existing use of AI to personalise recommendations and flexible paywalls.

Beyond training, media companies are having to reckon with the extent to which they can use AI in their consumer-facing products. It’s one thing to use it behind the scenes, but quite another to use it as a reporting tool. Issues of accuracy, trust, and differentiation have all reared their heads in the rush to use GenAI in reporting this year.

G/O Media, for example, was called out for its use of entirely AI-generated articles in July. The company published four stories across its portfolio of titles that contained inaccuracies and, crucially, were wholly AI-generated. Despite opprobrium from editors and criticism over the lack of transparency around the articles, Merrill Brown, G/O’s editorial director, told Vox “it is absolutely a thing we want to do more of.” That was widely – and accurately – seen as a euphemism for cutting more journalists.

Many publishers have attempted to get ahead of issues of accuracy and transparency by publishing ethics codes governing how they will use GenAI in reporting – a step the London School of Economics’ professor Charlie Beckett has said is vital to the long-term sustainability of GenAI in journalism.

The Guardian published a statement in June on what it would and would not use GenAI for. It noted that, in addition to only using it where appropriate, it would do so in a transparent way that notifies the reader that GenAI was used. In November The Telegraph issued an internal memo in which it stated that it aimed to be “permissive” for the use of AI for efficiency purposes, but that “the paper has forbidden staff from incorporating AI-generated text into copy except in limited circumstances”. 

Beyond news publishing, other media companies are looking to use AI to grow their audiences. In September Spotify announced it would be using an OpenAI tool to translate its suite of podcasts into other languages, with the promise that the personalities and intonations of its hosts would be maintained. 

Given the speed with which the technology has been developed and deployed, copyright law regarding GenAI outputs is still being settled. In August a US judge ruled that creative content made using generative AI does not enjoy copyright protection, though there will undoubtedly be further developments in that space.

For news publishers, perhaps the biggest unsettled issue is the extent to which GenAI will add more noise to the online ecosystem. As mis- and disinformation continue to run riot online, the increasing sophistication of GenAI images and video will exacerbate the scale of the problem. Countering that disinformation will require internal changes and the development of new skills within newsrooms.

However, it will also provide legitimate news publishers with a big opportunity. There is a chance here for publishers to reveal how the sausage is made when it comes to their journalism, providing provenance for how their content was created and reaping the associated trust. That could in turn lead to a marketing opportunity, in which ‘human-made journalism’ is a mark of quality worthy of paying for.

AI in the newsroom: Opportunities, regulation and risk

In this episode Media Voices take a big-picture look at AI, based on the recent Mx3 AI conference. In this first part Media Voices set the scene for AI and its use in publishing, as experts tell us how to prepare for internal and external changes to media businesses. Listen here, or search ‘Media Voices’ on your podcast app of choice.

Media Makers Meet – Mx3 is proud to be the media partner for Media Moments 2023, the report written by Media Voices which analyses and tears down the major media events of the past year. The report is free to download and is available here.