On 7 December, we and Media Voices will be hosting the Mx3 AI gathering in Hoxton, London. A key event partner is FT Strategies, who will be speaking about implementing AI in media operations. In this pre-event interview, Aliya Itzkowitz, Manager, and Sam Gould, Consultant, discuss the AI landscape further. P.S. A few tickets for Hoxton still remain.
FT Strategies, the consultancy from the Financial Times, works with organisations globally to understand and respond to emerging trends – including Artificial Intelligence. In this interview with Media Makers Meet (Mx3), FT Strategies shares the methodologies used for understanding the opportunities and risks presented by this fast-moving technology and the practical ways that publishers (including the Financial Times) are finding success through applying AI.
What are the key current trends you are seeing with AI in Media?
Aliya: The big change this year has been the use of Generative AI in the journalistic process. The capability of the technology caught many people off guard, and now we are seeing some newsrooms embrace it as part of their process – whether for smaller tasks such as headline generation or even the creation of automated news reports on topics such as sports or stock-market performance.
Sam: We’re currently somewhere near the peak of the Generative AI hype cycle and, as with hype cycles around previous forms of AI or in other industries, the conversation started around education, use cases and business strategy. But this time we’re seeing it move more quickly into implementation and experiments, possibly because of the technically accessible nature of some of these new tools.
To date, which applications of AI have inspired you the most within media?
Aliya: I am really fascinated by the next frontier of automatic language translation. I feel this has the potential to be truly disruptive in the context of news. AI makes it possible for a publisher that produces content in a smaller world language, such as Hungarian, to reach audiences beyond its home market by directly producing content in other languages. On the other hand, this could also create a world in which all publishers compete globally for an audience, where an English TV news anchor can have their reporting translated in real time into other leading world languages such as Spanish, Mandarin and Hindi.
Sam: Like Aliya, the ‘language’ part of ‘large language models’ is the bit which fascinates me. I think the reason that Generative AI exploded on the media scene is primarily due to the uncanny levels of language ‘understanding’ that these models are now exhibiting. Some of the most practical and powerful ways of deploying Generative AI are as a ‘natural language wrapper’ around more established (and controllable) data and analytics technologies – for example using AI so that a data scientist can talk to their database, or so that a reader can ask questions directly to the content they are consuming. But we’re also starting to see the emergence of multimodal AI that seamlessly converts between text and other formats, which is a relatively unexplored area especially when viewed as an ‘image or video interpretation tool’ rather than just for content generation.
What are the chief concerns/fears media groups have with AI in general?
Sam: We’re seeing fears of ‘being left behind’ – organisations know that there is commercial value to be gained from AI, but it can be difficult to know where to start or how to move quickly enough. That’s part of the reason why we love helping media companies to cut through the noise and get started on their AI journeys. For media organisations, doing AI responsibly should be a central tenet. For example, reliability of information is especially important given that this is the currency of most news organisations. The magnification of bias and the spread of unreliable information could be turbocharged by AI algorithms. And IP ownership needs to be carefully managed: if users increasingly go to Gen AI models and alternative products to answer their questions, where does that leave the publisher?
What are the key challenges media groups face when trying to implement AI across their operations?
Aliya: Trying to create systems and approval processes without stifling innovation. Also, the technology itself is changing so rapidly that it is difficult to keep pace. As with previous forms of digital transformation, innovation can sometimes occur in pockets and teams might not necessarily get to hear about it due to organisational silos – this can lead to duplication of work or teams not being kept in the loop about potential changes to their workflows.
A general consensus is that AI won’t replace media jobs, but will augment existing roles. What’s your view?
Aliya: I saw someone say recently, “Your job will not be replaced by AI. Your job will be replaced by someone who understands AI.” Using AI creates a whole new set of challenges that we need people to solve. That said, a concern I have is the education process for people who are new in their careers. Many of the ‘easier’ tasks that are typically now being given to Gen AI would have been given to an intern or trainee in the past. So the question for me is, how will we make sure that junior talent is able to come up the learning curve in an AI world?
Sam: It’s difficult to predict what will happen to jobs and corporate structures, but what I can say is that the research area of ‘autonomous agents’ (tools which can receive a complex task and attempt to break it down and work through the smaller chunks) is a very exciting field! An example of this would be a tool like ChatDev which is given a coding task, like “make me a website”, and works through it step-by-step.
Publishers like The Washington Post and Trusted Media Brands (Reader’s Digest etc) have set up task forces to harness AI and better understand its value. What advice would you give to publishers, both large and small, about this – should they employ a new Director (Head of AI) or simply set up a task force? Or is there another way?
Sam: It’s great to have experimental teams embedded at the department level – this is often where the best ideas come from – as well as a higher-level board that is setting a clear vision and strategy. But I wouldn’t necessarily reinvent the wheel for AI – lots of publishers will already have teams who are working with, or at least thinking about, AI! Generative AI (and future, unseen iterations of AI) presents new opportunities and risks, so it’s important to have people responsible for looking at these, such as a cross-functional discussion and experimentation group, and a dedicated risk and governance panel.
We’ve seen a lot of controversy with AI-generated content – such as CNET earlier this year or this tweet from a staffer at Gizmodo. However, AI for real estate, financial or factual sports stories seems to be an appropriate use of the technology. What are your views on how AI should or could be used within editorial content?
Aliya: You’re correct that some types of stories, typically those following a relatively structured template, lend themselves more to AI and automation. But that’s been happening for years already. I used to be a journalist so I’m passionate about this question! I tend to agree with the FT’s editor-in-chief that quality journalism will still be created by humans.
I think AI can be used to augment both the creative process and the experience of the reader, but it’s essential for a human to remain involved in the process. What I mean by that is that if you want to truly add something to the news agenda, it usually comes in the form of a scoop or some investigative work. Now, there are also AI tools that can help you with that! For example, I used to work at Dataminr, a tool that helps journalists discover breaking news faster. But it still comes back to the same point: reporting on what happened is no longer enough. You need a unique angle for a story, and that is usually something a human needs to come up with.
Beyond high-quality reporting, publishers should figure out what brings readers to their platforms, aim to meet those user needs, and build direct relationships with readers – for example by using first-party data and the individual brands of their journalists.
The News/Media Alliance – representing over 2,000 publishers – and 26 other trade bodies, have issued a set of global AI principles which amongst other things “outline the need for GAI developers to obtain explicit permission for use of publishers’ intellectual property, and publishers should have the right to negotiate for fair compensation for use of their IP by these developers.” What are your thoughts on these legislative initiatives?
Sam: Logically, it’s probably a good thing for the industry that trade bodies are coordinating on the risks and opportunities around AI [Ed: We will also have a discussion about this at Mx3 AI]. These global AI principles are a good articulation of the AI ethics that had become relatively standardised in other industries before the current focus on Generative AI in media, and they are important for doing AI responsibly. They also now include the additional IP considerations that have become relevant to our industry.
Most of the (ongoing) discussions on IP do not seem to propose anything outside of the norm – copyright law is already a thing and, as the FT’s Chief Commercial Officer Jon Slade says, if our content is used then there should be a payment. It’s worth noting that the inclusion of quality journalism in these models could reduce the risk of misinformation, but this doesn’t mean that there shouldn’t be compensation or licensing.
You can hear Aliya and Sam speak next month at our Mx3 AI one-day event in Hoxton, London. Only a few tickets still remain.