
How does the BBC use AI?

As a publicly funded national media organisation, the British Broadcasting Corporation (BBC) bears more responsibility than most for ensuring that its use of artificial intelligence errs on the side of caution and upholds ethical, moral and accurate reporting standards. This feature was originally published on the Bright Sites newsletter and is re-published with kind permission.

Laura Ellis is Head of Technology Forecasting at the BBC. She has worked on news teams in radio, TV, and online and established the BBC’s first end-to-end digital newsroom. In her current role she focuses on ensuring the BBC is best placed to take advantage of emerging technology.

How does the BBC use AI?

We’ve been using AI to do a number of things for years, such as translation, transcription, and object recognition via computer vision, exactly the sorts of things you might expect from a broadcaster. Take Springwatch, part of our series of ‘Watches’ about nature: an AI trained on the sort of wildlife to expect would spot a bird or an animal, saving producers from having to go through hours of footage to find where the duck comes into shot. That’s really clever, because it can also distinguish between species and between individual animals as well.
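
As a rough illustration of that kind of footage-scanning workflow, here is a minimal Python sketch, assuming a general-purpose pretrained detector from torchvision whose COCO label set happens to include a “bird” class. A production system like the Springwatch one would use a model trained on the specific wildlife expected, as described above; the video path, sampling rate, and threshold here are hypothetical.

```python
# Illustrative only: scan sampled video frames with a general-purpose
# pretrained detector and log timestamps where a bird appears.
import cv2    # pip install opencv-python
import torch  # pip install torch torchvision
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]  # COCO labels, which include "bird"

def bird_timestamps(video_path, every_n=25, threshold=0.8):
    """Yield timestamps (in seconds) of sampled frames containing a bird."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:  # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                pred = model([tensor])[0]
            found = any(
                labels[int(lab)] == "bird" and float(score) >= threshold
                for lab, score in zip(pred["labels"], pred["scores"])
            )
            if found:
                yield frame_idx / fps
        frame_idx += 1
    cap.release()

# Hypothetical usage:
# for t in bird_timestamps("springwatch_rushes.mp4"):
#     print(f"bird at {t:.1f}s")
```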

And then, of course, when generative AI arrived, we started to look at how that might change the landscape.

How did generative AI change the landscape?

So the first thing we realised was that there were quite a lot of additional risks. There are three key problems we have to solve, I think, before moving forward comfortably with generative AI.

The first is that because it’s much more democratised, everybody can suddenly use it; you can speak to machines in your own language. That means there is a danger of people putting company data into a system, and if you don’t have the right safeguards, that data can go anywhere. You don’t know where it’s going to wash up, so you have to be very careful.

The second worry was that we don’t know how to make these models work without ‘hallucinating’, or coming up with things that are not true. They’ve been designed to be pleasing and to give you an answer. There isn’t the reasoning in there that says, ‘Oh, I don’t know that; I’ll say I don’t know it.’ They can just come up with things which, as we know, can be harmful, damaging, and unhelpful, so that’s a real issue.

And then there’s the third issue, which is common to the whole of the media industry and beyond: to what extent are we comfortable, whether legally or ethically, with the fact that these models have been trained on vast amounts of data, much of it copyrighted?

How difficult is it to police an AI policy at an organisation as large as the BBC?

We’re used to having editorial guidelines at the BBC that everyone is expected to adhere to. So we do have a history of knowing that there are specific things we have agreed editorially that we will and won’t do.

Now we have moved into a new era where we’re looking at guidelines with a lot more references to technology in them. So in a way it’s been quite interesting adding to and updating those. I think it is also really important to supplement them with human training; I did a course this morning as part of the Co-Pilot series that we’re running, for example.

It’s also really important, I think, for people to be able to ask questions, to say, “Well, hang on a minute, how does that work? Well, how would I access that? Are you sure that we’ve made the right decision on that?”

We also disclose to audiences what we’ve done. It’s very important to say to our audiences that we have used AI in the creation of this particular piece of content and to convey that information in a way that’s not intrusive but is instructive and useful. That’s a really big challenge.

Do you work with other media organisations when it comes to AI?

It used to be the case that you’d have a lot of rivalry between organisations, and you probably wouldn’t share much about what you were doing. But I think generative AI hit so hard and fast that we’ve wanted to share findings, and that’s been a really positive outcome. There are some people doing really impressive things.

It’s similar to the way that we’ve collaborated over tackling disinformation. So again, that problem is too big to be something that you use as a distinguisher in the competitive space.

What advice would you give to publishers just starting down the road of AI?

I think the first thing we need to do is to listen. It’s important to have conversations with colleagues and maintain the human element in journalism. The people that are going to use this need to be able to ask questions. They need to feed back their concerns. We have something called the Blue Room where we do a lot of sessions on this. So people come in and they tell us what they think. It’s a great feedback loop.

If generative AI can help, then it should be able to improve and enhance, rather than take anything away. It should be able to give us new opportunities. For me, the wonder of journalism is a human looking at a situation and telling another human or group of humans about what they have discovered and how they have reacted to it.

That’s a very, very precious thing, and if we lose that, we have lost the industry, really. We need to keep that really at the forefront of our imaginations, in our minds, as we do this.

Is there a world where journalism, publishing, and AI all live together harmoniously?

There should be. One of the things that AI could do is add value because it can change modes, so it can take text into voice and voice into text.

You might not have any spare human capacity to do that, and it gives added value to content that we’ve already paid for and already accumulated. So I think allowing AI to do those sorts of things adds value, something new to the offering that we have for our audiences, and that feels great, right?
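
To make the “changing modes” point concrete, here is a minimal voice-to-text sketch using the open-source Whisper speech-recognition model. This is an illustration only, not the BBC’s tooling, and the archive file path is a hypothetical example.

```python
# A minimal voice-to-text sketch using the open-source Whisper model.
# Illustration only; "archive/springwatch_ep01.mp3" is a hypothetical path.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")   # small model, runs on CPU
result = model.transcribe("archive/springwatch_ep01.mp3")

print(result["text"])                # the full transcript
for seg in result["segments"]:       # timestamped segments, e.g. for subtitles
    print(f"[{seg['start']:7.2f}s] {seg['text'].strip()}")
```

Going the other way, text into voice, would be a text-to-speech model applied to the same archive.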

There are lots of positives from that point of view. Where it becomes more difficult is the jobs that are perceived to be dull. They might be repetitive—translation, transcription. What are we losing if we let AI do all of that?

There’s an awful lot of material we create that we would just never translate, and just never transcribe, because there isn’t enough human capacity to do it. But let’s not lose the beauty and the subtlety of getting human translations for things that might be particularly sensitive, or beautiful, or precious.

And let’s make sure we don’t wipe out a very, very subtle and high-end human skill, which is understanding another language. And again, on the audience side, people say, “Oh, I know, we can write a story once and then have AI write five versions: one for an audience with English as a second language, one for an audience that really doesn’t like words and prefers bullet points and pictures, or whatever.” Absolutely, you can do that, and it might be useful, but I guess I’d ask two questions. One is: do you then lose touch with the audience that you’re not writing for anymore?

If we look at our wonderful journalists in the BBC, they do things like Newsround and Newsbeat. They’re writing for specific audiences. We need them to know what the language is and what the idioms are and how that audience responds to being told stories in a different way. Language changes and dates really quickly.

Secondly, if you don’t understand that audience, is that a problem for you long term as an organisation? I don’t think we should use AI as a proxy to communicate with people that we could otherwise make an effort to communicate with better.
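
For reference, the “write once, have AI write five versions” workflow questioned above might look like the following sketch, using the OpenAI chat API. The model name, prompt, and file name are illustrative assumptions, not a BBC configuration, and the editorial caveats raised here still apply.

```python
# Hypothetical sketch of the "write once, generate versions" workflow,
# using the OpenAI chat API. Model, prompt, and file name are illustrative.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_for(story: str, audience: str) -> str:
    """Ask a model to rewrite one story for a specific audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": f"Rewrite the following news story for {audience}. "
                           "Keep every fact unchanged.",
            },
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content

story = open("story.txt").read()  # hypothetical source story
for audience in (
    "readers with English as a second language",
    "readers who prefer short bullet points and plain words",
):
    print(rewrite_for(story, audience))
```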

What in your journalistic background do you find yourself drawing on a lot when it comes to AI?

If you’ve worked in a newsroom like the ones here, you have an absolute passion for fact, and for truth, and for accuracy. I think we are, as a society, losing our grip on facts and truth, so going back to journalism every so often reminds me that facts are not only important, and not only the absolute currency of journalism, but also a basic human right. Journalism shouldn’t have to be fighting AI over that; we should be enlisting it in the cause of making journalism better.

———

Republished courtesy of Bright Sites, creator of FLOW, a software-as-a-service digital publishing CMS that combines a data-driven approach with machine learning, AI, e-commerce, subscriptions and translation. FLOW provides multi-location newsroom workflows, multilingual content creation and AI-driven personalisation of the user experience, and is used by a range of global and local publishers. Clients include the Irish Independent, The Independent and the London Evening Standard.