
How can journalists help solve the global problem of deepfakes?

The technology of deepfakes has come a long way since that groundbreaking video of former president Barack Obama mouthing words he never actually uttered first did the rounds.

That video signaled a revolution in the entertainment industry, and opened up countless possibilities of media synthesis to create hyper-realistic characters. But what happens when this branch of synthetic media is abused?

Clearly it’s the potential of deepfakes that proves troubling. The ease with which it now seems possible to ruin reputations, create fake news or hoaxes, and even commit financial fraud is deeply worrying – particularly as the genie’s now out of that proverbial technological bottle. And, while this AI technology is still in its early stages of development, there is more than enough cause for concern considering its power to sway political elections and pollute public discourse.

As Voltaire (and more recently, Spider-Man’s Uncle Ben) once said: with great power comes great responsibility. The only question is: are deepfakes an occupational hazard for us journalists, and if so, what can we do to handle this peculiar influx of unconfirmed information?

When seeing and hearing is no longer believing

First off: what the heck is a deepfake, anyway?

Well, the term originated on Reddit sometime around the end of 2017, when it was popularized by face-swapping tech used to insert Nicolas Cage into various random movies (and no, we don’t mean Face/Off).

Content fabrication and decontextualization have been around on social media for quite some time now. But the issue with these AI-based videos – which can literally put words in people’s mouths – is that they make it harder and harder for readers to separate fact from fiction.

This creates a problem which isn’t necessarily technical in nature, but rather one that pertains to trust in information and journalism. Without that trust (which is already on shaky ground), deepfakes might just destabilize the traditional balance of evidence and truth – a ship that many believe has already sailed.

The issue became even less straightforward when Data &amp; Society’s Britt Paris and Joan Donovan argued that even low-tech variants – also known as ‘cheapfakes’ – can be just as damaging as deepfakes. That’s because today’s online audiences have a proclivity to share anything that supports their worldviews or beliefs, regardless of its authenticity (or lack thereof).

So the crux of the issue is this: the more integrated deepfakes become in our information ecosystem, the more time and energy newsrooms are going to have to find to deal with the fallout.

The main challenge in dealing with fabricated content in a fast-paced news ecosystem is that the costs of doing “conclusive forensics” are going to increase. To top it all off: calling a deepfake by its name – a deepfake – may be interpreted (as Paris and Donovan argue) as a highly political act, meaning that newsrooms will require more time than usual to get their facts together before publishing their findings.

Viralized hype and disinformation are a problem that even established newsrooms and fact-checkers cannot always combat effectively, especially during the first few critical hours after a deepfake video is released. Moreover, the world is teeming with news websites with less rigorous standards that publish half-formed assessments solely for the sake of being among the first to report. The consequence of this practice is that they wrest control of the narrative from those who might be better equipped to manage it responsibly, thus further perturbing public opinion.

That politically-charged back-and-forth dynamic between what is real and what isn’t will most likely sap the already dwindling credibility of newsrooms – something which may mislead even the canniest of audiences.

So, considering the potential chaos that could ensue from this tech, is there anything to be done? Is anything being done?

Fortunately, yes.

Social media giants vow to ban impersonation content, including deepfakes


Remember when we told you that the term ‘deepfakes’ originated on Reddit? Ironically enough, this social network is now among the first to ban them.

In fact, according to an article on Digital Trends, Reddit’s latest policy vows to ban “everything from deepfakes to anyone making false claims about their identities” (though there are exceptions for parody and satire – Onion readers, take a breath). Considering the 2020 elections are close at hand, this is surely reassuring news: this kind of tech has the potential to be used to nefarious and alarming ends, and the ease with which one could impersonate key figures in politics and media during this sensitive time is worrying.

Facebook Inc. also plans to ban some deepfake videos, according to a Fortune article. After the user data privacy scandal, they are keen to demonstrate that they’re taking the issue seriously – and again with the Presidential election looming, there’s a timeframe to be aware of.

Monika Bickert, Facebook’s global vice president of policy management, explained the situation in a blog post: “While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.”

There is another apparent reason why Facebook has devoted its time and resources to handling deepfakes: two high-profile examples went viral on its network. One is an altered video of House Speaker Nancy Pelosi, while the other depicts Mark Zuckerberg as he brags about his plan to rule the world. The latter hasn’t been removed because it didn’t violate any community guidelines, but the words coming out of Zuck’s mouth raised many eyebrows nonetheless.

Nico Fischbach, global chief technology officer at cybersecurity company Forcepoint, described Facebook’s new policy as a necessary move that may light the way for other social platforms too. “In general, people link it to the US election, but at the end of the day this [deepfakes] is going to be used a lot for social engineering,” says Fischbach. “It’s not a US problem. It’s a global problem.”

So what can we journalists do to help solve this global problem?

Fulfilling the role of counterweight to disinformation

According to a Nieman Lab article that delves deep into the nooks and crannies of deepfakes and their societal implications, journalists will have a critical role in defusing these potential disinformation skirmishes. They suggest three things to start with (here with our own little commentaries):

1. Accelerating the ability to come to accurate conclusions

The development of synthetic or tampered media has reached a point where newsrooms will require a technical cadre that’s skilled in recognizing and confronting any emerging deepfake tech. Even though some videos may be amateurish and easy to disprove, the rapid advancements in this technology are increasingly making the detective job (and we mean this literally) seem more like a game of cat and mouse.

A good start would be to collaborate with the research community that is already well-versed in detecting deepfakes as well as journalists who are no strangers to confronting this type of content out in the open. Combining the strengths of these profiles and providing access to the latest tools for fact-checking might give newsrooms the upper hand, especially when it comes to battling more sophisticated forms of disinformation.
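To give a flavor of what such fact-checking tooling looks like under the hood, here is a minimal sketch of an “average hash”, one of the simplest perceptual-hashing techniques used to check whether a suspect frame visually matches a known original. This is an illustration, not any newsroom’s actual pipeline, and it assumes frames have already been extracted and downscaled elsewhere (e.g. with ffmpeg) into small grayscale pixel matrices:

```python
def average_hash(pixels):
    """Hash a grayscale frame: one bit per pixel, 1 if brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests two frames are near-identical."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 2x2 frames standing in for real downscaled video frames
original = [[10, 200], [30, 180]]
tampered = [[10, 200], [30, 20]]   # one region has been altered

d = hamming_distance(average_hash(original), average_hash(tampered))
print(d)  # a nonzero distance (here 1) flags the altered region
```

In practice, fact-checkers rely on far more robust variants of this idea (and on dedicated deepfake detectors), since perceptual hashes only catch re-used or lightly edited footage, but the principle – reduce media to a compact fingerprint and compare – underpins many of the tools mentioned above.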

2. Make the ongoing documentation public

Considering that reports about deepfakes can take days, weeks, or even months to reach a full conclusion, it seems wise for newsrooms to start reporting on their investigation process too (and not just the end result). By providing findings and relevant information at each step of the verification journey, they can both keep their readers up to speed and demonstrate the solid journalistic processes that ultimately build trust.

This “transparency into forensic decision-making” is something we’ve talked about before on these pages and it’s a practice that is particularly useful in cases of high-profile or complex fakes where every minute spent is often one snowflake away from starting an unnecessary avalanche. 

3. Finding the contextual clues

Accusing someone of spreading disinformation or ruining someone’s reputation requires solid proof – and that starts with searching for what those good folks at Harvard term “contextual clues”. Nieman Lab suggests asking the following questions:

  • How did the deepfake video appear online?
  • Did that content appear as part of a coordinated campaign or independently?
  • When and where were the events depicted by the media purported to have taken place?
  • Are there other media corroborating the video or image in question?

Even though deepfakes can be found lurking in virtually every corner of the internet, in most cases, social media platforms are the number one place to start looking for answers. Just remember, the first step in all of this is to become aware of the nature of this technology – aware enough to know what types of content it can and cannot create.

Should you deepfake it until you make it?

Last October, the US Senate passed a bill called the Deepfake Report Act of 2019, which aims to combat forged media through comprehensive research and technological assessments. In China, publishing deepfakes without disclosure has already been made a criminal offense – and many countries are considering similar legal countermeasures.

It goes without saying that such regulatory moves reflect the seriousness of the situation, but will they curb the deepfake threat? Far from it. In fact, deepfakes are expected to proliferate in 2020, and the societal implications are tremendous – especially with the elections on the horizon.

There’s not much to be done about that. The technology is out there, developing further with each passing day. In a year with such high-profile political races, news of deepfakes is only going to become more commonplace. The danger of deepfakes is that they pollute the information pool and weaponize untruths. The hope is that as the reading public becomes aware of the scope of the issue, they will become better equipped to read it critically.

As ever, our job is not just to reveal the story: it is to educate on the process of that discovery too. Deepfakes might become more commonplace, but at least – if we’re doing our jobs correctly – people will understand why that’s a problem.

by Em Kuntze

Republished with kind permission of Content Insights, the next generation content analytics solution that translates complex editorial data into actionable insights.