Fake news is a major issue. Not only are publishers and companies like Facebook and Google being forced to rethink what their role is in the ever-present spread of misleading ‘clickbait’, but brands are also getting caught in the crossfire. As a result, recent research found that more than half (55%) of programmatic decision-makers have put pressure on their tech partners to proactively screen for fake news so that their ads do not appear alongside this type of content.
“Research shows that fake news spreads further and faster than true and factual content,” says Anant Joshi, chief revenue officer at Factmata. “This presents a real problem for brands, because although the percentage of sites containing fake or misleading news may be small, there is a much higher chance of this type of content going viral. And consumers are extremely vocal when it comes to expressing their dislike of brands whose ads appear against fake news articles. The Stop Funding Hate campaign on Twitter has more than 92,000 followers and calls out specific examples of ads appearing alongside inappropriate content. This is a real headache for brands as the reputational damage can be significant and long-lasting.”
What is fake news and how did we get here?
While the term ‘fake news’ has gained prominence since 2016, it isn’t necessarily a new phenomenon. The issues of bias, inaccuracy and propaganda have been around for years, but the US election and the Brexit referendum brought them into sharp focus, with both events demonstrating how strategically placed, biased or plainly wrong stories might have influenced how people voted.
“Facebook has defined three different types of fake news,” explains Jamie Bartlett, author, journalist and director of the Centre for the Analysis of Social Media. “You’ve got people who are essentially in good faith, or in error sharing stories that are actually provably wrong. You have people who are knowingly and intentionally sharing misinformation such as propaganda. And then you have another group of people that are monetizing this by intentionally producing stories that are completely made up but that they think will get clicks. There’s a lot of variation here but it’s all being called fake news, and all needs to be tackled in different ways.”
If it’s been around for so long why are we making a big deal about it now? “The democratization of the media has escalated the issue,” says Bartlett. “We used to have professional journalists who – while you might not have always thought they did the best job – had some professional qualifications, a code of conduct about how they worked, and editorial guidelines they had to follow, plus they were regulated. Today the media doesn’t really look like that in many places, so it’s definitely a more fertile environment for the production and spread of fake news. The other point is that it can look a lot more believable now, especially with the ability to fake video and images.”
Programmatic has also played its own part in the growth of some types of fake news by making it easier for rogue sites to make money. Advertisers often don’t know exactly what they’re buying or who is seeing their ads, and this has incentivized a lot of smart people to realize that all you need to do is run content that people click on – and as any publisher will tell you, the more “attention grabbing” the headline, the more clicks it will get. “The programmatic ads that no human moderator is checking and that are running purely on an impression or a click have certainly fed into the rise of fake news,” adds Bartlett.
What are the solutions out there?
James Harding, former Head of BBC News and former Editor of The Times, said in his recent London Press Club lecture: “If the tech companies put their minds – not to mention their engineering might – to it, they could detect and deter such behaviour online easily. Fake news should be like spam in the early days of the internet – a nuisance but fixable.” This is perhaps an ill-judged comment given that spam is still a huge problem.
So can technology really solve the problem?
Factmata recently closed a seed funding round of $1 million and now offers a community-driven AI platform that sets out to fix the problem of fake news across the whole of the media industry, from the spread of biased and incorrect clickbait on aggregating platforms, to the use of ad networks to help disseminate that content.
“If fake news can be reliably detected and the source of the advertising can be cut off, that will fix a large part of the problem. The issue is reliably detecting this content because the language used can be very subtle so looking for individual ‘problem’ keywords isn’t enough anymore,” says Factmata’s Joshi. “It is certainly possible to detect a lot of it – artificial intelligence (AI) can process millions of URLs per day which just isn’t possible to do manually. Our technology uses a form of AI known as natural language understanding, which was first developed in the 1950s and has been continually improved since then so it’s highly sophisticated. There are other companies offering brand safety via contextual keyword blacklisting, but these are increasingly unlikely to be able to detect ‘grey areas’ and content which does not contain any outright slurs but is still considered hate speech.”
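Joshi’s contrast between keyword blacklisting and more sophisticated language analysis can be illustrated with a toy sketch. This is not Factmata’s actual system – the blacklist, headline and function names below are all hypothetical – but it shows why matching individual ‘problem’ keywords misses grey-area content that contains no outright slur:

```python
# Illustrative only: a naive keyword blacklist of the kind Joshi says
# is no longer enough. All terms and examples here are hypothetical.
BLACKLIST = {"hoax", "sheeple", "plandemic"}

def keyword_flag(text: str) -> bool:
    """Flag content only if it contains an outright blacklisted term."""
    words = set(text.lower().split())
    return bool(words & BLACKLIST)

# Contains a banned term, so the blacklist catches it:
print(keyword_flag("Experts exposed in latest hoax"))   # True

# No banned term at all, yet the framing could still be hateful or
# misleading -- the keyword approach waves it straight through:
print(keyword_flag("Those people are flooding in and ruining everything"))  # False
```

A natural-language-understanding system of the sort Joshi describes would instead model the meaning of the whole sentence (for example with a trained text classifier), which is what lets it flag content like the second headline.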
Another platform tackling the issue is Truly Media, a web-based collaboration platform designed to support the verification of digital (user-generated) content residing in social networks and online. It has been developed in close collaboration with journalists and human rights investigators.
“Fake news is most often compelling, emotive storytelling backed by digital assets and tools to embellish and disseminate. While it’s almost impossible to identify by eye, the devil is in the detail,” says Manish Popat, head of business development at Ingenta, which distributes Truly Media in the UK. “With the integrated tools within Truly, you can tell if there is a discrepancy between where someone reports they are, and where they actually are (based on geo-location). You can analyse videos and images with image processing functionality to see exactly how that media might have been doctored. You can also compare primary sources (based on publication times) to highlight where accounts diverge.”
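The geo-location discrepancy check Popat describes can be sketched in a few lines: compare where a post claims to be from with the coordinates geotagged in its media, and flag the gap if it is implausibly large. This is only an assumed illustration of the idea, not Truly Media’s implementation; the function names, tolerance and coordinates are hypothetical:

```python
# Hypothetical sketch of a reported-location vs geotag check.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def location_discrepancy(reported, geotag, tolerance_km=50):
    """True if the geotag is implausibly far from the reported location."""
    return haversine_km(*reported, *geotag) > tolerance_km

# A post "reported" from central Kyiv but geotagged near Moscow
# (roughly 750 km away) gets flagged:
print(location_discrepancy((50.45, 30.52), (55.76, 37.62)))  # True
```

As Popat notes in the next quote, a flag like this is only an input: deciding how much weight it carries still falls to a human.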
Popat adds, “Whilst this is a great leap forward for verification, we need more than just technology. The issue changes to how we apply significance and weighting to the results of our analysis, which still requires the input of the computer in our heads. If you know an article has come from an untrustworthy source, and some of the images associated have marks of doctoring, how much risk for your brand is there in your ads showing next to that content? Only you can decide that.”
Elsewhere, organizations like Full Fact take a slightly different approach. Full Fact is a charity that provides free tools, information and advice so that anyone can check the claims we hear from politicians and the media. According to Bartlett, Full Fact are looking to build a database of the claims they’ve checked, so that they can automatically identify when one of those falsehoods resurfaces in the media or on social platforms.
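The core of the approach Bartlett attributes to Full Fact – a database of already-checked claims matched against new text – can be illustrated with a deliberately simple sketch. Real claim-matching systems are far more robust; this toy version (all names and the example claim are assumptions for illustration) just normalises text and looks for known falsehoods as substrings:

```python
# Toy claim-matching sketch: not Full Fact's actual system.
CHECKED_FALSE_CLAIMS = {  # hypothetical database entries
    "we send 350 million a week to the eu",
}

def normalise(text: str) -> str:
    """Lowercase and strip punctuation/currency symbols for matching."""
    return "".join(c for c in text.lower()
                   if c.isalnum() or c.isspace()).strip()

def matches_known_falsehood(sentence: str) -> bool:
    """True if the sentence repeats a claim already checked as false."""
    s = normalise(sentence)
    return any(claim in s for claim in CHECKED_FALSE_CLAIMS)

print(matches_known_falsehood(
    "They say we send £350 million a week to the EU!"))  # True
```

The attraction of this design is that each claim only needs fact-checking once; every later repetition can then be spotted automatically, at a scale no human team could match.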
What’s next?
There will always be bad actors online, and they will continually find new ways to outsmart the technology that threatens their revenue or their ability to get false messages widely disseminated. Both publishers and programmatic platforms need to keep pace with these changes.
For now, Ingenta’s Popat offers some immediate practical steps for the industry: “Programmatic platform providers need to be either incorporating verification technology into their products, or using verification tools to pre-analyse websites that ads may appear on, so that advertisers are in control of the risks they’re comfortable taking.”
He continues, “Although advertisers may not know exactly where their programmatic ads are showing, they can at least set the risk level. It’s a bit like holding an investment portfolio with variable risk options. You would never hand over your money to someone who couldn’t tell you the riskiness of how it will be invested – so why should anyone do that with their advertising spend?”
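Popat’s portfolio analogy – advertisers setting a risk level rather than picking individual sites – can be sketched as a simple filter over pre-analysed inventory. The site names, scores and threshold below are all hypothetical assumptions, not real vendor output:

```python
# Hypothetical pre-analysed brand-safety scores: 0 = safe, 1 = risky.
SITE_RISK = {
    "established-news.example": 0.05,
    "anon-aggregator.example":  0.55,
    "clickbait-farm.example":   0.92,
}

def eligible_inventory(risk_tolerance: float) -> list:
    """Sites an advertiser with this risk tolerance would allow bids on."""
    return sorted(site for site, risk in SITE_RISK.items()
                  if risk <= risk_tolerance)

# A moderately risk-tolerant advertiser still excludes the worst sites:
print(eligible_inventory(0.6))
# ['anon-aggregator.example', 'established-news.example']
```

The advertiser never needs to know exactly which pages their ads will appear on – only that everything above their chosen risk threshold is excluded before bidding, which is precisely the control Popat argues the industry should offer.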