Steven Brill, co-founder of American Lawyer magazine, and former Wall Street Journal publisher Gordon Crovitz recently launched NewsGuard, a browser-based tool for curbing fake news.
Their USP is that they rely on human, rather than artificial, intelligence to do so. The effort has been backed by $6 million in funding so far and is supported by the advertising and PR agency Publicis Groupe, as well as foundations such as the John S. and James L. Knight Foundation.
Our goal is to help solve this problem (fake news) now by using human beings—trained, experienced journalists—who will operate under a transparent, accountable process to apply basic common sense to a growing scourge that clearly cannot be solved by algorithms. They can do a good job identifying hate speech because you can program in a bunch of words to look for. But they’ve found that it’s impossible to deal with fake news using artificial intelligence. If they could do it, they would have.
Steven Brill, Co-CEO, NewsGuard
Artificial Intelligence: Terrible at spotting fake news
Brill may be on to something, as a recent study by MIT researchers has found that even the best AI for spotting fake news is still terrible. They tested over 900 possible variables for predicting a media outlet’s trustworthiness—probably the largest set ever proposed. The researchers then trained a machine-learning model on different combinations of the variables to see which would produce the most accurate results.
The best model accurately labeled news outlets with “low,” “medium,” or “high” factuality just 65% of the time. The study also mentions that “labeling media outlets with high or low factuality must be done by professional journalists who follow rigorous methodologies.”
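To give a rough sense of what that kind of experiment involves, the sketch below shows the general shape of such a setup in Python: train a classifier on outlet-level features and measure how often it predicts the "low", "medium", or "high" factuality label. The data, feature set, and model choice here are placeholders for illustration, not the MIT researchers' actual code or variables.

```python
# Illustrative sketch only: the MIT study's code, features, and data are not
# reproduced here. Random placeholder features stand in for the ~900 real
# signals used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 500 outlets, each described by 900 numeric features.
X = rng.normal(size=(500, 900))
y = rng.choice(["low", "medium", "high"], size=500)  # factuality labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

# On random features this hovers around chance (~0.33); the study's best
# real-feature model reached roughly 0.65.
print(f"Mean cross-validated accuracy: {accuracy:.2f}")
```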
NewsGuard has a staff of 40 reporters and dozens of freelancers who have already evaluated over 4,500 websites, from The New York Times and CNN to Infowars. According to Brill, the sites they have analyzed account for 98% of the news and information sites that Americans engage with most.
How the rating system works
NewsGuard’s journalists and editors review and rate news and information websites based on nine journalistic criteria. These include whether the site regularly publishes false content, reveals conflicts of interest, discloses financing, or publicly corrects reporting errors.
Whenever a website fails to meet any of the criteria, the reviewers call and email the site's operators to give them a chance to comment. The company also has a SWAT team of sorts that immediately investigates sites that suddenly start trending but have not yet been rated.
Sites can score up to 100 points and receive either a green or a red rating. Green signifies basic standards of accuracy and accountability, while red indicates the opposite; any site scoring fewer than 60 points is marked red. The rating icons appear next to links on search engines and in social media feeds, including Facebook, Twitter, Google, and Bing, for users who have installed NewsGuard's free browser plugin.
In addition to the red and green icons for news and information websites, NewsGuard assigns a blue “platform” icon to sites that primarily host user-generated content. Humor or satire sites that mimic real news are assigned an orange “satire” icon. A grey icon indicates that a website has not yet been rated.
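To make the mapping concrete, here is a minimal Python sketch of how a review outcome might translate into one of those icons. The function and its inputs are hypothetical; only the 100-point scale, the 60-point threshold, and the icon colors come from the description above.

```python
# Illustrative sketch only: NewsGuard's implementation is not public. The
# 100-point scale, the 60-point red/green threshold, and the blue, orange,
# and grey icons are described in the article; this function is hypothetical.

def newsguard_icon(site_type, score=None):
    """Map a site's type and review score to a NewsGuard-style icon color."""
    if site_type == "platform":   # primarily user-generated content
        return "blue"
    if site_type == "satire":     # humor or satire mimicking real news
        return "orange"
    if score is None:             # site not yet reviewed
        return "grey"
    # News and information sites are scored out of 100 points;
    # anything below 60 points is flagged red, the rest green.
    return "green" if score >= 60 else "red"


print(newsguard_icon("news", 82))   # green
print(newsguard_icon("news", 45))   # red
print(newsguard_icon("platform"))   # blue
print(newsguard_icon("news"))       # grey
```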
The system is transparent about how and why a news site received its rating: it provides a "Nutrition Label" for every site it rates. It's a one-pager that explains the history of the site, what it attempts to cover, who owns it, and who edits it. The Nutrition Label also reveals other relevant factors such as financing, notable awards or missteps, and whether the publisher participates in programs such as the Trust Project or has repeatedly been found at fault by one of the established programs that check individual articles.
Services such as Snopes and PolitiFact have also tried to apply a human element to online fact-checking, but NewsGuard's founders say their approach is different: a team of trained journalists researches news sites and examines each outlet on its overall merit, rather than taking an article-by-article approach.
Green and red: Ratings do influence users
In a recent study, researchers from Gallup and the Knight Foundation tested NewsGuard's rating system. They asked more than 2,000 US adults to rate the accuracy of 12 news articles on a five-point scale. Some of the participants saw the articles with NewsGuard's ratings attached, while others did not.
The researchers found that subjects perceived news sources to be more accurate when a green icon appeared beside them than when a red icon did. Participants were also more likely to trust articles carrying a green icon than those with no icon at all.
Advertisers are increasingly concerned about their brand safety and do not want to help finance and appear alongside fake news. For us, NewsGuard is highly appealing because it is a concrete, actionable answer to our clients’ concerns – another layer of protection for them.
Maurice Levy, Publicis Groupe Chairman
Advertisers can use NewsGuard's ratings to build a list of reliable news sites that are safe for advertising and to keep ads off inappropriate sites. The tool may also help publishers assert their credibility at a time when trust in mainstream media is declining.
Brill and Crovitz plan to license the system to social media and search platforms, and to other aggregators of news and information, so that they can offer the ratings and Nutrition Labels within their feeds. Microsoft has already agreed to make NewsGuard a built-in feature in future products, and Brill says he is in talks with other online titans, though he has not revealed details.