CNN recently reported that Finland is winning the war on fake news by training children in the critical thinking skills needed to spot it. That matters for democracy, and for news publishers, because these children are the future voters and news consumers.
Some publishers are already working to build reading habits in students so that they become future subscribers. It may be worthwhile to consider a strategy that also equips current and future readers with the ability to spot fake news.
“First line of defense is the kindergarten teacher”
The Finnish school program is a part of an anti-fake news initiative launched by Finland’s government in 2014. It seeks to teach residents, students, journalists and politicians how to identify and counter false information.
It’s not just a government problem, the whole society has been targeted. We are doing our part, but it’s everyone’s task to protect the Finnish democracy. The first line of defense is the kindergarten teacher.
Jussi Toivanen, Chief Communications Specialist, Prime Minister’s Office, Finland
The French-Finnish School of Helsinki, a bilingual state-run K-12 institution, recently partnered with Finnish fact-checking agency Faktabaari (FactBar) to develop a digital literacy “toolkit” for elementary to high school students learning about the EU elections.
It includes exercises which call for examining claims found in YouTube videos and social media posts, comparing media bias in an array of different “clickbait” articles, probing how misinformation preys on readers’ emotions, and even getting students to try their hand at writing fake news stories themselves, according to CNN.
What we want our students to do is…before they like or share in the social media they think twice – who has written this? Where has it been published? Can I find the same information from another source?
Kari Kivinen, Director of Helsinki French-Finnish School
Dismaying inability to reason about online information
Although they are “digital natives,” studies in the US and the UK have found that many young people cannot identify the source of the information they see online, or even why they are reading it. A study by the Stanford History Education Group (SHEG) evaluated the online reasoning skills of 7,804 students across the US.
The researchers found a “dismaying inability by students to reason about information they see on the Internet.” Students had a hard time distinguishing advertisements from news articles and identifying where information came from.
Many people assume that because young people are fluent in social media they are equally perceptive about what they find there. Our work shows the opposite to be true.
Sam Wineburg, Lead Author of the report and founder of SHEG
The situation is no better in the UK. A report from the Commission on Fake News and the Teaching of Critical Literacy Skills in Schools found that only 2% of children and young people in the UK have the critical literacy skills they need to tell whether a news story is real or fake.
All of which makes the Finnish initiative a powerful strategy. “What we have been developing here—combining fact-checking with the critical thinking and voter literacy—is something we have seen that there is an interest in outside Finland,” says Kivinen.
Representatives from many EU states, as well as from Singapore, have come to learn from Finland’s approach to the problem. And that may be one of the biggest signs that Finland is winning the war on fake news.
Stanford also runs an online program for educators, offering free lessons and assessments that teach students how to evaluate online information.
“Better understand the tangled landscape of information online”
Further, the findings from two studies conducted across 3,446 participants suggest that “susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se.”
And that’s something the NYT’s News Provenance Project looks set to address. The publisher has been experimenting with blockchain technology over the past year to make its data (beginning with images) tamper-proof. At the same time, it will offer readers additional contextual information, making it easier for them to distinguish fake from genuine.
Sasha Koren, Project Lead, The News Provenance Project, points to how The Guardian changed the way the dates of its old articles are displayed. The publisher did so after observing spikes in traffic on stories about years-old events that had been shared on Facebook as new, with incorrect context. The News Provenance Project seeks to go further in making the origins of journalistic content clearer to audiences.
The publisher is using blockchain because the technology makes the record of each change traceable: any updates to what is published are recorded sequentially (“blocks” in a “chain”), and the string of those changes adds up to create a provenance.
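To make the mechanism concrete, here is a minimal Python sketch of a hash-chained provenance record. It illustrates the general technique, not the News Provenance Project’s actual implementation, and all names and fields in it are hypothetical. Each update to a photo’s metadata is stored in a block that embeds the hash of the previous block, so altering any earlier entry invalidates everything after it.

```python
# Hypothetical sketch of hash-chained provenance records; NOT the
# News Provenance Project's implementation. It only illustrates how
# chaining hashes makes an edit history tamper-evident.
import hashlib
import json
import time


def make_block(prev_hash: str, change: dict) -> dict:
    """Record one change (e.g. a new caption) linked to the previous block."""
    block = {
        "timestamp": time.time(),
        "change": change,        # e.g. {"field": "caption", "value": "..."}
        "prev_hash": prev_hash,  # links this block to the one before it
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; tampering with any earlier block fails here."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


# A photo's provenance: original publication, then a caption correction.
chain = [make_block("GENESIS", {"field": "caption", "value": "Original caption"})]
chain.append(make_block(chain[-1]["hash"], {"field": "caption", "value": "Corrected caption"}))

print(verify_chain(chain))  # True: the history is intact
chain[0]["change"]["value"] = "Forged caption"
print(verify_chain(chain))  # False: tampering breaks the chain
```

In a real deployment the chain would also be distributed across multiple parties so no single actor could quietly rewrite it, which is the property blockchain adds over a plain audit log.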
It has begun with NYT’s photojournalism, because photos can be easily manipulated and circulated widely online via social platforms, messaging apps or search engines.
In altering how we produce and present what we publish, news outlets may be able to help readers better understand the tangled landscape of information online, especially on social platforms and messaging apps. What if we could provide them with a meaningful way to differentiate between misleading content and credible news?
Sasha Koren, Project Lead, The News Provenance Project
The changes include “drawing more attention to details that could inform a person’s gut reaction, like age and caption of a photo,” writes Emily Saltz, UX Research, Design, and Strategy at the NYT, on Medium. “We also incorporated prompts and resources to support more critical thinking, and to help people make sense of potential dissonance between a mis-captioned photo and its original context. Finally, we provided more photos and article links related to the event depicted in a photo to help people explore a story more on their own.”
Misinformation is an everyone problem
“The idea seems relatively straightforward. In an age when images are manipulated and deepfakes get more sophisticated each day, using blockchain technology to show readers and viewers where and how an image, static or moving, has been changed can be an important way for consumers to understand where the image actually came from—trusted source or not,” comments Josh Sternberg, Tech Editor at Adweek.
Marc Lavallee, Executive Director, R&D at The New York Times, told Adweek that the company is looking at how it can help build an ecosystem of solutions, not just conduct fact checks or have a reporter on the misinformation beat.
“It’s about finding multiple seeds and starting points of collaboration,” he said. “We’re trying to do two things: figure out from different angles what different parts of the solution look like, and two, the opportunity to use the name recognition of New York Times to get everyone to work together. It’s not just tech companies but other news organizations. Misinformation is an everyone problem.”
The uncommon collaboration needed to change the game
And that includes some of the world’s biggest brands like Unilever, P&G, Mars, Lego and Adidas. These companies have outlined a plan to curb harmful content online by ensuring that those spreading it don’t have access to advertising dollars.
“Along with Google, Facebook, several ad agency networks and trade bodies, around 40 household names have been involved in designing the blueprint,” reports Rebecca Stewart, Senior Reporter at The Drum.
It’s a three-pronged strategy that aims to “prevent advertisers’ media investments from fuelling the spread of content that promotes terrorism, violence, or other behaviours that inflict damage on society.”
The key tenets of the plan include:
- Developing and adopting common definitions about harmful content.
- Creating tools that let brands and media agencies take better control of where their media spend is going.
- Establishing shared measurement standards so that marketers can assess their ability to block, demonetise, and remove harmful content.
This is the first big initiative from the Global Alliance for Responsible Media (GARM), a cross-industry working group founded by the World Federation of Advertisers (WFA) in 2019. According to the association, a collaborative approach is needed to fight fake news.
Marc Pritchard, Chief Brand Officer at P&G, said, “It’s time to create a responsible media supply chain that is built for the year 2030—one that operates in a way that is safe, efficient, transparent, accountable, and properly moderated for everyone involved, especially for the consumers we serve.”
Rob Rakowitz, Initiative Lead for GARM, told The Drum, “Previous approaches to harmful content have been in part a reactive game of whack-a-mole. We are convinced this uncommon collaboration is what is needed to change the game.”