Big data is huge. Not just in terms of quantity or volume. Not even because of its relative ubiquity. As something with the potential to drastically shift the journalistic landscape, it’s massive. But can it? Will it? And, perhaps most interestingly, should it?
As our lives and our livelihoods have been increasingly interlinked with digitized information, so too has the amount of data available to us grown. We each leave a long trail of data behind us; we can likewise access vast amounts of information on practically anything.
Big data: established process, new technology
While the idea of big data is hardly a new one, the technologies enabling it have altered our relationship with it irrevocably.
The journalist and Nieman Fellow Adam Tanner spent time writing travel guides in Berlin before the wall fell, and later requested the information the Stasi had compiled on him during that time. There wasn’t much there: “Even with 10 men on the job one day in Dresden,” he wrote, “they learned little about me.”
Fast forward thirty years and the kind of information the Stasi might once have prized is comparatively freely available. We’ve become willing to share ourselves freely and enthusiastically on social media, to track ourselves voluntarily using GPS-based location apps, and to log our political beliefs and affiliations on any number of publicly accessible channels.
In the world of journalism, the effect has been potent. Before, the process of procurement – of searching for and uncovering information – was vital to the role. The new world of big data inverts that process: we’re no longer formulating theories and then hunting for data to prove them; we start from the data and test theories against it. It’s a shift at once subtle and seismic.
This isn’t to say that data is always easily available – of course it isn’t. It isn’t necessarily an automated process, either. That said, big data technologies provide – at minimum – an additional pair of strong hands to help in the heavy lifting.
Now, we’re facing a different problem: how to process it.
The urgent need for analysis
“The lack of journalistic insight in the Enron, Worldcom, Madoff or Solyndra affairs is proof of many a journalist’s inability to clearly see through numbers. Figures are more likely to be taken at face value than other facts as they carry an aura of seriousness, even when they are entirely fabricated.
Fluency with data will help journalists sharpen their critical sense when faced with numbers and will hopefully help them gain back some terrain in their exchanges with PR departments.” – Nicolas Kayser-Bril, Journalism+
There’s more to journalism than publishing raw data. The skill and the value lies in the ability to process that data, to find meaning in it and to present it to an audience, who are – let’s face it – already facing a deluge of information, data and content from every angle and technological orifice.
It’s not enough – and it’s not acceptable – to present raw data as journalism. Data isn’t journalism, and it never will be. There’s a step in between that needs taking before any value can come from that raw information.
“Publishing data is news,” said The Texas Tribune’s Matt Stiles and Niran Babalola, and before you argue that we need to fire our editor for publishing contradictory opinions with only a paragraph break between them, consider their follow-on comment. “It aligns with our strategy of adding knowledge and context to traditional reporting.”
Adding knowledge. Adding context.
Journalism serves to inform, to challenge, and yes, to educate. Above all else though, it must be consumed. The best writing in the world is of no value if it goes unread. This being the case, the staggering volume of data available can pose a problem: how to deal with it without going data-blind.
This is where emerging technologies are becoming useful, if not invaluable. Far from posing a threat to journalism, big data technology should be seen as a tool, and – like a hammer – it only becomes useful when it’s wielded.
Bringing technology into the fold
Sometimes that data might be processed through alluring visualizations and graphics. Other times data might take the form of an interactive graph or chart. Elsewhere – such as at Sweden’s Mittmedia – articles are AI-authored, so you can read regularly published pieces about local real estate trends in your area. Syllabs, in France, employ a similar strategy to author hyper-local articles about regional football matches or municipal elections. The BBC invite you to input your postcode to see local school league tables or election results as part of their broader reporting.
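To make the automated-reporting idea more concrete, here is a minimal sketch of how structured local data might be turned into a templated news blurb. The data fields, the template and the sample figures are hypothetical illustrations, not a description of Mittmedia’s or Syllabs’ actual systems, which rely on far richer data and language generation.

```python
# Hypothetical sketch: turning a row of structured local data into a short
# automated news item. Field names, wording and figures are invented for
# illustration only.

from dataclasses import dataclass


@dataclass
class HousingSales:
    district: str
    month: str
    median_price: int   # in local currency
    change_pct: float   # change vs. the same month last year


def write_blurb(sales: HousingSales) -> str:
    direction = "rose" if sales.change_pct >= 0 else "fell"
    return (
        f"House prices in {sales.district} {direction} "
        f"{abs(sales.change_pct):.1f}% in {sales.month}, "
        f"with a median sale price of {sales.median_price:,}."
    )


if __name__ == "__main__":
    print(write_blurb(HousingSales("Sundsvall", "March", 2_450_000, 3.2)))
```

The appeal of this pattern for hyper-local coverage is that the same template can be run once per district or per match, producing many small, relevant pieces that no newsroom could staff by hand.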
Post-publication, these technologies (like the ones we’re working on here at Content Insights HQ) can now help publishers dissect articles and establish what ‘success’ means for their specific organization, whether the goal is to grow its user base or to bolster subscriptions.
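As a rough illustration of what establishing ‘success’ could look like in practice, the sketch below scores articles against a publisher-defined goal, such as growing the user base or bolstering subscriptions. The metrics, weights and goal names are invented for the example and do not describe Content Insights’ actual analytics model.

```python
# Hypothetical sketch: scoring published articles against a publisher-defined
# goal. Metrics, weights and sample data are invented for illustration.

ARTICLES = [
    {"title": "Local school league tables", "pageviews": 42_000,
     "read_depth": 0.61, "new_subscriptions": 35},
    {"title": "Election results explained", "pageviews": 18_500,
     "read_depth": 0.83, "new_subscriptions": 120},
]

GOAL_WEIGHTS = {
    # Grow the user base: favour raw reach.
    "grow_user_base": {"pageviews": 0.7, "read_depth": 0.2, "new_subscriptions": 0.1},
    # Bolster subscriptions: favour engagement and conversions.
    "bolster_subscriptions": {"pageviews": 0.1, "read_depth": 0.4, "new_subscriptions": 0.5},
}


def score(article: dict, goal: str) -> float:
    """Weighted sum of metrics, each normalised to 0-1 across the data set."""
    total = 0.0
    for metric, weight in GOAL_WEIGHTS[goal].items():
        max_value = max(a[metric] for a in ARTICLES) or 1
        total += weight * (article[metric] / max_value)
    return round(total, 3)


for goal in GOAL_WEIGHTS:
    ranked = sorted(ARTICLES, key=lambda a: score(a, goal), reverse=True)
    print(goal, "->", [a["title"] for a in ranked])
```

The same articles rank differently depending on the goal, which is the point: ‘success’ is something each organization has to define for itself before the numbers can mean anything.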
The point is not that technology is overtaking newsrooms to their detriment, but that it is helping savvy publishers turn massive swathes of data into articles that are consumable, enjoyable and – most importantly – relevant, improving their offering in the process.
Laura Amico, speaking to a panel at MIT in 2014, said the following:
“The best data projects start with really good questions. As a journalist, I know how to ask questions. Community members know how to ask questions of their communities. We know how we want politics, schools and communities to work. It’s about building an editorial framework to answer these questions.”
So let’s make a distinction: big data technology is changing journalism, but it’s not necessarily encroaching on it. As part of a sturdy framework, big data tech is a very welcome addition.
by Em Kuntze
Republished with kind permission of Content Insights, the next generation content analytics solution that translates complex editorial data into actionable insights.