AI is not only a tool for media outlets; it is also a crucial reporting field
Since the spring of 2022, images created with tools such as Midjourney and DALL-E have been invading social network timelines and newspaper articles. Since the fall of 2022, the same has happened with screenshots of conversations with ChatGPT.
AI has been around for a long time, but these services have now gone viral and become far more popular, where previously they appealed mainly to niche audiences.
These technologies traditionally made headlines when something dramatic happened: for example, when, in 1997, IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov, for the first time in history. But beyond the single exceptional event, those who follow the world of chess know that chess engines have progressively changed the approach to the discipline. Today, engines are stronger than any human being and have become an integral part of preparation for competitions at every level (as well as being used for cheating. But that's another story).
We have to choose: is a single striking case more important than the impact that, over time, the underlying technology will have on all of humanity? As a slow journalist, I have no doubts: I understand the fascination with the single piece of news everyone talks about, but I am convinced that we must deal with the foundational issues.
It’s the usual dilemma of traditional journalism: if we focus on the exceptional and the new, we lose sight of the noteworthy.
The surge in interest presents an opportunity for journalists to dive deeper into AI, exploring its implications and effects on various aspects of society. However, reporting on these rapidly evolving technologies also brings with it a unique set of challenges and responsibilities.
As Mattia Peretti, former JournalismAI Manager at Polis, the journalism think-tank at the LSE's Department of Media and Communications, said in our conversation about policies for newsrooms using AI, not every journalist needs advanced technical skills.
But solid knowledge is essential for those who want to report on AI.
Without this knowledge, journalists may end up regurgitating press releases that glorify AI tools or, conversely, reporting fear-mongering stories akin to the Terminator. They may also fall into the trap of uncritically adopting fan-like stances or selecting experts who confirm their biases. They can corroborate unhelpful narratives like "I Interviewed ChatGPT: What It Said To Me Was Scary" and similar clickbait headlines. They can become mouthpieces for CEOs: Elon Musk has stated every year since 2015 that autonomous vehicles are coming for real "this year" (or something similar). And every time, he has made the headlines.
Andrea Daniele Signorelli, an Italian journalist covering several topics related to technology innovation for Domani, Wired, Italian Tech, and other Italian media outlets, agrees: "At a minimum, we should know how these tools work and familiarise ourselves with them; we should know the difference between symbolic artificial intelligence and deep learning, for example. This type of knowledge protects us from falling into the traps of certain messages from supposedly authoritative people. When Elon Musk warns of the dangers of AI, he does so starting from ideological, not technical, elements".
“AI,” continues Signorelli, “poses severe and concrete risks, such as discrimination, impact on the world of work, and surveillance, but it has nothing to do with the trendy idea of existential risk”.
Let’s discuss the essential knowledge journalists must possess, how they can help democratise AI tools, and strategies for avoiding sci-fi biases and marketing hype in their coverage of this emerging field.
1) Getting familiar with AI history and machine learning techniques
Journalists should learn about the different methods employed in training algorithms, including supervised, unsupervised, semi-supervised, reinforcement, transfer, and active learning. It also helps to understand popular ML architectures: neural networks, deep learning, generative adversarial networks, convolutional nets, recurrent nets, and transformers.
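To make the first of those terms concrete, here is a minimal, self-contained sketch of supervised learning, the paradigm behind most of the models in the news: a perceptron that learns the logical AND function from labelled examples. The snippet is purely illustrative and is not drawn from any particular library.

```python
# Supervised learning in miniature: the model is shown inputs together with
# the "right answers" (labels) and adjusts its weights to reduce its errors.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and bias from labelled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # the error signal drives every update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled training data: each input comes with its correct output.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND

w, b = train_perceptron(data, labels)
```

Unsupervised learning, by contrast, would receive the same inputs without any labels and have to find structure in them on its own.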
If you want to start from Large Language Models and ChatGPT – which is perfectly understandable – one of the best things you can read is "What Is ChatGPT Doing… and Why Does It Work?", a fantastic popular-science article by Stephen Wolfram, mathematician and founder of Wolfram Research. Another foundational work is the book The Shortcut – Why Intelligent Machines Do Not Think Like Us by Nello Cristianini, professor of AI at the University of Bath.
These are not mainstream readings, but, as Signorelli argues, “It is important to be able to find and follow competent and critical voices, even if they are not necessarily the mainstream ones.”
2) Improving knowledge of data and datasets
Journalists should understand the importance of data quality and quantity in language model development, study commonly available public datasets, and know how to access them. They should also get acquainted with the typical metrics and benchmarks used to evaluate the performance of trained models on particular tasks.
How to do that? One of the best ways is to use open tools like Open Assistant. Open Assistant is an open-source AI assistant that uses and trains advanced language models to understand and respond to humans. But because it is open, and thanks to the way the developers structured it, you can work with it (and help improve it) by choosing tasks, labelling answers, and understanding mechanisms that remain opaque in other tools like ChatGPT.
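As a concrete illustration of what those evaluation metrics mean, here is a hand-rolled sketch of two of the most common ones, accuracy and precision, computed on invented toy data (the labels below are for illustration only, not from any real benchmark):

```python
# Two common evaluation metrics, computed by hand on a toy
# binary-classification task (1 = positive class, 0 = negative class).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of everything the model flagged as positive, how much really was."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

gold = [1, 0, 1, 1, 0, 0, 1, 0]   # hand-labelled "ground truth"
model = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model outputs

acc = accuracy(gold, model)    # 6 of 8 predictions match: 0.75
prec = precision(gold, model)  # 3 of 4 positive flags were correct: 0.75
```

Knowing how such numbers are produced helps a reporter see what a headline figure like "95% accuracy" does and does not claim.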
3) Understanding business models and motivations behind AI companies
Since AI is fast becoming a significant driver of economic growth and competition between nations, comprehending the financial mechanisms underlying its development and distribution becomes crucial.
This includes not only knowing different monetisation schemes – such as subscriptions, freemium, licensing, enterprise solutions, API integrations, consulting services, ad revenues, affiliate partnerships, hardware sales, etc. – but also understanding who has the power and the money to invest in these technologies, absorbing enormous cash losses at the start, often for years.
Additionally, journalists should recognise possible conflicts of interest arising from investments, shareholder influence, government contracts, political lobbying etc.
4) Understanding the true potential consequences of AI systems vs. science fiction
With the rapid advance of AI technology comes increased concern about its impact on employment, privacy, security, bias, culture, accountability, explainability, fairness, inclusiveness, environmental sustainability, geopolitics, and governance.
We don’t need to imagine Terminator and Skynet to find the challenges, the potential problems, and the possible solutions.
5) Being sceptical and continuing to learn
As AI is an ever-evolving field, journalists should stay up to date with the latest advancements and research. This helps maintain a well-informed perspective on AI-related topics and provide accurate and insightful coverage.
The deeper you go into this subject, the more you build your own bibliography and network of sources. "The important thing," concludes Signorelli, "is to remain sceptical, completely sceptical, of any material you receive, even if it concerns studies, research, or papers: the first step is always to understand who financed them and for what purposes, so as to avoid, as journalists, falling into the hype of marketing, exaltation, or apocalyptic visions".
AI is a complex field to report. Collaboration between journalists and AI experts, researchers, and ethicists is vital to ensuring balanced reporting.
This piece was originally published in The Fix and is re-published with permission.