Many publishing companies are so keen to make the most of generative artificial intelligence, they’ve failed to draw up proper guidelines for their workforce. It’s a misstep that makes it harder to really cash in on the AI revolution, says digital transformation specialist Steffen Damborg
For many publishers, integrating generative artificial intelligence into their operations has felt like building a track while a train is steaming up behind you. The pressure to make the most of AI has led some media companies to experiment with tech like chatbot du jour ChatGPT without putting proper structures in place. A recent World Association of News Publishers (WAN-IFRA) study shows that, while 49% of managers say journalists are free to use new tools, only 20% have been given guidelines.
It’s an approach that needs to change (and fast) if media companies want to fully realise the enormous potential of AI and avoid pitfalls like a lack of accuracy and authenticity, according to Steffen Damborg, a digital transformation specialist and author of Mastering Digital Transformation, who works with WAN-IFRA.
“A lot of companies in the news industry are willing to test AI even though they don’t have the policies and guidelines in place – because they want to learn,” he says. “And they trust that their journalists can handle the dilemmas that occur when you start using generative AI.
“But you need top management to get an understanding themselves of what AI can do for their company and the industry and then set up guidelines and decide how we use it. It’s the management who sets the direction of the company and if the management is not coping with the new technologies that are relevant to your industry, then you won’t have direction within your company.”
A matter of trust
With concerns over inaccuracy and possible political bias swirling around ChatGPT, Damborg stresses the importance of putting human “control mechanisms” in place, especially if you’re a trusted news brand.
“You will see a lot of content produced by AI that is just published – but not by traditional publishers, because they know that trust is their currency,” he points out. “If you’re The New York Times, you cannot publish things that are not checked for biases and accuracy.
“I’m not that optimistic when it comes to algorithms being able to overtake journalism (which has) accuracy and trustworthiness. But, you know, it’s early days, and we tend to overestimate the effects of technology in the short run and underestimate the effects of technology in the long run.”
Finding the right balance
The key to an effective AI strategy, says Damborg, is to combine the managerial ability of a company’s top structure with the creativity of those on the shop floor who are trying to find the best way to make AI work for them.
“You want to set a clear direction within your company, but you also want innovation and clever use cases and that seldom comes from the top of the organisation,” he points out. “They come from the people using the technologies.
“So, on the one hand, you need to have guidelines and security measures to keep being a trustworthy news brand, but on the other hand you want experimentation in your organisation. You want journalists, software engineers and designers to investigate and, with their natural curiosity, examine these new possibilities.”
You can watch our full video conversation with Steffen below.
Five key takeaways:
- Too many media organisations have not put guidelines in place on how to integrate generative AI into their operations.
- Not having a clear direction makes it tougher to negotiate AI pitfalls like a lack of accuracy and authenticity.
- Trusted news organisations have to draw up AI “control mechanisms” to ensure their reputations stay intact.
- Concerns over AI’s accuracy and bias mean it is unlikely to replace quality journalism anytime soon.
- The top structure and journalists need to work together on AI guidelines, combining management skills and creativity.