Collectif Top Stories

“Most young journalists are already using AI”: New report sheds light on the latest crop of Gen Z journalist talent

A new study of 100 journalism students and young journalists outlines what the next generation of media professionals expects from AI, and the challenges and opportunities they see ahead. One of the report’s co-authors, Greentarget’s Lisa Seidenberg, provides a breakdown of the report’s key findings… TL;DR: they’re already using AI.

Generative artificial intelligence is rapidly transforming news organizations’ operations—and raising serious questions about how the technology will affect journalism. Concerns include whether the same AI tools that could help short-staffed newsrooms do more with less will lead to further job losses, worries about the technology’s potential to fabricate information and accelerate the spread of misinformation, and the potential impact on how young journalists learn their craft.

Future journalists, however, are split on their views about AI and how it will impact the profession, according to a new study, Next-Gen Journalists: Navigating Misinformation, AI & The Future of Journalism, by strategic communications firm Greentarget. While nearly three-quarters of the student journalists and new professionals we surveyed expect AI to impact the industry significantly, only half believe the technology poses a threat. What’s more, the majority already use a wide range of AI tools for research, writing and editing.

Our research found that next-generation journalists see AI as a tool that’s capable of both progress and harm. Respondents’ widespread use of technology tools likely has its roots in their education and training. While AI has not penetrated the journalism school curriculum to the same extent as social media, a majority (57%) of respondents say the use and impact of AI in journalism is or was part of their education.

Some respondents, however, ultimately view AI as a threat—“unnecessary in journalism,” as one said, adding, “I feel if anything it spreads even more misinformation.” Another said: “We don’t need AI to write for us, we can write for ourselves. A robot should not be portraying human events and the human experience.”

Others note that AI could potentially replace journalists. “Once AI has the ability to contextualize information, reason as to why it’s important, and organize it in such a way to make it engaging and understandable—all in a matter of seconds—human journalists could become obsolete,” said one respondent.

AI is making inroads into newsrooms

The above mix of practical AI use and anxiety about future impacts comes as ChatGPT-maker OpenAI is making inroads into newsrooms. In July, the Associated Press made a deal with OpenAI to license AP’s archive of news stories going back to 1985 in exchange for access to OpenAI’s tech and product expertise. Later that month OpenAI announced a $5 million grant and $5 million in software use “credits” to the American Journalism Project, which supports nonprofit newsrooms and whose member articles will be used to train OpenAI’s models.

Meanwhile, Google has reportedly been presenting major news organizations, including the New York Times, the Washington Post, and The Wall Street Journal, with a new software “personal assistant” for journalists, which promises to “take in information” — details of current events, for example — and generate news content.

As these tools become more sophisticated and their usage more widespread, one thing is certain: training in how to use them properly will be crucial. 

In October, U.K.-based The Guardian accused Microsoft of damaging its reputation by publishing an AI-generated poll next to a Guardian story about a woman’s death that asked readers to speculate on the cause of her demise. Such missteps—including the “journalistic disaster” at CNET earlier this year, when the publication was forced to issue substantial corrections for several AI-written articles—demonstrate how the technology can run amok without proper oversight. 

Education, pitfalls and risks

They also highlight how important it is for future journalists to understand the pitfalls and risks as well as the potential of these tools. Hazards range from misinformation to concerns that novice journalists who use AI to generate or edit content may not get the training benefits of doing the work themselves.

“You see a lot of advice that AI can provide a good first draft, for example,” Amy Merrick, a senior professional lecturer at DePaul University’s College of Communication and former reporter for The Wall Street Journal, told us. “But the act of writing refines your thoughts, so if you skip that work, it’s necessarily going to be more shallow and you’re going to have fewer original insights.”

This education piece is imperative, given the differences between how journalists typically work and how generative AI works.

For example, the foundational techniques of journalism—boots-on-the-ground reporting, talking to experts, and synthesizing new information—require human interactions, complex problem-solving, and sound judgment. Journalists look to their networks of sources, public relations contacts, and trusted experts to verify and source informative, helpful, and timely stories. 

On the other hand, generative AI platforms use machine learning algorithms and are trained on enormous datasets that allow them to create new content, modeled on what has come before, in response to prompts. While this makes them a robust summarization and explanatory tool—and the technology continues to evolve new applications—whether these platforms can surface new, accurate, attributable information for breaking news stories or investigative journalism has yet to be demonstrated.

It’s important to note that AI can add efficiencies to journalism—if used correctly. Our research found that many next-gen journalists are already harnessing the power of AI platforms to support their reporting. More than half of respondents (52%) use AI tools for translation, 43% use AI for writing and 39% for research, while 21% say they use AI copy-editing tools. Some say they use AI for production tasks such as data analysis, photo processing, and video editing, or to detect and combat misinformation.

Caution, training and guidance needed

While AI can be a helpful tool for journalists, it must be used with caution and proper training, so that the next generation of journalists has the tools and skills to work with new technology without fear of missteps along the way and the public backlash they can bring.

Nieman Lab recently analyzed the policies and plans that 21 newsrooms in the US and Europe have laid out to address the use of AI, reinforcing how “the emergence of generative AI has highlighted the need for newsroom guidelines for these technologies.”

Within these guidelines, training isn’t a huge component; when it is referenced, it is generally linked to mitigating the risks of generative AI and to being accountable and transparent with the audience. According to Nieman Lab’s piece, however, the German Journalists’ Association (DJV) did stress that the use of artificial intelligence must become an integral part of the initial and ongoing training of journalists, and it calls on media companies to create appropriate training that covers the misuse of AI. The Financial Times is one media company taking on this challenge: its editor recently stated that the paper will provide training for its journalists in the form of a masterclass on using generative AI for story discovery.

We applaud the newsrooms and media companies developing AI guidelines and training programs for their journalists. If used thoughtfully and appropriately, generative AI has the potential to support the future of journalism in incredible ways. As we note in our report, even “journalism schools appear to have accepted that the burgeoning use of AI-enabled tools is here to stay and that being proficient at using them will be an increasingly important aspect of the profession.”

Lisa Seidenberg
Director of Media Relations, Greentarget

Lisa Seidenberg is director of media relations at Greentarget, a Chicago-based PR firm that brings a unique mix of expertise in B2B public relations from its work with Fortune 500 companies, professional-services firms, law firms, technology companies, manufacturing businesses, commercial real estate companies, health care organizations and financial-services organizations.