While AI is sometimes employed for deception and widespread misinformation, it offers the potential to enhance storytelling for journalists and transform how readers consume news.
On March 25, an AI-generated image of Pope Francis wearing an oversized white puffer jacket made the internet’s collective head spin. It was shared millions of times on social media as some online communities debated the picture’s authenticity, while others questioned the sanctity of the Vatican. A pope in a puffer jacket, it seemed, was a bridge too far for AI.
The picture, captioned “Pope Drip”, raised serious concerns about the pace of AI advancement and its very real potential for deception, widespread misinformation and, worse, the replacement of working professionals. “Unfortunately, we are witnessing a rise in manipulated content, including deepfakes, audio fakes, and fake images, which can be used for various purposes. While automated AI tools like ChatGPT assist many journalists, there is still widespread abuse by content creators,” said Silas Jonathan, a lead researcher at Dubawa, a fact-checking organisation. Jonathan has warned that AI-generated misinformation could spread uncontrollably, costing time and money in the fight against it.
The AI-generated image of Pope Francis caused much buzz on the internet
With the rise of any technology, challenges follow, and AI is no different. Despite the rapid spread of AI-generated misinformation, Adrian Ephraim, Editor-in-Chief of TechCabal, believes that AI is not the problem. It’s just a tool. “People’s intentions are the problem, and if there are enough people who want to create untruths and deceive people, it’s easier for them now,” he said. “We as media and news audiences need to value truth and place a premium on it, by supporting media outlets that have proved their credibility through their work and the presentation of indisputable facts.”
Jonathan shares the same view. He believes that despite the advancements in AI technology, people tend to gravitate towards sources that align with their existing beliefs and perspectives. “When it comes to trust in news, audience choices regarding what and where to read are often influenced by their confirmation biases, regardless of the role of AI,” he said. “This human tendency to seek out information that confirms preconceived notions can persist even in the presence of AI-driven recommendations or fact-checking tools.”
Readers have a role to play in combating misinformation
To curb misinformation, Jonathan believes it is up to news readers to select dependable news sources. “While there is no doubt that AI can limit the quality and accuracy of news, it may not entirely supersede the impact of confirmation biases on audience reading preferences,” he said.
Ephraim believes it is the responsibility of the media and the audience to protect the integrity of the news. “The media has to be on guard all the time to assess, evaluate and question everything. Our senses as journalists need to be heightened and aware of all the possibilities. Seeking truth has never been more important than it is now,” he said. “But the good news is that we have better tools to do better journalism, find deeper truths and search for more answers.”
For Tshepo Tshabalala, manager and team lead of JournalismAI at the London School of Economics and Political Science (LSE), there is no quick fix for misinformation. “Technology is changing fast, and newsrooms will need to adapt and keep up with this changing technology and find ways of combating fake news and misinformation. The solutions will not be found overnight,” he told TechCabal.
AI offers perks to readers and journalists alike
While news readers might be wary of AI contributing to misinformation, fake news and the misuse of data, the technology also offers readers and journalists new advantages. According to experts, AI will enhance the way journalists do their jobs, improve the way audiences consume news, and help protect them from misinformation.
“For busy individuals who do not have the time to read full reports, AI can assist in summarising and extracting key information from lengthy articles, saving readers time and allowing them to consume more information in a shorter period,” said Jonathan. He also believes AI-powered recommendation systems can personalise content based on individual preferences, delivering more relevant and engaging articles to readers. This helps them discover new topics of interest and encourages them to explore a wider range of content.
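What that summarisation step can look like in practice is sketched below, using the open-source Hugging Face transformers library. The model name, length settings and sample text are illustrative assumptions for this piece, not tools that any of the newsrooms or experts quoted here are confirmed to use.

```python
# Illustrative sketch only: summarising a long article with an open-source model.
# The model choice and length limits are assumptions, not any newsroom's actual setup.
from transformers import pipeline

# Load a general-purpose summarisation model from the Hugging Face hub.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# In practice this would be the full text of a lengthy report or article.
article = (
    "Newsrooms across the continent are experimenting with automation to speed up "
    "routine reporting tasks. Editors say the tools free up reporters to focus on "
    "original investigations, while critics warn that errors can slip through "
    "without careful human review. Several outlets now pair automated drafts with "
    "a mandatory editorial check before publication."
)

# Condense the piece into a few sentences a busy reader can scan quickly.
summary = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```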
At the peak of the Ukraine-Russia war in 2022, Finnish public broadcaster Yle, which previously published news in Finnish, Swedish, English, and Russian, used AI to translate its articles to serve Ukrainian audiences. Similarly, the Guardian and Agence France-Presse (AFP) collaborated on an AI tool that could accurately extract quotes from news articles and match them with the right source.
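Neither organisation has published its exact setup, but a minimal sketch of the translation idea, assuming a publicly available open-source model (Finnish to English is used below purely as a stand-in for Yle’s language pairs), could look like this:

```python
# Illustrative sketch only: machine-translating a headline with a publicly
# available OPUS-MT model. This is an assumed stand-in, not Yle's actual pipeline.
from transformers import pipeline

# Finnish-to-English model from the University of Helsinki's OPUS-MT project.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-en")

finnish_headline = "Tekoäly muuttaa tapaa, jolla uutisia tuotetaan ja luetaan."

result = translator(finnish_headline)
print(result[0]["translation_text"])
# Expected output is roughly: "Artificial intelligence is changing the way
# news is produced and read."
```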
The future of AI in African newsrooms
Google recently unveiled a new AI tool for writing news articles. The product, pitched as a helpmate for journalists, represents a way forward for incorporating AI into newsrooms. But Tshabalala believes it’s still early days for the adoption of AI in newsrooms. “We [JournalismAI] are currently working on a global survey on how newsrooms across the world are leveraging the power of AI, and there’s a huge focus on newsrooms across Africa. We hope the insights in this report will hint at what the future looks like for AI in African newsrooms,” he added.
Ephraim believes AI is going to play a collaborative role with African newsrooms. “Many newsrooms in Africa are under-resourced and incapable of telling all the stories they want to. I see AI playing a collaborative part with African newsrooms to fill the gaps and support newsrooms. AI tools may help African newsrooms confront some of their challenges, like technical skills shortages, access to data, and training,” he concluded.