Abu Dhabi: On the second day of the Global Media Congress (GMC 2024), industry experts gathered for a compelling panel discussion titled ‘AI-Proofing Media’. Moderated by Dana Alomar, Technology Editor at The National, the session investigated the pressing challenge of combating AI-driven misinformation. From innovative tools like blockchain authentication and AI-powered fact-checking to the development of ethical guidelines for AI in media, the panel offered insights for protecting media integrity in the digital age.
According to Emirates News Agency, respected speakers included Dr. Mohamed Abdulzaher, CEO and Editor-in-Chief of the Artificial Intelligence Journal; Isabella Williamson, Founder of Tyde AI; and Ludovic Blecher, CEO of IDnation. Dr. Abdulzaher opened with a statistic from the Global Artificial Intelligence Journalism Index (GAIJI): by the end of 2024, more than 600 AI-powered applications capable of altering images, videos, and other media had been developed worldwide. Alarmingly, only 10 percent of these tools are designed to address fake content, underscoring an urgent need for effective strategies to safeguard content authenticity.
The panel then turned to the changes AI technology has brought about. Although AI simplifies content production considerably, the panellists noted, it also raises concerns about a potential decline in journalistic integrity among coming generations. Discussing solutions, the panel emphasised the role of education and clear guidelines in using AI responsibly. Ludovic Blecher stressed that AI should be viewed as an assistant rather than a replacement for the human element, and that its use requires guidelines. The real challenge, the panel observed, is overcoming human laziness.
Blecher added: ‘Guidelines include taking time to validate content. Use the AI tools to check content! Then learn, take a class to learn how the AI models are working.’ Isabella Williamson drew attention to the use of AI in quality assurance. She emphasised the importance of establishing an AI policy across all organisations to prevent mishaps and ensure consistency in quality assurance practices. ‘Having an AI policy in place in every organisation should be mandatory,’ she said.
She further explained that these policies should focus on aspects such as handling religious content with care and addressing challenging situations in high-risk or critical environments, in order to maintain ethical standards and accountability. She also noted biases in AI models, highlighting how platforms such as ChatGPT often exhibit Western-centric influences in their design and data collection, which can result in substantial contextual errors.
The panel also recognised that synthetic media technology cuts both ways: it can be used for good or for harm depending on the application in which AI is employed, a reminder of the need for oversight and rules in the changing world of AI and media.
The transformative impact of AI on reputation management was another key focus. In October 2024, ChatGPT received 3.7 billion visits, making it the eighth most visited website in the world, a staggering statistic that underscored the scale of its influence. This reach highlights the central role AI now plays in shaping public discourse and managing reputations in an increasingly digital world.
The discussion reiterated the need for proactive measures to maintain the integrity of media in the age of AI.