Should News Media Embrace Artificial Intelligence?
First International Charter Says Yes, with 10 Key Conditions
Newsreel Asia Insight #65
Dec. 6, 2023
Concerns that artificial intelligence (AI) in journalism could further erode already fragile public trust in the media have prompted the release of an international charter. The document sets ethical guidelines for the use of AI in news media, emphasising the need for news that is accurate, diverse and ethically produced.
The Paris Charter on AI and Journalism, initiated by Reporters Without Borders (RSF) and chaired by Nobel Peace Prize laureate Maria Ressa, was recently released at the Paris Peace Forum.
Here, briefly explained, are the charter's 10 principles for the use of AI in journalism, intended to safeguard information integrity and preserve journalism's social role:
(1) Journalism Ethics as AI’s Compass: Imagine AI as a powerful car. The first principle says that journalists should be in the driver’s seat, steering this car with the map of journalistic ethics. This means using AI in ways that respect truth, fairness and independence, ensuring that the journey always leads to trustworthy information.
(2) Human Decision-Makers at the Helm: This principle is like saying, “Robots can assist, but humans are in charge.” It emphasises that humans should make the big decisions in journalism, not algorithms. This is like having a human pilot on a plane, even if it can fly itself.
(3) AI Under the Microscope: Before any AI tool is used in journalism, it should pass a thorough check-up, like a car going through a rigorous inspection. This ensures that the AI aligns with journalistic values and doesn’t violate privacy or other laws.
(4) Media’s Responsibility for AI Content: If a news outlet uses AI to publish a story, it’s like a chef serving a dish. The chef (media outlet) must ensure the dish (content) is good to eat (ethically sound). They can’t blame the kitchen tools (AI) if something goes wrong.
(5) Transparency in AI Usage: News outlets should be open about using AI, like a magician revealing some secrets behind a trick. This principle asks media to inform audiences when AI is used in news production, maintaining trust and clarity.
(6) Tracing Content Back to Its Roots: This is about ensuring that every piece of news has a clear lineage, like a family tree. It's important to know where the information comes from and if it has been altered, ensuring authenticity and reliability.
(7) Distinguishing Real from AI-Made: Journalists should draw a clear line between content captured from the real world and content generated by AI. It’s like labelling photoshopped images, so viewers know what is real and what has been manipulated in any manner, even for artistic purposes.
(8) Ethical AI in Personalisation: When AI is used to suggest news to readers, it should be like a responsible librarian, not a salesperson. It should offer diverse, balanced views, not just what you like or agree with, promoting a well-rounded understanding.
(9) Journalists Shaping AI Governance: Journalists should have a say in how AI is governed, like citizens voting on important community issues. They should help shape the rules and standards for AI, ensuring it serves the public interest.
(10) Protecting Journalism’s Integrity with AI: Finally, when AI uses journalistic content, it should respect the work’s integrity and compensate creators fairly. It’s like ensuring artists get royalties when their music is played, maintaining journalism’s economic and ethical health.
The charter is the product of a collaboration between RSF and 16 partner organisations, including journalism defenders such as the Committee to Protect Journalists, the Canadian Journalism Foundation and the European Journalism Centre. It was developed with input from 32 experts from 20 countries, specialising in journalism or AI, who aimed to establish ethical guidelines for AI in news.
The charter’s preamble acknowledges AI’s transformative implications for humanity and journalism. It calls for global cooperation to ensure AI upholds human rights, peace and democracy. The document suggests that AI can help media outlets fulfil their social role if used transparently, fairly and responsibly.