AI is both a risk and an opportunity for journalism, with more than half of those surveyed for a new report saying they had concerns about its ethical implications for their work.
While 85 percent of respondents had experimented with generative AI such as ChatGPT or Google Bard for tasks including writing summaries and generating headlines, 60 percent said they also had reservations.
The study, carried out by the London School of Economics’ JournalismAI initiative, surveyed more than 100 news organizations from 46 countries about their use of AI and associated technologies between April and July.
More than 60 percent of respondents expressed concerns about the ethical implications of AI for journalistic values such as accuracy, fairness, and transparency, the researchers said in a statement.
“Journalism around the world is going through another period of exciting and scary technological change,” said report co-author and project director Charlie Beckett.
The new generative AI tools posed a “potential threat to the integrity of information and the news media,” the study found, but also an “extraordinary opportunity to make journalism more efficient, effective and trustworthy.”
Journalists acknowledged the time-saving benefits of AI for tasks such as interview transcription.
But they also reported the need for AI-generated content to be reviewed by a human “to mitigate possible harms like bias and inaccuracy,” the authors said.