Amid widespread concerns that generative AI could play a significant role in influencing major elections across the globe this year, Meta Platforms has reported that the technology has had limited impact on its platforms, Facebook and Instagram. According to Nick Clegg, Meta’s president of global affairs, the attempts to spread propaganda and false content using AI tools largely failed to gain a substantial audience.
During a press briefing on Tuesday, Clegg said that coordinated networks of accounts attempting to spread disinformation had failed to use AI effectively. “The volume of AI-generated misinformation was low, and we were able to quickly label or remove the content,” Clegg said. Despite concerns over AI-generated fake content, including deepfake video and audio, efforts to mislead the public had little effect. Notably, deepfakes such as a manipulated clip of President Joe Biden’s voice were swiftly debunked by misinformation experts.
Meta’s ability to quickly combat AI-driven misinformation highlights the company’s ongoing efforts to maintain the integrity of its platforms. Clegg also pointed out that disinformation networks have increasingly shifted their focus to other social media platforms with fewer safeguards, or are operating their own independent websites to avoid detection.
This update from Meta comes amid growing attention to AI’s potential to influence public opinion, especially as misinformation and disinformation continue to spread across digital platforms. However, experts note that AI-generated content has yet to significantly alter public sentiment or sway elections.
Although Meta’s moderation systems were effective at removing AI-generated false content, the company has faced criticism for its content moderation practices. Meta has slightly relaxed the stricter content moderation policies it used during the 2020 U.S. presidential election. Clegg acknowledged that the company had received feedback from users who felt their content was unfairly removed, and he stressed that Meta aims to strike a balance between safeguarding free expression and enforcing its rules with greater precision.
“We feel we probably overdid it a bit,” Clegg said, referring to the company’s previous approach. “While we’ve been really focusing on reducing the prevalence of bad content, I think we also want to redouble our efforts to improve the precision and accuracy with which we act on our rules.”
This shift in policy is also seen as a response to pressure from some Republican lawmakers, who have raised concerns over what they view as censorship of certain viewpoints on social media. In August, Meta CEO Mark Zuckerberg addressed these concerns in a letter to the U.S. House of Representatives Judiciary Committee, in which he expressed regret over some content takedowns made under pressure from the Biden administration.
Meta’s report and the changes in its content moderation practices reflect the ongoing tension between managing the spread of harmful content and ensuring free speech on its platforms, particularly as the role of AI in shaping public discourse continues to evolve.