Experts are sounding the alarm over AI-generated political disinformation as voters prepare to head to the polls in more than 50 countries this year, warning that malicious actors could use generative AI and social media to interfere with elections. Meta has faced criticism for its content moderation in the past, notably around the January 6 riot, when it failed to prevent organizers from using its platforms to coordinate.
Meta’s president of global affairs, Nick Clegg, defended the company’s efforts to prevent violent groups from organizing on its platforms but acknowledged the challenge of staying ahead in an adversarial space. He said Meta has significantly changed how it moderates election content since 2016 and has taken down more than 200 networks of coordinated inauthentic behavior. The company now relies on a combination of fact-checkers and AI systems to identify and remove such groups.
Earlier this year, Meta announced measures to label AI-generated images on its platforms, including visible markers, invisible watermarks, and metadata in the image file. While these steps align with industry best practices, Clegg admitted that detecting AI-generated content is still a challenge, especially in text, audio, and video formats.
Clegg emphasized that, despite these challenges, Meta’s systems should be able to detect and combat misinformation, with AI serving as both part of the problem and part of the defense. He also defended the company’s decision to allow ads claiming the 2020 US election was stolen, arguing that such claims are common around the world and that it would be impractical to relitigate every past election.
Despite concerns raised by secretaries of state about the potential dangers of such ads, Clegg maintained that Meta’s systems are equipped to handle misinformation from any source. The full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum can be watched below.