OpenAI’s adversarial threat report should serve as a call for more extensive data sharing in the future. Independent researchers are already compiling databases of AI misuse, such as the AI Incident Database and the Political Deepfakes Incident Database, to enable comparisons across types of misuse and track changes over time. Detecting misuse from the outside, however, is difficult. As AI tools become more capable and widespread, it is crucial for policymakers to understand how they are used and exploited. While OpenAI’s initial report provided overviews and select examples, expanding data-sharing partnerships with researchers to offer greater visibility into adversarial content or behaviors is an important next step.
Combating influence operations and AI misuse also requires involvement from online users. Such content has impact only when people see it, believe it, and share it onward. In one of the cases OpenAI disclosed, online users called out fake accounts that posted AI-generated text.
In our own research, we have seen Facebook communities actively calling out AI-generated image content created by spammers and scammers, helping users who are less familiar with the technology avoid being deceived. A healthy dose of skepticism is increasingly useful: taking the time to verify whether content and people are authentic, and educating friends and family about the growing prevalence of generated content, can help social media users resist deception from propagandists and scammers alike.
OpenAI’s blog post highlighting the takedown report succinctly stated: “Threat actors operate across the internet.” Accordingly, we must also act across the internet. As we enter a new era of AI-driven influence operations, addressing common challenges through transparency, data sharing, and collaborative vigilance is essential to cultivating a more resilient digital ecosystem.
Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), focusing on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality.