According to a professor at Lancaster University, Artificial Intelligence (AI) and algorithms can be used to radicalize, polarize, and spread racism and political instability.
Professor Joe Burton, an expert in International Security, argues that AI and algorithms are not merely tools deployed by national security agencies to prevent malicious online activity, but can also fuel polarization, radicalism, and political violence, themselves becoming a threat to national security.
He also suggests that securitization processes, which frame the technology as an existential threat, have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.
Professor Burton’s article, titled “Algorithmic extremism? The securitization of Artificial Intelligence (AI) and its impact on radicalism, polarization and political violence,” has been published in Elsevier’s journal Technology in Society.
“While AI is often portrayed as a tool to counter violent extremism, it is important to consider the other side of the debate,” says Professor Burton.
The article examines the securitization of AI throughout its history, as well as its portrayal in media and popular culture. It also explores contemporary examples of AI causing polarization, radicalization, and political violence.
One example cited in the article is the film series The Terminator, which depicted a holocaust caused by a sophisticated and malignant AI, contributing to the fear that machine consciousness could have devastating consequences for humanity.
Professor Burton writes, “This lack of trust in machines, the associated fears, and their connection to threats against humankind have led governments and national security agencies to influence the development of AI in order to mitigate risks and harness its positive potentiality.”
The article also discusses the role of autonomous drones in warfare and the use of AI in cyber security, particularly in (dis)information campaigns and online psychological warfare.
Examples like Russia’s interference in US elections and the Cambridge Analytica scandal demonstrate the potential of AI, combined with big data, to create political effects centered on polarization and the manipulation of identity groups.
Additionally, the article highlights the privacy and human rights concerns raised by the use of AI to track and trace the virus during the COVID-19 pandemic.
Professor Burton argues that problems exist in the design of AI, the data it relies on, how it is used, and its outcomes and impacts.
The paper concludes with a message to researchers working in cyber security and International Relations, emphasizing the need to understand and manage the risks associated with AI, and to consider its social effects rather than treating it as a politically neutral technology.
Lancaster University, recognized by the UK’s National Cyber Security Centre, is investing in cyber security education and research to develop the next generation of leaders in the field.
In addition to its certified Master’s degree in cyber security, the university has launched an undergraduate degree in cyber security and a Cyber Executive Master’s in Business Education.