The report also warns that "customised AI malware" could potentially manipulate voting systems or misreport results. However, because the UK relies on paper ballots counted by hand, it is better insulated from this particular threat than countries such as the US.
The study calls for the Electoral Commission, Ofcom and the Independent Press Standards Organisation to issue new guidelines for media reporting on content that is either alleged or confirmed to be AI-generated. It also recommends that political parties be urged to voluntarily agree on rules for using AI in campaigning, and that any AI-generated election material be labelled as such.
According to the report's lead author, Sam Stockwell, UK political parties are already in the midst of a busy campaigning period ahead of an upcoming general election, yet there is currently no clear guidance on preventing AI from being used to create false or misleading electoral information.
The study also cites recent spear-phishing incidents against politicians, journalists, activists and other groups, conducted by Russia- and Iran-based actors. The UK government has further reported that in March a Chinese state-affiliated group conducted online reconnaissance against the email accounts of British parliamentarians. Although no parliamentary accounts were compromised, such incidents of foreign interference continue to pose a threat to the integrity of the UK's democratic process.
The report concludes that, without clear guidance or expectations around the use of AI in elections, false or misleading electoral information will continue to be created and spread. It urges regulators to establish clear guidelines now, before the impact is felt in future elections.