Study Warns Highly Realistic AI-Generated Deepfakes Could Enhance ‘Electoral Interference’

A study has revealed that highly realistic AI-generated deepfakes containing false allegations about political candidates could enhance “electoral interference.” The research, carried out by The Alan Turing Institute and the Centre for the Analysis of Social Media at the think tank Demos, found that AI tools can be used to create false images or videos of political candidates making controversial statements or engaging in contentious activities, while AI-generated voice clones can make candidates appear to withdraw from the election race, endorse rival candidates, or allege ballot rigging. According to the study, some of these threats may originate from hostile actors, while others may come from political parties themselves.

The researchers cautioned that in more extreme scenarios, “customised AI malware” could result in voting systems being manipulated or votes being misreported. However, the report noted that the UK is “more insulated against these types of threats compared to countries such as the US, owing to the continued use of paper-based voting and human ballot counting.” The report also highlighted that, on close examination of cases involving AI-generated deepfakes, there has been no clear impact on election results. Nevertheless, it suggests much can still be done to strengthen short-term resilience against AI-based election threats.

The study called on the Electoral Commission to work with Ofcom and the Independent Press Standards Organisation to publish new guidance for media reporting on content alleged or confirmed to be AI-generated. It also recommended seeking voluntary agreements from political parties setting out how they will use AI for campaigning, and suggested requiring AI-generated election material to be clearly marked as such.

The study’s lead author, Sam Stockwell, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period.
Right now, there is no clear guidance or expectations for preventing AI from being used to create false or misleading electoral information. That’s why it’s so important for regulators to act quickly before it’s too late.”

Alexander Babuta, director of the Centre for Emerging Technology and Security (CETaS), said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as there is no clear evidence worldwide of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face.”

The report also warned that Russia- and Iran-based actors had been conducting spear-phishing campaigns, that is, targeted attempts to steal sensitive information via email, against politicians, journalists, activists, and other groups. Furthermore, in March, the National Cyber Security Centre (NCSC) assessed with “high certainty” that a Chinese state-affiliated group had conducted online reconnaissance activities, such as collecting information, against the email accounts of UK parliamentarians who had been critical of China’s activities. The government reported that no parliamentary accounts were successfully compromised.

In conclusion, while there is no clear evidence of elections being compromised through the use of AI-generated deepfakes, the UK must take these threats seriously and act quickly to prevent possible interference in future elections. With AI technology advancing rapidly and becoming increasingly accessible, actors with malicious intent may seek to exploit AI-generated deepfakes to manipulate the public. As such, regulators must work to ensure that the public can have faith in the integrity of the UK’s democratic processes.