
Election Integrity in the AI Era: Challenges and Innovations

Ryan Cox

Co-Head of AI, Synechron


This November's election is set to be one of the most consequential in US history, not only because of the candidates' divergent platforms but also because it is the first US election since the widespread introduction of generative AI.

The growing adoption of this technology means it will become further ingrained in our electoral system. As with any new technology, a great deal remains unknown, which brings new risks and dangers, especially for an electorate still learning how to identify fake content. Earlier this year, a deepfake robocall falsely attributed to Joe Biden targeted New Hampshire primary voters, and a poll from this past May found that 55 percent of voters were worried AI would undermine the election.

While the technology has ushered in advancements across industries, this election cycle will provide a more comprehensive evaluation of its impact. We will discover how AI will reshape strategies used to sway voters and what its ability to both innovate and mislead means for the integrity of democracy.

Perhaps the most prevalent concern has been deepfakes – digital forgeries generated with AI.

In a few minutes, it is now possible to make any major political figure appear to say whatever you want. Completely fabricated photos or footage can be added to enhance the illusion, producing audio and video clips disturbingly indistinguishable from genuine media.

During election season, the likelihood of these being used to spread misinformation is a pressing concern. In the US, states such as Florida and Colorado have attempted to curtail the rapidly growing use of deepfake videos, passing bills requiring campaigns to disclose the use of deepfakes in political ads.

Last October in the UK, a deepfake clip of Labour leader Keir Starmer swearing at staffers went viral. Even though MPs from all parties condemned it, X refused to remove the clip.

This could be just a taste of what we will see in the US if we fail to properly identify and regulate this content.

The threat posed by deepfakes is multifaceted. They can undermine public trust in media, making it difficult for voters to separate truth from fiction. This erosion of trust can lead to widespread apathy, which, in turn, threatens the democratic process. Manipulating candidate images and statements can skew voter perceptions and influence election outcomes.

It is useful to examine what's been happening abroad to see what might be ahead in our election.

In India this past spring, the world’s biggest election was awash in digital holograms, AI-generated images and personalized phone calls in dozens of languages. Deepfakes of Bollywood actors criticizing Prime Minister Narendra Modi were distributed online, leading to police involvement and multiple arrests. The prime minister himself commented, saying the clips were generated “to create tension in society.”

India’s Election Commission issued an advisory urging political parties to refrain from employing these tactics. Yet with these tools so easy to access, it is hard to see them being controlled without comprehensive legislation and a thorough enforcement mechanism.

Problems have not been limited to one region – these technologies have surfaced in elections around the world.

In Slovakia, a liberal candidate was falsely depicted in a fabricated audio recording discussing plans to increase alcohol taxes and manipulate election results. In Nigeria, a fake video clip of a candidate claiming he would rig ballots circulated widely.

Media vigilance and the proactive role of fact-checkers are critical.

Both companies and governments have taken action to preserve the integrity of elections.

OpenAI launched a ‘disinformation detector’ with a 98.8 percent success rate in identifying fake images generated by its own DALL-E 3 model. At the Munich Security Conference, 20 major technology companies signed a new accord to combat the deceptive use of AI in the 2024 elections.
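
To make the detection side concrete, here is a minimal, hypothetical sketch of how a platform might route uploads through such a classifier. OpenAI has not published a general-purpose API for its detector, so the classify_image function below is an invented stand-in for any model that scores how likely an image is to be AI-generated; the threshold and review workflow are illustrative assumptions, not any vendor's actual interface.

```python
# Illustrative sketch only. classify_image is a hypothetical stand-in for
# any AI-image detector; it returns the probability that an image is
# AI-generated. No real vendor API is assumed here.

def classify_image(image_bytes: bytes) -> float:
    """Hypothetical detector: score in [0, 1] that the image is AI-generated."""
    # A real integration would call a trained classifier or a vendor service.
    return 0.0  # placeholder score

def needs_human_review(image_bytes: bytes, threshold: float = 0.9) -> bool:
    """Route high-scoring uploads to human review rather than auto-removal."""
    return classify_image(image_bytes) >= threshold

if __name__ == "__main__":
    upload = b"..."  # stand-in for raw image bytes
    print("Needs review:", needs_human_review(upload))
```

Routing flags to human review rather than automatic takedown reflects the fact that even a detector with a 98.8 percent success rate will mislabel some genuine images at election-season volumes.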

These initiatives are non-partisan, and they don’t seek to curb free expression. Rather, they seek to uphold the integrity of information and to prevent AI technologies from being used to manipulate electoral processes.

We need a coalition of AI researchers, operators of customer-facing platforms, and technology companies working collaboratively toward solutions.

The positive role AI can play in elections

While we’ve discussed concerns around AI, there’s substantial potential for new technology to introduce more transparency in the election process and create a more engaged electorate.

Consider the following possible use cases:

  • Enhanced voter registration accuracy: AI could drastically improve the accuracy and efficiency of voter registration by automating data verification and detecting anomalies – minimizing the risk of fraud and keeping voter rolls up to date (a minimal sketch of this idea follows this list).
  • Real-time fraud detection: During elections, AI could monitor and analyze voting patterns to allow for immediate responses and interventions, alerting election integrity officials about unexpected behavior.
  • Optimized resource allocation: Logistical challenges can affect voter turnout. AI could optimize the allocation of voting machines, electoral staff, and security personnel by analyzing past data and current conditions.
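
To illustrate the first use case above, the sketch below applies an off-the-shelf anomaly detector (scikit-learn's IsolationForest) to synthetic registration records. The features, thresholds, and data are invented for illustration only; a real system would rely on vetted registration data, far richer signals, and human adjudication of anything flagged.

```python
# A minimal sketch of anomaly detection on voter-registration data.
# The two features and all values here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic records: [registrations_per_address, days_since_last_update]
normal = rng.normal(loc=[1.2, 400.0], scale=[0.5, 120.0], size=(1000, 2))
suspicious = np.array([[25.0, 2.0], [40.0, 1.0]])  # bulk filings at one address
records = np.vstack([normal, suspicious])

# Unsupervised model; we assume roughly 0.5 percent of records are anomalous.
model = IsolationForest(contamination=0.005, random_state=0).fit(records)
labels = model.predict(records)  # -1 = anomaly, 1 = normal

flagged = records[labels == -1]
print(f"Flagged {len(flagged)} of {len(records)} records for manual review")
```

In practice, flagged records would feed an audit queue for election officials to verify, not trigger any automatic removal from the rolls.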

We shouldn’t approach our first GenAI election with fear. Rather we need to be both vigilant in the face of possible bad behavior and open to useful applications.

There’s an adage: “Politics is dirty.” But AI offers a chance for a cleaner political system if it is not abused by wrongdoers. Hopefully, as legislation around the use of AI comes into force, we will see the benefits of this technology realized without compromising democratic governance.

The Author

Ryan Cox

Co-Head of AI at Synechron

Ryan Cox is a Senior Director and Synechron’s Co-Head of Artificial Intelligence. Synechron partners with companies to explore the potential of AI technology to revolutionize their business. Its AI practice specializes in large language models, generative AI technologies, AI strategy and architecture, and AI research and development, and it ensures that the AI systems and solutions deployed at clients’ sites are ethical, safe and secure. Contact Ryan on LinkedIn or via email.
