
AI’s Impact on Elections: Deepfakes Erode Trust and Democratic Integrity

As the 2024 election season unfolds, concerns about the potential for artificial intelligence to disrupt the process have gained significant attention. While the most catastrophic predictions, such as large-scale AI-driven disinformation campaigns and direct attacks on election systems, have not materialized, AI-generated deepfake content is already affecting the political landscape.

The ability of AI to create deceptive videos, images, and audio is gradually undermining public trust by blurring the lines between reality and fiction, increasing political division, and weakening confidence in democratic systems. To tackle the threats posed by AI, it’s essential to examine both the immediate effects on U.S. elections and the broader implications over time.

Significant incidents have drawn attention, such as robocalls in New Hampshire in which an AI-generated voice imitating President Biden discouraged voter turnout, and election misinformation spread by X's AI chatbot Grok. Notably, Russian operatives produced deepfakes targeting Vice President Kamala Harris, including a video falsely depicting her making inflammatory comments, which was shared by Elon Musk on X. Another case involved a former Palm Beach County deputy sheriff, now in Russia, who helped create fake videos, including one falsely accusing Minnesota Gov. Tim Walz of assault.

Globally, elections face similar challenges. In India’s 2024 elections, deepfakes of celebrities criticizing Prime Minister Narendra Modi circulated widely on WhatsApp and YouTube. During Brazil’s 2022 presidential race, deepfakes and bots spread misleading narratives on WhatsApp. While the direct impact on election outcomes remains unquantified, these examples demonstrate AI’s expanding role in shaping political dialogue, potentially eroding trust and influencing voter perceptions.

AI-driven disinformation’s long-term effects extend beyond trust erosion, creating an environment where truth is contested. As deepfakes become more sophisticated, they enable bad actors to dismiss genuine evidence as fake, complicating public discourse. This dynamic, sometimes referred to as the liar’s dividend, allows influential figures to evade accountability, ultimately weakening democratic institutions and increasing societal vulnerability to manipulation.

The growing challenges highlight the need for increased transparency and accountability. Social media platforms and AI developers must take steps to identify AI-generated content. Techniques like watermarking can help voters differentiate between genuine and manipulated media. Furthermore, platforms should invest in trust and safety teams, many of which have been significantly downsized.

The problem extends beyond visible platforms to encrypted services like WhatsApp and Telegram, which are increasingly used for news and whose design, by limiting outside oversight, makes the spread of disinformation harder to track. This dynamic recalls past lessons, such as the 2016 U.S. presidential election, where the extent of foreign interference became evident only years later.

The core question remains: How can we protect democratic integrity amid rapid technological advancements? Securing elections requires a multifaceted strategy, including legislative measures for transparency, voter education, and collaboration among tech companies, policymakers, and civil organizations. Proactive efforts are essential to address systemic vulnerabilities enabling AI-driven interference.

One potential solution is establishing ethical guidelines for AI developers, similar to standards in healthcare and finance that prioritize safety and fraud prevention. Such frameworks, despite their imperfections, offer a basis for accountability and risk management. Ethical guidelines for AI could require clear labeling of AI-generated content to enhance transparency, and regulation could obligate platforms that host deepfakes to adhere to these standards.

AI-driven disinformation, from deepfakes targeting officials to voter manipulation campaigns, exposes critical weaknesses in democratic systems. Addressing these threats demands more than reactive measures; it requires a coordinated and urgent response. Social media platforms, AI developers, and policymakers must implement transparency measures, reinforce trust and safety protocols, and establish accountability for AI-generated content. Without decisive action, AI-driven deception risks becoming a permanent feature of political campaigns, threatening the very foundation of democratic governance. The integrity of elections hinges on confronting this challenge before it becomes the norm.