30+ tech firms shape AI in elections, while 39% of Americans fear misuse. Inside Resonate's 250M voter database and the $186M campaign evolution.
Is the introduction of AI in election strategy a positive move for democracy?
Over 153 million Americans cast ballots in the 2024 presidential election, but how many were influenced by AI? Polling by the Pew Research Center leading up to the election showed that 39% of Americans believed AI would primarily be used for nefarious purposes in the presidential campaign; a mere 5% believed its use would be net positive. (The data also revealed that the concern is essentially bipartisan.) While AI continues transforming sectors including cybersecurity, supply chain logistics, and sports, AI in politics marks the most significant shift in campaign strategy since social media transformed political outreach. As the use of AI in elections evolves from an experimental tool to a campaign cornerstone, we're witnessing a transformation of democratic processes. AI companies are alert to the risks: OpenAI reported disrupting over 20 global influence operations attempting to manipulate elections using AI. Together, these factors point to a stark truth: campaign strategy, voter influence, and other pillars of the electoral process are being revolutionized by AI.
"Campaigns are using artificial intelligence to predict where voters are, what they care about, and how to reach them. But they're also writing fundraising emails… and at least in one case, making up news stories that aren't true…"
—Peter Loge, Director, George Washington University School of Media and Public Affairs
Yet political campaigns continue to leverage numerous AI applications: voter databases and profiles, sentiment analysis, outreach across language barriers, campaign messaging, social media engagement, transcript and content generation, and more. At the same time, a combination of public concern and newly introduced legislation around AI in politics is prompting efforts to establish guardrails before the technology becomes a runaway train. This is very new territory indeed.
"Campaigns are using artificial intelligence to predict where voters are, what they care about, and how to reach them. But they're also writing fundraising emails, generating first drafts of scripts, first drafts of speeches, and at least in one case, making up news stories that aren't true and putting them on a campaign website," summarizes Peter Loge, director of the George Washington University School of Media and Public Affairs.
In this new territory, the gold rush of AI in elections is accelerating. Over 30 technology firms are now marketing AI election solutions to U.S. political campaigns, each promising to revolutionize how candidates connect with voters. But beneath the surface-level promises lies a complex ecosystem of companies racing to define what AI in elections will look like and one looming question: who will profit from AI in elections?
The influence of AI in elections reached a flashpoint when Donald Trump made posts on the Truth Social platform featuring AI-generated images of Taylor Swift fans in "Swifties for Trump" shirts and a manipulated "Uncle Sam" poster featuring Swift's face. (Swift went on to endorse Kamala Harris in 2024.) The incident gained additional traction after Elon Musk shared it, highlighting a regulatory vacuum in which the Federal Election Commission has declined to vote on proposed AI rules even as 20 states enact or consider deepfake laws.
“The AI-generated deepfakes of Taylor Swift are yet another example of AI’s power to create misinformation that deceives and defrauds voters,” warned Lisa Gilbert, co-founder of Public Citizen, a nonprofit consumer advocacy organization.
While such high-profile incidents grab headlines, the real revolution in AI in elections is happening in the day-to-day operations of political campaigns.
AI in elections has found its data-driven champion in Resonate, a company that has built what might be the most comprehensive voter database in history. Its AI-powered platform tracks 250 million voter profiles across 15,000 different attributes, creating an unprecedented ability to micro-target messaging. This isn't your grandfather's polling data; it's real-time voter sentiment analysis at an unimaginable scale. Resonate also claimed another bragging right after the 2024 election: its rAI prediction technology has now accurately called three straight U.S. presidential elections.
While Resonate focuses on data analytics for AI in elections, other companies are pushing the boundaries of voter interaction, both in the U.S. and abroad. Civox, a London-based startup, has introduced an AI agent named "Ashley" that can conduct campaign calls in over 20 languages. According to CEO Ilya Mouzykantskii, this represents the future of AI in elections; the company aims to scale to tens of thousands of daily calls by the end of the year.
The transformation extends beyond voter outreach. BHuman is revolutionizing how AI in elections shapes campaign messaging with AI-generated video personalization, while Quiller, an AI copilot designed specifically for democratic campaigns, has already been adopted by over 100 organizations. The House Majority PAC, traditionally focused on television advertising, has embraced this AI-driven evolution, implementing Amplify.AI for social media engagement as part of their $186 million 2024 campaign strategy.
"There's something of an arms race mentality in this space," notes Samuel Woolley, assistant professor at the University of Texas, part of a group of researchers at the university that investigated generative AI’s use in the 2024 campaign.
The arms race of AI in elections is perhaps best exemplified by the bipartisan adoption of Otter.ai, whose AI-powered transcription and content generation tools are used by Republican and Democratic campaigns. The tool's rapid adoption speaks to a broader trend: campaigns are increasingly willing to experiment with election AI, but they're doing so quietly. Some campaigns have even made their AI technology purchases conditional on keeping them undisclosed to the public.
As AI in elections becomes increasingly prevalent, tech companies are walking a tightrope between innovation and security. OpenAI's proactive stance—monitoring and disrupting influence operations—has set an industry standard for corporate responsibility. Plus, their endorsement of the bipartisan "Protect Elections from Deceptive AI Act" signals a growing recognition that self-regulation may not be enough.
Even established tech giants are being pulled into the deployment of AI in elections, often playing defense. OpenAI's interventions against influence operations originating in Russia, China, Iran, and other countries highlight the global scope of the concern.
State governments aren't waiting for federal action on AI in elections. Fifteen states have already enacted regulations targeting AI in political advertising, with sixteen more considering similar legislation. "We have no guardrails. We have no restrictions other than the ones we put in place," notes Zelly Martin, one of the University of Texas researchers, highlighting the urgent need for comprehensive oversight of AI in elections. The work to establish guardrails for AI also closely mirrors regulatory efforts to oversee major online platforms in general—for example, the bipartisan Digital Consumer Protection Commission Act, introduced in 2023 (but not yet passed).
The rapid scaling of AI in elections presents both opportunities and obstacles. Civox's ambitious plan to scale from thousands to millions of calls demonstrates the potential reach of AI-powered campaign tools, but policies are necessary to keep misuse in check. "AI can be used to make everything faster, and not necessarily in a malicious way," explains Anthony J. DeMattee of The Carter Center. "We know political campaigns typically use AI not to create misleading information or opposition, but quickly generate arguments for their policy positions."
Another key obstacle to adoption is public skepticism, which remains high. According to 2024 data from the Pew Research Center, only 20% of Americans express confidence in tech companies' ability to prevent election interference through their platforms. This trust deficit varies by age: 35% of adults under 30 believe AI will be used equally for good and bad in elections, while just 20% of those over 65 share this balanced view.
The industry's response to these challenges in AI in elections has been mixed. Some companies, like Resonate, have doubled down on transparency, openly discussing their AI capabilities and data practices. Others operate more discreetly, with some campaigns requiring non-disclosure agreements for AI tool usage. The undisclosed market for AI in elections highlights an industry in transition. Companies are experimenting with various business models, from subscription-based services to per-campaign pricing, while grappling with transparency demands and security concerns.
"AI can be used to make everything faster, and not necessarily in a malicious way. We know political campaigns typically use AI not to create misleading information or opposition, but quickly generate arguments for their policy positions."
—Anthony J. DeMattee, The Carter Center
"In the future, we'll accept that almost all communications from our leaders will be written by AI. We'll accept that they use AI tools to make political and policy decisions. And for planning their campaigns," predicts Bruce Schneier, a fellow at the Berkman Klein Center for Internet & Society at Harvard University. This shift is already underway, with AI in elections moving from novelty to necessity.
With national elections in more than 64 nations in 2024, the impact of AI in elections extends far beyond U.S. borders. The technologies being developed and deployed in American campaigns create templates for global political operations, raising the stakes for both innovation and regulation.
The tech sector's role in preserving election integrity has never been more critical. As AI in elections becomes more sophisticated, companies must balance innovation with responsibility. Cross-platform verification systems and AI detection tools are becoming standard features rather than optional add-ons.
With multiple jurisdictions implementing different rules for AI in elections, companies face complex compliance challenges. The industry's ability to navigate these requirements while maintaining technological progress will shape the future of political campaigning.
The integration of AI in elections represents more than just technological advancement: it's a fundamental shift in how democracy operates in the digital age. In the wake of the 2024 election, the actions of these companies will help determine whether AI strengthens or undermines democratic processes.
Many campaigns are quietly experimenting with AI tools, from Resonate's voter analysis to Civox's multilingual outreach and the House Majority PAC's $186 million AI-enhanced strategy. Some organizations require non-disclosure agreements for their AI tools, while others openly embrace the technology's potential.
Only 5% of Americans believe AI will have a net positive effect on the presidential campaign, while 39% expect it to be used for negative purposes. The concern crosses party lines, with both Republicans and Democrats expressing significant worry about AI's influence on the 2024 election.
Twenty states have either enacted or are considering legislation specifically targeting AI-generated political content. Meanwhile, the Federal Election Commission has declined to vote on proposed AI rules, creating a regulatory vacuum at the national level.
Resonate dominates with its rAI platform, which tracks 250 million voter profiles across 15,000 attributes. Other major players include Civox, which offers multilingual AI calling capabilities; BHuman, which offers video personalization technology; and Quiller, which offers Democratic campaign tools. Otter.ai has achieved rare bipartisan adoption.