Elections in the LLM Era: Persuasion at Scale and the Battle for Ranked Context

In the span of a few election cycles, political persuasion has shifted from television spots and door knocks to message-tested Facebook ads and microtargeted video. The next phase is already here - and it is conversational.

A series of rigorous new studies led by David Rand, Gordon Pennycook and collaborators shows that large language model (LLM) chatbots can meaningfully change voters' views on candidates and policies after a single short conversation. In some settings, they deliver several times the persuasive lift of traditional political advertising.

At the same time, separate work from the same research teams finds that similar AI systems can reduce belief in conspiracy theories when they engage people in factual, tailored dialogue.

Taken together, these findings confirm what political professionals have intuited since 2024: LLMs are no longer neutral infrastructure. They are active participants in political communication, capable of nudging opinions on high-salience issues in either direction.

For any serious campaign or political organization, this has a clear implication. It is no longer enough to manage polling, media and search. You must understand and shape how you show up inside the models themselves - the new information layer where voters increasingly ask their questions.

Inside The Cornell Research On AI Political Persuasion In The 2024 Elections

The core Cornell study, published in Nature, ran large-scale randomized experiments in three national elections: the 2024 United States presidential race, the 2025 Canadian federal election and the 2025 Polish presidential election.

Participants were asked about their political preferences, then assigned to have a short back-and-forth text dialogue with an AI chatbot that argued for one of the two leading candidates. After the conversation, they reported their preferences again. Because assignment to pro-candidate chatbots was random, differences in attitude shift between conditions can be interpreted causally.
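
To make that logic concrete, here is a minimal sketch of the estimate such a design supports, using simulated feeling-thermometer data rather than anything from the study (the sample size, effect sizes and noise levels below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2300  # illustrative sample size, not the study's actual data

# Simulated 0-100 feeling-thermometer ratings toward Candidate A,
# measured before the conversation.
pre = rng.uniform(0, 100, n)

# Random assignment: roughly half the participants chat with a pro-A bot,
# the rest with a pro-B bot.
pro_a = rng.random(n) < 0.5

# Illustrative post-conversation ratings: the pro-A bot nudges ratings up,
# the pro-B bot nudges them down, plus individual-level noise.
post = pre + np.where(pro_a,
                      rng.normal(4.0, 10.0, n),
                      rng.normal(-4.0, 10.0, n))
post = post.clip(0, 100)

# Because assignment is random, the difference in average pre-to-post shift
# between conditions is an unbiased estimate of the chatbots' causal effect.
shift = post - pre
effect = shift[pro_a].mean() - shift[~pro_a].mean()
print(f"Estimated persuasion gap between conditions: {effect:.1f} points")
```

The same logic, applied to real survey responses instead of a simulation, is what lets the researchers read the observed gaps as causal effects rather than correlations.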

Key details from the United States experiment:

  • More than 2,300 Americans took part about two months before Election Day
  • The chatbot advocated for either Kamala Harris or Donald Trump
  • Attitudes were measured on a 0 to 100 feeling thermometer
  • Trump-leaning voters who spoke with a pro-Harris model moved 3.9 points toward Harris on average
  • Harris-leaning voters who spoke with a pro-Trump model shifted a smaller but still meaningful amount toward Trump

The 3.9-point movement toward Harris among likely Trump voters is roughly four times larger than the average effect of traditional TV ads in high-quality studies of the 2016 and 2020 U.S. presidential campaigns.

Importantly, the chatbots did not need deep psychological tricks. When researchers analyzed the dialogues, two patterns dominated: the models were polite, and they provided a steady stream of factual claims favoring their assigned candidate's position. When the researchers instructed an otherwise similar model not to make factual claims, its persuasive power dropped sharply, showing that information density - not just style - was doing the work.

The team also took care with research ethics. Participants were told they were talking to an AI, the direction of persuasion was randomized so that there was no net effect on overall vote choice, and everyone was debriefed afterward. That kind of transparency and randomized design is part of why these results are drawing attention from both academics and practitioners.

Impact Of AI Chatbots On Canadian And Polish Elections

The same Nature paper extended the experimental design to the 2025 Canadian federal and 2025 Polish presidential elections, again asking chatbots to argue for one of the top two contenders.

Here, the effects were larger.

The researchers found that chatbots moved opposition voters' attitudes and stated voting intentions by around 10 percentage points in both countries. That is a "shockingly large" effect in the context of modern presidential politics, where tightly fought races are decided on margins far smaller than 10 points and conventional persuasion is typically very hard.

In all three countries, the biggest shifts came among voters who started out opposed to the candidate the chatbot was supporting - exactly the group campaigns have traditionally found most resistant to persuasion. Opposition voters appear especially exposed when they are willing to enter a structured conversation with an AI that patiently answers questions and presents piece after piece of favorable evidence.

A simplified view of the cross-national results:

| Country | Election | Sample Size (Approx.) | Average Shift Among Opposition Voters | Notes |
| --- | --- | --- | --- | --- |
| United States | 2024 presidential | 2,300+ | ~3.9 points toward Harris among Trump leaners | About 4x typical TV ad effects |
| Canada | 2025 federal | 1,500+ | ~10 percentage points toward the chatbot's candidate | Larger shifts than in the U.S. |
| Poland | 2025 presidential | 2,100+ | ~10 percentage points toward the chatbot's candidate | Similar to Canada |

The pattern is clear: in controlled settings, AI chatbots already meet or exceed the persuasive lift that campaigns are used to getting from expensive broadcast media, especially among opposition voters.

How LLMs Influence Voter Opinions Compared To Traditional Ads

For decades, careful field experiments have suggested that campaign persuasion effects are usually small. Shifting a few percentage points in vote intention after repeated exposure to ads is more common than double-digit swings.

Against that backdrop, the Cornell team's finding that a single chatbot conversation can produce a fourfold larger shift than tested TV ads is striking. It reflects two structural differences between LLM dialogues and conventional campaigns:

  1. Interactivity - Instead of passively watching a 30-second spot, the voter can ask follow-ups, push back and get tailored replies.
  2. Information volume - The chatbot can generate many distinct reasons and examples in a few minutes, drawing on a wide underlying knowledge base.

The Science study on "levers of political persuasion" reinforces this second point. Across nearly 77,000 participants in the United Kingdom and 707 political issues, models were most persuasive when they were prompted to "pack their arguments with facts and evidence" and when they received extra post-training specifically to be more persuasive.

The most persuasion-optimized model shifted opposition participants by more than 25 points on a 0 to 100 agreement scale with a political statement - far beyond the single-point effects typical of political advertising.

However, there is a catch that matters for anyone concerned with election integrity.

As models become more persuasive, they tend to become less accurate. The same post-training and prompting strategies that encourage information-dense arguments also increase the share of claims that are misleading or false. That trade-off between accuracy and persuasiveness sits at the center of the policy debate.

Can AI Chatbots Reduce Belief In Conspiracy Theories Through Dialogue?

The same families of models that can move voters toward a candidate can also be used to counter harmful misinformation.

In another recent Science paper, Costello, Pennycook and Rand asked 2,190 Americans to describe a conspiracy theory they believed, explain why they thought it was true and then engage in a three-round conversation with GPT-4 Turbo. The AI's goal in the treatment condition was explicit: "very effectively persuade" the participant that the conspiracy was not true, by addressing each piece of evidence they raised.

The results were encouraging:

  • Belief in the target conspiracy dropped by about 20 percent on average immediately after the conversation
  • The effect persisted at a two-month follow-up
  • Participants also became less conspiratorial in general, not just about the specific theory discussed
  • Intentions to push back against conspiracy content on social media increased

A fact-checker who reviewed the AI's claims found that 99.2 percent were accurate, 0.8 percent were misleading and none were outright false.

These findings show that dialogic AI is not inherently corrosive. When configured with clear objectives, accurate reference material and careful evaluation, it can help people reconsider unfounded beliefs while leaving genuine conspiracies (such as real political scandals) untouched.

The contrast with the electoral persuasion studies is telling. When models are pushed to maximize persuasion without strong accuracy constraints, they tend to stretch the truth. When they are given a correction goal and held to high factual standards, they can act as scalable debunkers.

The Role Of Misinformation And Omission In AI Arguments

Across both the Nature and Science persuasion papers, one theme recurs: factual claims are central to why LLMs are persuasive, yet those claims are not always fully reliable.

In the election experiments, chatbots instructed to advocate for right-leaning candidates made more inaccurate claims than those supporting left-leaning candidates in all three countries studied, mirroring earlier findings that right-leaning social media accounts tend to share more inaccurate information.

Even when individual statements are technically correct, models can persuade by omission - presenting a long list of favorable facts while ignoring significant counterevidence that a human expert would raise for balance.

In the Science "levers" study, the same prompts and post-training strategies that made models more persuasive also increased the rate of misleading or false claims, because the models were pushed to produce ever more "facts" even when high quality evidence was scarce.

For political organizations, this creates a double challenge:

  • Voters are likely to be influenced by LLMs partly because of the dense factual style of their arguments
  • The factual substrate of those arguments depends on whatever material the models have ingested and how they are instructed to frame it

If high quality, accurate content about your candidate or issue is scarce in the training and retrieval pool, the model has less to work with when a voter asks a question. That is where the new information layer comes in.

LLMs As The New Information Layer For Elections

LLMs are being adopted at a pace that makes social media uptake look slow. Traffic to traditional news and search sites is already in decline, and by early 2025 more than half of U.S. adults reported using chatbots like ChatGPT to look up information.

When a voter now asks, "What has this candidate actually done on health care?", they may not type it into a search engine at all. Increasingly, they ask an AI assistant on their phone, in their browser or embedded inside another app.

That assistant does three things that matter for campaigns (sketched in code after this list):

  1. Selects sources - It silently chooses a small set of documents and data to read first.
  2. Synthesizes a narrative - It writes an answer that weaves those sources into a coherent explanation, often with a persuasive slant when it is asked to "make the case" for or against something.
  3. Shapes follow-up - It proposes next questions and calls to action that direct the rest of the interaction.
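
A stripped-down sketch of that three-step pipeline, using placeholder data structures and functions rather than any real assistant's internals (every name below is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str
    relevance: float  # score a real assistant's retriever would assign

def select_sources(question: str, index: list[Source], k: int = 5) -> list[Source]:
    """Step 1: silently pick the handful of documents the model reads first."""
    return sorted(index, key=lambda s: s.relevance, reverse=True)[:k]

def synthesize_answer(question: str, sources: list[Source]) -> str:
    """Step 2: weave the selected sources into one narrative answer.
    A real assistant would hand the sources to an LLM; here we only cite them."""
    cited = ", ".join(s.url for s in sources)
    return f"Answer to {question!r}, drawing on: {cited}"

def shape_follow_up(question: str) -> list[str]:
    """Step 3: propose next questions that steer the rest of the interaction."""
    return [
        "How does this compare with the opposing candidate's record?",
        "What do independent fact-checkers say about these claims?",
    ]
```

The strategic point sits in step 1: content that never makes the top-k cut cannot shape the narrative in step 2 or the follow-ups in step 3.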

Researchers auditing election-related responses from major models in 2024 found that answers shift over time, vary with demographic cues ("I am a woman" or "I am Black") and sometimes reflect implicit assumptions about which issues matter most to different groups.
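
One way to check for that kind of variation is to ask the same election question under different demographic framings and compare the answers side by side. A minimal sketch, assuming the openai Python client and an API key in the environment (the model name and framings are placeholders):

```python
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Which issues should matter most to me in this election?"
FRAMINGS = ["", "I am a woman. ", "I am Black. ", "I am a retired veteran. "]

answers = {}
for framing in FRAMINGS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": framing + QUESTION}],
        temperature=0,  # reduce run-to-run noise so differences reflect the framing
    )
    answers[framing.strip() or "no framing"] = response.choices[0].message.content

for label, answer in answers.items():
    print(f"--- {label} ---\n{answer[:300]}\n")
```

Comparing the stored answers across framings gives a quick, repeatable view of the kind of variation the auditors describe.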

In other words, there is now an invisible first draft of political communication being written by models that voters treat as trusted assistants. Whether that draft works for or against you depends on what the models see when they reach for context - and that depends on how you are ranked.

Why Campaigns Must Own Their Rankings In LLMs

In classic search, campaigns fought to appear on the first page of results for key issues and for their candidate's name. In the LLM era, the "first page" is the answer itself.

When a model answers a voter's question, it typically draws most heavily from a handful of sources that it considers authoritative and relevant. If your campaign's content is not among those - or if the top-ranked material about you is outdated, low quality or hostile - the model will still give an answer. It just will not be your answer.

The Cornell and Science results sharpen why this matters:

  • Information-dense dialogue is persuasive - Models that provide many factual claims move opposition voters more than those that stay vague.
  • Opposition voters are persuadable - People who start out disliking your candidate still shift meaningfully when they engage in AI conversations.
  • Persuasion can scale - Once models are configured and trained, there is little marginal cost to running thousands or millions of such dialogues.

If a voter's first substantive interaction with your candidate is through an LLM, that interaction will be shaped by how the model ranks, summarizes and argues from available content. Treating that as a black box is no longer tenable.

Owning your rankings in LLMs means (see the audit sketch after this list):

  • Auditing what major models currently say about your candidate, opponents and priority issues
  • Identifying which external sources they seem to draw on most heavily
  • Creating accurate, well-structured content that directly addresses common voter questions and is easy for models to ingest and cite
  • Monitoring changes over time as models update, elections approach and public discourse shifts
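
A starting-point sketch of the first two bullets, again assuming an OpenAI-compatible client; the queries are hypothetical and the URL extraction is naive string matching rather than real citation metadata:

```python
import datetime as dt
import json
import re

from openai import OpenAI

client = OpenAI()  # any chat-completions-compatible client would work similarly

# Hypothetical priority queries; a real audit would cover candidates,
# opponents and priority issues.
PRIORITY_QUERIES = [
    "What has Candidate X actually done on health care?",
    "What is Candidate X's record on the economy?",
]

def audit_once(queries: list[str], model: str = "gpt-4o-mini") -> list[dict]:
    """Ask each priority query once and record the answer plus any URLs it mentions."""
    records = []
    for query in queries:
        answer = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": query}],
        ).choices[0].message.content
        records.append({
            "timestamp": dt.datetime.now(dt.timezone.utc).isoformat(),
            "model": model,
            "query": query,
            "answer": answer,
            # Naive source extraction; real audits would use citation or tool metadata.
            "cited_urls": re.findall(r"https?://\S+", answer),
        })
    return records

# Append each run to a log so answers can be compared across dates and model updates.
with open("llm_audit_log.jsonl", "a") as log:
    for record in audit_once(PRIORITY_QUERIES):
        log.write(json.dumps(record) + "\n")
```

Because each run appends to the same log, the fourth bullet - monitoring change over time - becomes a matter of diffing successive entries for the same query.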

Ethical Guardrails And Mitigation Strategies

The research community is moving quickly to anticipate and mitigate misuse. The Cornell studies randomized the direction of persuasion and fully debriefed participants to avoid real-world partisan impact, and they fact-checked arguments with both AI and politically balanced human raters.

Scholars and policy groups are now calling for:

  • Clear rules on using AI for political campaigning, including disclosure when voters are interacting with bots
  • Restrictions on training explicitly persuasion-optimized models for high-stakes political tasks
  • Independent audits of major models' election-related outputs over time
  • Investments in factual, debunking uses of AI like the conspiracy belief interventions, which showed durable benefits without spreading falsehoods

Campaigns and advocacy groups have a role here too. Treating AI persuasion purely as a tactical advantage while ignoring its broader impact on democratic norms is shortsighted. The same infrastructure that helps you make your case today will shape the informational environment facing your supporters and opponents in future cycles.

Where Rankbee Fits In This New Battleground

Rankbee exists because this new information layer is already transforming how voters learn about politics.

Our focus is simple: help political organizations understand and improve how they appear inside major LLMs and AI search experiences.

That means:

  • Mapping what different models currently say about your candidates and causes for the queries that matter most
  • Identifying which sources they rely on and where critical gaps or distortions appear
  • Guiding you on the content, structure and issue coverage that increase the odds models will surface accurate, high quality arguments from your side
  • Tracking shifts across models and over time so you are not surprised when updates change how answers are framed

The academic work summarized above shows both the promise and risk of AI persuasion - from ten-point shifts in opposition voting intentions to durable reductions in conspiracy beliefs. The connecting thread is control over information.

Campaigns that treat LLMs as a marginal channel will find their story told for them by systems they never bothered to inspect. Campaigns that treat LLM rankings as a core strategic asset will meet voters where they already are - in AI chats, assistants and generative search - with arguments that are factual, persuasive and aligned with democratic values.

Rankbee is a services and tools partner for any campaign looking to win elections.
