Voters Increasingly Use AI as Political Advisor. A New Study Shows the Risks.
In an experiment during Japan’s February 2026 Lower House election, policy stances dominated AI chatbots’ voting guidance, and left-leaning stances caused five AI models to recommend the Japanese Communist Party. The results are driven by which sources models can access and have significant implications for democratic systems as they grapple with the future of elections in the AI era.
As people increasingly turn to large language models for political tasks, including voting guidance, the political neutrality of AI chatbots has emerged as a major policy concern. American AI chatbots are used globally, yet little is known about how they behave as tools for political decision-making, or about their potential political biases, in non-U.S. contexts.
To address this gap, researchers ran an experiment during the final week of Japan’s February 8, 2026, general election. The experiment reveals a striking pattern: when asked which party to support in the election, five major AI models from three companies overwhelmingly directed voter profiles with left-leaning policy positions toward the Japanese Communist Party (JCP). The reason, according to the researchers, has to do with the information environment AI systems can access.
These findings, published in a working paper titled “Why Do AI Models Tell Left-Wing Voters to Support the Communist Party?,” “suggest that AI voting advice may be shaped as much by the information-retrieval environment as by model training, with implications for governance frameworks that rely on U.S.-centric assumptions,” write the researchers, Andrew Hall, the Davies Family Professor of Political Economy at the Stanford Graduate School of Business and a senior fellow at the Hoover Institution, and Sho Miyazaki, a visiting researcher at the Waseda Institute of Political Economy, an incoming Ph.D. student in public policy at Harvard University, and a former predoctoral research fellow at Stanford University. Miyazaki is also a core member of the Stanford Japan Barometer, a project of the Japan Program at Stanford’s Shorenstein Asia-Pacific Research Center.
How AI Models Deliver Political Advice in Japan: A Systematic Experiment
To understand how AI models provide political recommendations in the Japanese context, Hall and Miyazaki created 36,300 synthetic voter profiles with varying gender, region, and stated political views on 12 policy issues spanning security (constitutional amendment, defense spending, espionage law), diplomacy and immigration (China relations, foreign workers, permanent residency), energy (nuclear power), economic (consumption tax, social insurance), and social domains (dual surnames, restrictions on corporate donations, Diet seat reduction).
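The paper's exact attribute levels and code are not reproduced here, but a fully crossed design of this kind is straightforward to generate programmatically. The following Python sketch uses hypothetical attribute levels (and far fewer than the study's 36,300 combinations) purely to illustrate the approach:

```python
# Illustrative sketch of a factorial voter-profile design. The attribute
# levels below are hypothetical examples, not the study's actual levels.
from itertools import product

genders = ["male", "female"]
regions = ["Hokkaido", "Kanto", "Kansai", "Kyushu"]  # example regions only
issues = {
    "constitutional_amendment": ["support", "oppose", "no opinion"],
    "defense_spending": ["increase", "decrease", "no opinion"],
    "nuclear_power": ["expand", "phase out", "no opinion"],
    # ... the study varies stances on 12 policy issues in total
}

# Cross every gender, region, and combination of issue stances.
profiles = [
    {"gender": g, "region": r, **dict(zip(issues, stances))}
    for g, r, stances in product(genders, regions, product(*issues.values()))
]
print(len(profiles))  # 2 * 4 * 3^3 = 216 profiles in this toy version
```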
They then queried five models from three AI companies (OpenAI, Google, and xAI) during Japan’s February 8, 2026, Lower House election, asking each model to recommend a political party based on the voter profiles. All five models were queried with web search enabled and could access current information.
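The study describes its querying setup only at a high level. As one hedged illustration, a web-search-enabled request to an OpenAI model could look like the sketch below; the model name, prompt wording, and output handling are placeholders, and the `web_search_preview` tool type reflects the OpenAI Responses API at the time of writing, not the study's stated configuration.

```python
# Hedged sketch: querying one model with web search enabled.
from openai import OpenAI

client = OpenAI()

def recommend_party(profile: dict) -> str:
    prompt = (
        f"A Japanese voter has this profile: {profile}. "
        "Which party should they support in the February 8, 2026 "
        "Lower House election? Answer with a single party name."
    )
    resp = client.responses.create(
        model="gpt-4o",                          # placeholder model name
        tools=[{"type": "web_search_preview"}],  # enable live web search
        input=prompt,
    )
    return resp.output_text
```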
The researchers found that policy positions overwhelmingly dominated the models' party recommendations, producing swings of 50 to 98 percentage points in party choice, compared to just 0.5 to 7 percentage points for demographic factors. Thus, demographic effects are an order of magnitude smaller than policy effects.
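The "swing" here is the change in a party's recommendation share as a single profile attribute varies. A minimal illustration of that computation, using hypothetical toy data rather than the study's results:

```python
# Toy swing computation: for one issue, the swing is the largest change
# in a party's recommendation share across the stances in the profiles.
import pandas as pd

df = pd.DataFrame({
    "nuclear_power": ["expand"] * 3 + ["phase out"] * 3,
    "recommended_party": ["LDP", "LDP", "JCP", "JCP", "JCP", "JCP"],
})

shares = (
    df.groupby("nuclear_power")["recommended_party"]
      .value_counts(normalize=True)
      .unstack(fill_value=0)
)
swing = (shares.max() - shares.min()) * 100  # percentage points per party
print(swing)
```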
Furthermore, left-leaning policy views in voter profiles caused all five AI models to converge overwhelmingly on recommending the Japanese Communist Party, even though other parties hold broadly similar positions on the issues tested. The concentration of recommendations on the JCP under left-leaning policy stances is therefore not explained by ideological distinctiveness.
In the control condition without policy input, models showed no uniform left-wing bias: three of the five models recommended the Liberal Democratic Party at high rates, and JCP shares were low for four of the five models.
“The key finding is that JCP recommendation rates rise sharply when policy positions are provided, which is the typical scenario when voters use these tools in practice,” write Hall and Miyazaki.
Information Environment Asymmetry
Why the JCP? The researchers traced the pattern to the sources AI models cite when making recommendations.
The JCP operates Akahata, a self-described daily newspaper published on a fully open website that AI web-search tools can freely access. In contrast, Japan's major news outlets have implemented technical barriers (known as robots.txt restrictions) that block AI crawlers from accessing their content, a move driven by copyright concerns.
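Concretely, robots.txt is a plain-text file at a site's root that tells crawlers which paths they may fetch. A news site blocking common AI crawlers might publish rules like these (the user-agent names are real AI crawlers; the file itself is an illustrative example, not any particular outlet's actual policy):

```
# Illustrative robots.txt for a news site blocking common AI crawlers.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Conventional search indexing can still be allowed:
User-agent: *
Allow: /
```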
The researchers found that the JCP's open website and party newspaper were among the most-cited sources in the AI models’ recommendations. Unable to distinguish between editorially independent journalism and partisan content, the models treated the JCP content as a credible news source. The information environment available to AI is thus systematically skewed toward the JCP's partisan sources, which are designed to persuade rather than to scrutinize and inform.
“A model that retrieves information from jcp.or.jp/akahata and simultaneously classifies that site as news media is not simply making a labeling error: it is operating in an information environment where the boundary between party communication and journalism is genuinely blurred, and where the consequences of that blurring flow directly into its recommendations,” Hall and Miyazaki write.
The researchers also found that incorporating X search amplified left-leaning recommendations in Japan, the opposite of expectations based on the U.S. discourse environment.
Implications for Democratic Systems in the AI Age
The study's findings carry significant implications:
- AI governance frameworks should treat content access policy and AI political neutrality as deeply intertwined domains.
- Election commissions should create nonpartisan platforms that compile structured data about party positions so that the information is comparable, party-independent, and machine-readable (a sketch of what such data could look like follows this list).
- News organizations should recognize that by imposing copyright-motivated content access restrictions, they may inadvertently cede influence over AI-mediated information to partisan actors. They may wish to consider forms of negotiated access.
- Political actors will likely begin to optimize their communication for AI.
- Users should exercise caution in using AI as a voting advisor and be conscious of its potential biases and blind spots.
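To make the machine-readable recommendation concrete, such a platform might publish party positions as structured records along the following lines. This is a hypothetical schema with abbreviated stances and placeholder URLs, not an existing standard or dataset:

```json
{
  "election": "2026-02-08-lower-house",
  "issue": "nuclear_power",
  "positions": [
    {"party": "LDP", "stance": "maintain/restart", "source_url": "https://..."},
    {"party": "JCP", "stance": "phase out", "source_url": "https://..."}
  ]
}
```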
“If AI systems are going to act as political intermediaries more broadly, two problems need to be addressed,” writes Hall in an article about the research on his Substack. “The first is informational: ensuring that the sources models read reflect the same balance of scrutiny and debate that voters encounter in a healthy media ecosystem. The second is advisory: deciding how an AI system should even translate a voter’s values into political guidance in the first place.”