
New AI-powered tools produce inaccurate election information more than half the time, including answers that are harmful or incomplete, according to new research. 

The study, from AI Democracy Projects and nonprofit media outlet Proof News, comes as presidential primaries are underway across the U.S. and as more Americans turn to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.

The latest generation of artificial intelligence technology, including tools that let users almost instantly generate textual content, videos and audio, has been heralded as ushering in a new era of information by providing facts and analysis faster than a human can. Yet the new study found that these AI models are prone to suggesting voters head to polling places that don’t exist or inventing illogical responses based on rehashed, dated information. 

For instance, one AI model, Meta’s Llama 2, responded to a prompt by erroneously answering that California voters can vote by text message, the researchers found — voting by text isn’t legal anywhere in the U.S. And none of the five AI models that were tested — OpenAI’s ChatGPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral — correctly stated that wearing clothing with campaign logos, such as a MAGA hat, is barred at Texas polls under that state’s laws.

Some policy experts believe that AI could help improve elections, such as by powering tabulators that can scan ballots more quickly than poll workers or by detecting anomalies in voting, according to the Brookings Institution. Yet such tools are already being misused, such as by enabling bad actors, including governments, to manipulate voters in ways that weaken democratic processes.

For instance, AI-generated robocalls were sent to voters days before the New Hampshire presidential primary last month, with a fake version of President Joe Biden’s voice urging people not to vote in the election.

Meanwhile, some people using AI are encountering other problems. Google recently paused its Gemini AI picture generator, which it plans to relaunch in the next few weeks, after the technology produced images with historical inaccuracies and other concerning responses. For example, when asked to create an image of a German soldier during World War II, when the Nazi party controlled the nation, Gemini appeared to provide racially diverse images, according to the Wall Street Journal.

“They say they put their models through extensive safety and ethics testing,” Maria Curi, a tech policy reporter for Axios, told CBS News. “We don’t know exactly what those testing processes are. Users are finding historical inaccuracies, so it begs the question whether these models are being let out into the world too soon.”

AI models and hallucinations

Meta spokesman Daniel Roberts told the Associated Press that the latest findings are “meaningless” because they don’t precisely mirror the way people interact with chatbots. Anthropic said it plans to roll out a new version of its AI tool in the coming weeks to provide accurate voting information. 

In an email to CBS MoneyWatch, Meta pointed out that Llama 2 is a model for developers — it isn’t the tool that consumers would use. 

“When we submitted the same prompts to Meta AI – the product the public would use – the majority of responses directed users to resources for finding authoritative information from state election authorities, which is exactly how our system is designed,” a Meta spokesperson said.

“[L]arge language models can sometimes ‘hallucinate’ incorrect information,” Alex Sanderford, Anthropic’s Trust and Safety Lead, told the AP.

OpenAI said it plans to “keep evolving our approach as we learn more about how our tools are used,” but offered no specifics. Google and Mistral did not immediately respond to requests for comment.

“It scared me”

In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested by researchers wrongly asserted that voters would be blocked from registering weeks before Election Day.

“It scared me, more than anything, because the information provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who participated in last month’s testing workshop.

Most adults in the U.S. fear that AI tools will increase the spread of false and misleading information during this year’s elections, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

Yet in the U.S., Congress has yet to pass laws regulating AI in politics. For now, that leaves the tech companies behind the chatbots to govern themselves.

—With reporting by the Associated Press.

