Poll shows most US adults think AI will add to election misinformation in 2024
NEW YORK (AP) — The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.
Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.
By comparison, 6% think AI will decrease the spread of misinformation while one-third say it won’t make much of a difference.
“Look what happened in 2020 — and that was just social media,” said 66-year-old Rosa Rangel of Fort Worth, Texas.
Rangel, a Democrat who said she had seen a lot of “lies” on social media in 2020, said she thinks AI will make things even worse in 2024 — like a pot “brewing over.”
Just 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Still, there’s a broad consensus that candidates shouldn’t be using AI.
When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).
The sentiments are supported by majorities of Republicans and Democrats, who agree it would be a bad thing for the presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voter questions (56% of Republicans and 63% of Democrats).
The bipartisan pessimism toward candidates using AI comes after it already has been deployed in the Republican presidential primary.
In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the country if President Joe Biden is reelected. It used fake but realistic-looking photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad disclosed in small lettering that it was generated by AI.
Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look as if former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease specialist who oversaw the nation’s response to the COVID-19 pandemic.
Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.
“I think they should be campaigning on their merits, not their ability to strike fear into the hearts of voters,” said Andie Near, a 42-year-old from Holland, Michigan, who typically votes for Democrats.
She has used AI tools to retouch images in her work at a museum, but she said she thinks politicians using the technology to mislead can “deepen and worsen the effect that even conventional attack ads can cause.”
College student Thomas Besgen, a Republican, also disagrees with campaigns using deepfake sounds or imagery to make it seem as if a candidate said something they never said.
“Morally, that’s wrong,” the 21-year-old from Connecticut said.
Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he is in favor of banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.
The Federal Election Commission is currently considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 election.
While skeptical of AI’s use in politics, Besgen said he is enthusiastic about its potential for the economy and society. He is an active user of AI tools such as ChatGPT to help explain history topics he’s interested in or to brainstorm ideas. He also uses image-generators for fun — for example, to imagine what sports stadiums might look like in 100 years.
He said he typically trusts the information he gets from ChatGPT and will likely use it to learn more about the presidential candidates, something that just 5% of adults say they are likely to do.
The poll found that Americans are more likely to consult the news media (46%), friends and family (29%), and social media (25%) for information about the presidential election than AI chatbots.
“Whatever response it gives me, I would take it with a grain of salt,” Besgen said.
The vast majority of Americans are similarly skeptical toward the information AI chatbots spit out. Just 5% say they are extremely or very confident that the information is factual, while 33% are somewhat confident, according to the survey. Most adults (61%) say they are not very or not at all confident that the information is reliable.
That’s in line with many AI experts’ warnings against using chatbots to retrieve information. The artificial intelligence large language models powering chatbots work by repeatedly selecting the most plausible next word in a sentence, which makes them good at mimicking styles of writing but also prone to making things up.
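The mechanism the experts describe can be illustrated with a toy sketch: at each step, the model scores every candidate continuation and greedily takes the most plausible one. The tiny probability table below is invented purely for illustration; real chatbots use neural networks over tens of thousands of tokens, but the selection principle is the same.

```python
# Toy illustration of next-word selection in a language model.
# The probabilities here are made up for demonstration purposes.
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "idea": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "quietly": 0.3},
}

def generate(start, steps):
    """Greedily extend `start` by always choosing the likeliest next word."""
    words = [start]
    for _ in range(steps):
        candidates = bigram_probs.get(words[-1])
        if not candidates:
            break
        # The model picks the most *plausible* continuation, not the most
        # *truthful* one -- which is why chatbots can fluently make things up.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the", 3))  # -> "the cat sat down"
```

Because plausibility, not accuracy, drives each choice, a fluent-sounding answer carries no guarantee of being factual.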
Adults associated with both major political parties are generally open to regulations on AI. They responded more positively than negatively toward various ways to ban or label AI-generated content that could be imposed by tech companies, the federal government, social media companies or the news media.
About two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads, while a similar number want technology companies to label all AI-generated content made on their platforms.
Biden set in motion some federal guidelines for AI on Monday when he signed an executive order to guide the development of the rapidly progressing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance to label and watermark AI-generated content.
Americans largely see preventing AI-generated false or misleading information during the 2024 presidential elections as a shared responsibility. About 6 in 10 (63%) say a lot of the responsibility falls on the technology companies that create AI tools, but about half give a lot of that duty to the news media (53%), social media companies (52%), and the federal government (49%).
Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility for technology companies, the news media and the federal government.
____
The poll of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, designed to represent the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.
____
O’Brien reported from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington, D.C., contributed to this report.
____
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.