Deepfake of principal’s voice is the latest case of AI being used for harm
The most recent criminal case involving artificial intelligence emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice.
The case is yet another reason why everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deep-fake technology, experts say.
“Everybody is vulnerable to attack, and anyone can do the attacking,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.
Here’s what to know about some of the latest uses of AI to cause harm:
AI HAS BECOME VERY ACCESSIBLE
Manipulating recorded sounds and images isn’t new. But the ease with which someone can alter information is a recent phenomenon. So is its ability to spread quickly on social media.
The fake audio clip that impersonated the principal is an example of a subset of artificial intelligence known as generative AI. It can create hyper-realistic new images, videos and audio clips. It has become cheaper and easier to use in recent years, lowering the barrier for anyone with an internet connection.
“Particularly over the last year, anybody — and I really mean anybody — can go to an online service,” said Farid, the Berkeley professor. “And either for free or for a few bucks a month, they can upload 30 seconds of someone’s voice.”
Those seconds can come from a voicemail, social media post or surreptitious recording, Farid said. Machine learning algorithms capture what a person sounds like. And the cloned speech is then generated from words typed on a keyboard.
The technology will only get more powerful and easier to use, including for video manipulation, he said.
WHAT HAPPENED IN MARYLAND?
Authorities in Baltimore County said Dazhon Darien, the athletic director at Pikesville High, cloned Principal Eric Eiswert’s voice.
The fake recording contained racist and antisemitic comments, police said. The sound file appeared in an email in some teachers’ inboxes before spreading on social media.
The recording surfaced after Eiswert raised concerns about Darien’s work performance and alleged misuse of school funds, police said.
The bogus audio forced Eiswert to go on leave, while police guarded his house, authorities said. Angry phone calls inundated the school, while hate-filled messages accumulated on social media.
Detectives asked outside experts to analyze the recording. One said it “contained traces of AI-generated content with human editing after the fact,” court records stated.
A second opinion from Farid, the Berkeley professor, found that “multiple recordings were spliced together,” according to the records.
Farid told The Associated Press that questions remain about exactly how that recording was created, and he has not confirmed that it was fully AI-generated.
But given AI’s growing capabilities, Farid said the Maryland case still serves as a “canary in the coal mine” about the need to better regulate this technology.
WHY IS AUDIO SO CONCERNING?
Many cases of AI-generated disinformation have been audio.
That’s partly because the technology has improved so quickly. Human ears also can’t always identify telltale signs of manipulation, while discrepancies in videos and images are easier to spot.
Some people have cloned the voices of purportedly kidnapped children over the phone to extract ransom money from parents, experts say. In another scheme, a caller impersonated a company’s chief executive who urgently needed funds.
During this year’s New Hampshire primary, AI-generated robocalls impersonated President Joe Biden’s voice and sought to discourage Democrats from voting. Experts warn of a surge in AI-generated disinformation targeting elections this year.
But disturbing trends go beyond audio: experts warn of programs that create fake nude images of clothed people without their consent, including of minors. Singer Taylor Swift was recently targeted.
WHAT CAN BE DONE?
Most providers of AI voice-generating technology say they prohibit harmful usage of their tools. But self-enforcement varies.
Some vendors require a kind of voice signature, or they ask users to recite a unique set of sentences before a voice can be cloned.
Bigger tech companies, such as Facebook parent Meta and ChatGPT-maker OpenAI, only allow a small group of trusted users to experiment with the technology because of the risks of abuse.
Farid said more needs to be done. For instance, all companies should require users to submit phone numbers and credit cards so they can trace back files to those who misuse the technology.
Another idea is requiring recordings and images to carry a digital watermark.
“You modify the audio in ways that are imperceptible to the human auditory system, but in a way that can be identified by a piece of software downstream,” Farid said.
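Farid is describing embedding a hidden signal in the waveform itself. As a toy illustration only (the function names here are hypothetical, and real watermarking systems rely on far more robust, spectral-domain techniques that survive compression and re-recording), the basic idea can be sketched by hiding payload bits in the least significant bit of 16-bit audio samples, a change of at most one part in 32,768 of full scale and far below what a listener could notice:

```python
def embed_watermark(samples, bits):
    """Hide one payload bit in the least significant bit of each sample.

    `samples` is a sequence of signed 16-bit PCM values; `bits` is a
    list of 0s and 1s. Each embedded bit changes a sample by at most 1.
    """
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out


def extract_watermark(samples, n_bits):
    """Read the hidden bits back out: software can see what ears cannot."""
    return [s & 1 for s in samples[:n_bits]]


# Four audio samples carrying a four-bit watermark.
original = [1000, -2000, 3000, 4097]
tagged = embed_watermark(original, [1, 0, 1, 1])
print(extract_watermark(tagged, 4))  # recovers [1, 0, 1, 1]
```

A generator that stamped every output file this way would let downstream detection software flag the audio as synthetic, which is the property Farid is pointing to; production schemes spread the signal across frequencies precisely because a naive least-significant-bit mark is destroyed by the first MP3 re-encode.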
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said the most effective intervention is law enforcement action against criminal use of AI. More consumer education also is needed.
Another focus should be urging responsible conduct among AI companies and social media platforms. But it’s not as simple as banning generative AI.
“It can be complicated to add legal liability because, in so many instances, there might be positive or affirming uses of the technology,” Givens said, citing translation and book-reading programs.
Yet another challenge is finding international agreement on ethics and guidelines, said Christian Mattmann, director of the Information Retrieval & Data Science group at the University of Southern California.
“People use AI differently depending on what country they’re in,” Mattmann said. “And it’s not just the governments, it’s the people. So culture matters.”
___
Associated Press reporters Ali Swenson and Matt O’Brien contributed to this article.