As deepfakes flourish, countries struggle with response
New York Times
Last updated: Jan 24, 2023
(Photo: Twitter)
Deepfake technology — software that
allows people to swap faces, voices, and other characteristics to create
digital forgeries — has been used in recent years to make a synthetic
substitute of Elon Musk that shilled a cryptocurrency scam, to digitally
“undress” more than 100,000 women on Telegram, and to steal millions of dollars
from companies by mimicking their executives’ voices on the phone.
In most of the world, authorities cannot do
much about it. Even as the software grows more sophisticated and accessible,
few laws exist to manage its spread.
China hopes to be the exception. This
month, the country adopted expansive rules requiring that manipulated material
have the subject’s consent and bear digital signatures or watermarks, and that
deepfake service providers offer ways to “refute rumors”.
But China faces the same hurdles that have
stymied other efforts to govern deepfakes: The worst abusers of the technology
tend to be the hardest to catch, operating anonymously, adapting quickly, and
sharing their synthetic creations through borderless online platforms. China’s
move has also highlighted another reason that few countries have adopted rules:
Many people worry that the government could use the rules to curtail free
speech.
But simply by forging ahead with its
mandates, tech experts said, Beijing could influence how other governments deal
with the machine learning and artificial intelligence that power deepfake
technology. With limited precedent in the field, lawmakers around the world are
looking for test cases to mimic or reject.
“The AI scene is an interesting place for
global politics, because countries are competing with one another on who’s
going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs
the Collaborative AI Responsibility Lab at the University of Pittsburgh. “We
know that laws are coming, but we don’t know what they are yet, so there’s a
lot of unpredictability.”
The technology: promise and problems
Deepfakes hold great promise in many
industries. Last year, Dutch police revived a 2003 cold case by creating a
digital avatar of the 13-year-old murder victim and publicizing footage of him
walking through a group of his family and friends in the present day. The
technology is also used for parody and satire, for online shoppers trying on
clothes in virtual fitting rooms, for dynamic museum dioramas, and for actors
hoping to speak multiple languages in international movie releases.
Researchers at the Massachusetts Institute
of Technology Media Lab and UNICEF used similar techniques to study empathy by
transforming images of North American and European cities into the
battle-scarred landscapes caused by the Syrian war.
But problematic applications are also
plentiful. Legal experts worry that deepfakes could be misused to erode trust
in surveillance videos, body cameras, and other evidence. (A doctored recording
submitted in a British child custody case in 2019 appeared to show a parent
making violent threats, according to the parent’s lawyer.) Digital forgeries
could discredit or incite violence against police officers, or send them on wild
goose chases. The Department of Homeland Security has also identified risks
including cyberbullying, blackmail, stock manipulation, and political
instability.
Some experts predict that as much as 90
percent of online content could be synthetically generated within a few years.
The increasing volume of deepfakes could
lead to a situation where “citizens no longer have a shared reality, or could
create societal confusion about which information sources are reliable; a
situation sometimes referred to as ‘information apocalypse’ or ‘reality
apathy,’” the European law enforcement agency Europol wrote in a report last
year.
British officials last year cited threats
such as a website that “virtually strips women naked” and that was visited 38
million times in the first eight months of 2021. But there and in the European
Union, proposals to set guardrails for the technology have yet to become law.
Protective measures
Attempts in the US to create a federal task
force to examine deepfake technology have stalled. Representative Yvette D.
Clarke, D-N.Y., proposed a bill in 2019 and again in 2021 — the Defending Each
and Every Person From False Appearances by Keeping Exploitation Subject to
Accountability Act — that has yet to come to a vote. She said she planned to
reintroduce the bill this year.
Clarke said her bill, which would require
deepfakes to bear watermarks or identifying labels, was “a protective measure”.
By contrast, she described the new Chinese rules as “more of a control
mechanism”.
The rules that do exist in the US are
largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat
in California’s state Assembly who represents parts of Silicon Valley and has
sponsored such legislation, said he was unaware of any efforts to enforce his
laws via lawsuits or fines. But he said that, in deference to one of his laws,
a deepfaking app had removed the ability to mimic President Donald Trump before
the 2020 election.
Only a handful of other states, including
New York, restrict deepfake pornography. While running for reelection in 2019,
Houston’s mayor said a critical ad from a fellow candidate broke a Texas law
that bans certain misleading political deepfakes.
“Half of the value is causing more people
to be a little bit more skeptical about what they’re seeing on social media
platforms and encouraging folks not to take everything at face value,” Berman
said.
Racing a rapidly advancing technology
But laws or bans may struggle to contain a
technology that is designed to continually adapt and improve. Last year,
researchers from the RAND Corp. demonstrated how difficult deepfakes can be to
identify when they showed a set of videos to more than 3,000 test subjects and
asked them to identify the ones that were manipulated (such as a deepfake of
climate activist Greta Thunberg denying the existence of climate change).
The group was wrong more than one-third of
the time. Even a subset of several dozen students studying machine learning at
Carnegie Mellon University were wrong more than 20 percent of the time.
Initiatives from companies such as
Microsoft and Adobe now try to authenticate media and train moderation
technology to recognize the inconsistencies that mark synthetic content. But
they are in a constant struggle to outpace deepfake creators who often discover
new ways to fix defects, remove watermarks, and alter metadata to cover their
tracks.
“There is a technological arms race between
deepfake creators and deepfake detectors,” said Jared Mondschein, a physical
scientist at RAND. “Until we start coming up with ways to better detect
deepfakes, it’ll be really hard for any amount of legislation to have any
teeth.”