Seeing has not been believing for a very long time.
Photos have been faked and manipulated for nearly as long as photography has
existed.
Now not even reality is required for photographs to look authentic; artificial intelligence responding to a prompt is enough. Even experts sometimes struggle to tell whether an image is real. Can you?
The rapid advent of artificial intelligence has set off
alarms that the technology used to trick people is advancing far faster than
the technology that can identify the tricks. Tech companies, researchers, photo
agencies, and news organizations are scrambling to catch up, trying to
establish standards for content provenance and ownership.
The advancements are already fueling disinformation and
being used to stoke political divisions. Authoritarian governments have created
seemingly realistic news broadcasters to advance their political goals. Last
month, some people fell for images showing Pope Francis donning a puffy Balenciaga
jacket and an earthquake devastating the Pacific Northwest, even though neither
of those events had occurred. The images had been created using Midjourney, a
popular image generator.
On Tuesday, as former President Donald Trump turned himself
in at the Manhattan district attorney’s office in New York to face criminal
charges, images generated by artificial intelligence appeared on Reddit showing
actor Bill Murray as president in the White House. Another image, showing Trump marching in front of a large crowd with American flags in the background, was quickly reshared on Twitter without the disclosure that had accompanied the original post, which noted that it was not actually a photograph.
How the tools are changing reality
Experts fear the technology could hasten an erosion of trust
in media, in government and in society. If any image can be manufactured — and
manipulated — how can we believe anything we see?
A handout image created by Jordan Rhone using the AI image generator Midjourney to highlight the resilience of conspiracy theories, such as claims that the moon landings were staged.
“The tools are going to get better, they’re going to get
cheaper, and there will come a day when nothing you see on the internet can be
believed,” said Wasim Khaled, CEO of Blackbird.AI, a company that helps clients
fight disinformation.
Artificial intelligence allows virtually anyone to create
complex artworks, such as those now on exhibit at the Gagosian art gallery in
New York, or lifelike images that blur the line between what is real and what
is fiction. Plug in a text description, and the technology can produce a
related image — no special skills required.
Often, there are hints that viral images were created by a computer rather than captured in real life: the luxuriously coated pope, for example, had glasses that seemed to melt into his cheek and blurry fingers. AI art tools also often produce nonsensical text.
Rapid advancements in the technology, however, are
eliminating many of those flaws. Midjourney’s latest version, released last
month, is able to depict realistic hands — a feat that had, conspicuously,
eluded early imaging tools.
Trump in an orange jumpsuit
Days before Trump turned himself in to face criminal charges
in New York City, images made of his “arrest” coursed around social media. They
were created by Eliot Higgins, a British journalist and founder of Bellingcat,
an open source investigative organization. He used Midjourney to imagine the
former president’s arrest, trial, imprisonment in an orange jumpsuit and escape
through a sewer. He posted the images on Twitter, clearly marking them as
creations. They have since been widely shared.
The images were not meant to fool anyone. Instead, Higgins
wanted to draw attention to the tool’s power — even in its infancy.
Midjourney’s images, he said, were able to pass muster in
facial-recognition programs that Bellingcat uses to verify identities,
typically of Russians who have committed crimes or other abuses. It is not hard
to imagine governments or other nefarious actors manufacturing images to harass
or discredit their enemies.
The limits of generative images still make them relatively easy for news organizations and others attuned to the risk to detect, at least for now.
Still, stock-photo companies, government regulators, and a
music industry trade group have moved to protect their content from
unauthorized use, but technology’s powerful ability to mimic and adapt is
complicating those efforts.
Stealing stock
Some AI image generators have even reproduced images — a
queasy “Twin Peaks” homage; Will Smith eating fistfuls of pasta — with
distorted versions of the watermarks used by companies such as Getty Images or
Shutterstock.
A handout image generated by AI and provided by Andrés Guadamuz, meant to look like a standard snapshot of four people walking down a street.
In February, Getty accused Stability AI of illegally copying
more than 12 million Getty photos, along with captions and metadata, to train
the software behind its Stable Diffusion tool. In its lawsuit, Getty argued
that Stable Diffusion diluted the value of the Getty watermark by incorporating
it into images that ranged “from the bizarre to the grotesque”.
Getty said the “brazen theft and freeriding” was conducted
“on a staggering scale”. Stability AI did not respond to a request for comment.
Getty’s lawsuit reflects concerns raised by many individual
artists: that AI companies are becoming a competitive threat by copying content
they do not have permission to use.
Trademark violations have also become a concern:
Artificially generated images have replicated NBC’s peacock logo, though with
unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra
O’s looped into the name.
New competition for photographers
The threat to photographers is fast outpacing the
development of legal protections, said Mickey Osterreicher, general counsel for
the National Press Photographers Association. Newsrooms will increasingly
struggle to authenticate content. Social media users are ignoring labels that
clearly identify images as artificially generated, choosing to believe they are
real photographs, he said.
Generative AI could also make fake videos easier to produce.
A video recently appeared online that seemed to show Nina Schick, an author and
generative AI expert, explaining how the technology was creating “a world where
shadows are mistaken for the real thing”. Schick’s face then glitched as the
camera pulled back, showing a body double in her place.
The video said the deepfake had been created, with Schick’s
consent, by the Dutch company Revel.ai and Truepic, a California company that
is exploring broader digital content verification.
The companies described their video, which features a stamp
identifying it as computer-generated, as the “first digitally transparent
deepfake”. The data is cryptographically sealed into the file; tampering with
the image breaks the digital signature and prevents the credentials from
appearing when using trusted software.
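In broad terms, such a seal works like any digital signature: the publisher signs the media bytes with a private key, and any later change to those bytes causes verification to fail. The sketch below illustrates that general idea in Python with an assumed Ed25519 key pair from the cryptography package; it is only an illustration, not the actual format used by Revel.ai, Truepic, or the C2PA standard.

```python
# Illustrative sketch only: sign the media bytes, then verify them later.
# Any modification to the bytes breaks the signature, which is the property
# a "digitally transparent deepfake" relies on. This is not the companies'
# actual implementation or the C2PA file format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair and signs the media file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image or video data..."  # placeholder content
signature = private_key.sign(media_bytes)

def credentials_valid(data: bytes, sig: bytes) -> bool:
    """Return True if the data matches the signature, False if it was altered."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(credentials_valid(media_bytes, signature))            # True: file intact
print(credentials_valid(media_bytes + b"edit", signature))  # False: tampering detected
```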
The companies hope the badge, which will come with a fee for
commercial clients, will be adopted by other content creators to help create a
standard of trust involving AI images.
“The scale of this problem is going to accelerate so rapidly
that it’s going to drive consumer education very quickly,” said Jeff McGregor,
CEO of Truepic.
Truepic is part of the Coalition for Content Provenance and
Authenticity, a project set up through an alliance with companies such as
Adobe, Intel, and Microsoft to better trace the origins of digital media.
Chipmaker Nvidia said last month that it was working with Getty to help train
“responsible” AI models using Getty’s licensed content, with royalties paid to
artists.
On the same day, Adobe unveiled its own image-generating
product, Firefly, which will be trained using only images that were licensed or
from its own stock or no longer under copyright. Dana Rao, the company’s chief
trust officer, said on its website that the tool would automatically add
content credentials — “like a nutrition label for imaging” — that identified
how an image had been made. Adobe said it also planned to compensate
contributors.