For
years, activists and academics have been raising concerns that facial analysis
software that claims to be able to identify a person’s age, gender, and
emotional state can be biased, unreliable, or invasive — and should not be sold.
Acknowledging some
of those criticisms,
Microsoft said June 21 that it planned to remove those
features from its artificial intelligence service for detecting, analyzing, and
recognizing faces. They will stop being available to new users this week and
will be phased out for existing users within the year.
The changes are
part of a push by Microsoft for tighter controls of its artificial intelligence
products. After a two-year review, a team at Microsoft has developed a
“Responsible AI Standard,” a 27-page document that sets out requirements for AI
systems to ensure they will not have a harmful impact on society.
The requirements
include ensuring that systems provide “valid solutions for the problems they
are designed to solve” and “a similar quality of service for identified
demographic groups, including marginalized groups.”
[Photo: The visitor center at Microsoft headquarters in Redmond, Washington, June 20, 2022.]
Before they are
released, technologies that would be used to make important decisions about a
person’s access to employment, education, health care, financial services, or a
life opportunity are subject to a review by a team led by Natasha Crampton,
Microsoft’s chief responsible AI officer.
There were
heightened concerns at Microsoft around the emotion recognition tool, which
labeled someone’s expression as anger, contempt, disgust, fear, happiness,
neutral, sadness, or surprise.
“There’s a huge
amount of cultural and geographic and individual variation in the way in which
we express ourselves,” Crampton said. That led to reliability concerns, along
with the bigger questions of whether “facial expression is a reliable indicator
of your internal emotional state,” she said.
The age and gender
analysis tools being eliminated — along with other tools to detect facial
attributes such as hair and smile — could help interpret visual images
for blind or low-vision people, for example, but the company decided it was
problematic to make the profiling tools generally available to the public,
Crampton said.
In particular, she
added, the system’s so-called gender classifier was binary, “and that’s not
consistent with our values.”
Microsoft will
also put new controls on its face recognition feature, which can be used to
perform identity checks or search for a particular person. Uber, for example,
uses the software in its app to verify that a driver’s face matches the ID on
file for that driver’s account.
Software developers who want to use Microsoft’s
facial recognition tool will need to apply for access and explain how they plan
to deploy it.
Users will also be
required to apply and explain how they will use other potentially abusive AI
systems, such as Custom Neural Voice. The service can generate a human voice
print, based on a sample of someone’s speech, so that authors, for example, can
create synthetic versions of their voice to read their audiobooks in languages
they do not speak.
Because of the
possible misuse of the tool — to create the impression that people have said
things they have not — speakers must go through a series of steps to confirm that
the use of their voice is authorized, and the recordings include watermarks
detectable by Microsoft.
“We’re taking concrete steps to live up to our
AI principles,” said Crampton, who has worked as a lawyer at Microsoft for 11
years and joined the ethical AI group in 2018. “It’s going to be a huge
journey.”
Microsoft, like
other technology companies, has had stumbles with its artificially intelligent
products. In 2016, it released a chatbot on Twitter, called Tay, that was
designed to learn “conversational understanding” from the users it interacted
with. The bot quickly began spouting racist and offensive tweets, and Microsoft
had to take it down.
In 2020,
researchers discovered that speech-to-text tools developed by Microsoft, Apple,
Google, IBM, and Amazon worked less well for
Black people. Microsoft’s system
was the best of the bunch but misidentified 15 percent of words for white
people, compared with 27 percent for Black people.
The company had
collected diverse speech data to train its AI system but had not understood
just how diverse language could be. So it hired a sociolinguistics expert from
the
University of Washington to explain the language varieties that Microsoft
needed to know about. That work went beyond demographics and regional variety into how
people speak in formal and informal settings.
“Thinking about
race as a determining factor of how someone speaks is actually a bit
misleading,” Crampton said. “What we’ve learned in consultation with the expert
is that actually a huge range of factors affect linguistic variety.”
Crampton said the
journey to fix that speech-to-text disparity had helped inform the guidance set
out in the company’s new standards.
“This is a critical
norm-setting period for AI,” she said, pointing to Europe’s proposed regulations
setting rules and limits on the use of artificial intelligence. “We hope to be
able to use our standard to try and contribute to the bright, necessary
discussion that needs to be had about the standards that technology companies
should be held to.”
A vibrant debate
about the potential harms of
AI has been underway for years in the technology
community, fueled by mistakes and errors that have real consequences for
people’s lives, such as algorithms that determine whether or not people get
welfare benefits. Dutch tax authorities mistakenly took child care benefits
away from needy families when a flawed algorithm penalized people with dual
nationality.
Automated software
for recognizing and analyzing faces has been particularly controversial. Last
year, Facebook shut down its decade-old system for identifying people in
photos. The company’s vice president of artificial intelligence cited the “many
concerns about the place of facial recognition technology in society.”
Several Black men
have been wrongfully arrested after flawed facial recognition matches. And in
2020, at the same time as the Black Lives Matter protests after the police
killing of
George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums
on the use of their facial recognition products by the police in the US, saying
clearer laws on the technology's use were needed.
Since then,
Washington and Massachusetts have passed legislation requiring, among other
things, judicial oversight over police use of facial recognition tools.
Crampton said
Microsoft had considered whether to start making its software available to the
police in states with laws on the books but had decided, for now, not to do so.
She said that could change as the legal landscape changed.
Arvind Narayanan, a
Princeton computer science professor and prominent AI expert, said companies
might be stepping back from technologies that analyze the face because they
were “more visceral, as opposed to various other kinds of AI that might be
dubious but that we don’t necessarily feel in our bones.”
Companies also may realize that, at least for the moment,
some of these systems are not that commercially valuable, he said. Microsoft
could not say how many users it had for the facial analysis features it is
getting rid of. Narayanan predicted that companies would be less likely to
abandon other invasive technologies, such as targeted advertising, which
profiles people to choose the best ads to show them, because they were a “cash
cow.”