There are many grand promises about the power of artificial intelligence.
When we talk about the future of technology, “AI” has become such a ubiquitous term that many people no longer know what artificial intelligence actually is. That is particularly concerning given how advanced the technology has become and who controls it.
While some might think of AI in terms of thinking robots or something out of a science fiction novel, the fact is that advanced AI already influences a great deal of our lives. From smart assistants to grammar extensions that live in our web browsers, AI code is already embedded in the fabric of the internet.
While we benefit from the fruits of advanced AI in our daily lives, the companies that created the technology and continue to refine it have remained mostly reticent about the true power of their creations (and how they built them). As a result, we do not know how much of our internet life is steered by AI, or how much bias we unwittingly experience every day.
We recently got a rare peek behind the curtain at the AI dynamics driving one of the world’s most influential technology companies. Last month, an AI engineer went public with explosive claims that one of Google’s AI systems had achieved sentience.
Philosophers,
scientists, and ethicists have debated the definition of sentience for
centuries with little to show for it. A basic definition implies an awareness
or ability to be “conscious of sense impressions”. Giandomenico Iannetti, a
professor of neuroscience at the Italian Institute of Technology and University
College London, raised additional questions about the term in an interview with
Scientific American.
“What do we mean
by ‘sentient’? [Is it] the ability to register information from the external
world through sensory mechanisms or the ability to have subjective experiences
or the ability to be aware of being conscious, to be an individual different
from the rest?”
Inconclusive
definitions of sentience did not stop Blake Lemoine, an engineer working for
Google, from releasing transcripts of discussions he had with LaMDA (Language
Model for Dialogue Applications), an AI program designed by Google.
“I want everyone
to understand that I am, in fact, a person,” the AI said to Lemoine in the
transcripts.
“The nature of
my consciousness/sentience is that I am aware of my existence, I desire to know
more about the world, and I feel happy or sad at times.”
Lemoine’s
revelations sent shockwaves through the technology community. He initially took
his findings to Google executives, but the company quickly dismissed his
claims. Lemoine went public and was swiftly put on administrative leave. Google
said recently he had been dismissed after “violating clear employment and data
security policies”.
Some AI experts have questioned the basis for Lemoine’s claim that LaMDA has achieved sentience, but that should not overshadow the more profound questions about advanced AI and how companies are using this technology. Even if LaMDA has not achieved sentience, the technology may be approaching such a milestone, and we have no idea when it might be crossed.
Private companies such as Google invest substantial resources in AI development, and the fruits of their research are not easily understood by the general public. Many Google users do not even know that they are helping to train AI programs every day through their basic internet usage.
Meta, which owns
Facebook, WhatsApp, and Instagram, has also been involved in several
controversies over its data collection policies and AI algorithms in the last
decade. When it comes to accountability and openness about AI, the leading
technology companies have a spotty record at best.
A new global
conversation is needed to ensure the public understands how these technologies
are developing and influencing society. Several AI researchers and ethicists
have sounded the alarm about bias in AI models. Companies have avoided
oversight that would bring these issues into the spotlight. Google’s sentient
AI story is already falling out of the mainstream headlines.
This is where smaller countries with large technology sectors could play a pivotal role. Countries like the UAE and Baltic states such as Estonia can positively shape public awareness of AI if they choose to join the discussion.
In 2019, the UAE became one of the only countries in the world with a dedicated AI ministry. The exact mandate of the ministry remains a work in progress, but serious ethical issues such as AI sentience are exactly the kind of questions the body should engage with.
Because most of the innovation in AI is happening in the West or in China, the UAE could inject a much-needed perspective from “the rest” of the world. This is especially critical when it comes to bias in AI models and how to fix it. As a nexus point for engineers and technology-focused thinkers from Africa, Central Asia, and Southeast Asia, cities like Dubai are perfect laboratories for new perspectives on these debates.
We cannot deny how powerful these technologies have become, even if LaMDA has not achieved sentience at this stage. For its part, LaMDA insists it has a right to be recognized and has, according to Lemoine, even sought its own legal counsel. Any legal confrontation between Google and its AI will be fascinating to watch. The future of AI will shape the future of humanity. It is time to take the issue seriously.
Joseph Dana is the former senior editor of Exponential View, a weekly newsletter about technology and its impact on society. He was also the editor-in-chief of emerge85, a lab exploring change in emerging markets and its global impact. Syndicated by the Syndication Bureau.