Let’s start with the uncomfortable truth. We have lost
control of artificial intelligence. This shouldn’t be too surprising,
considering we likely never had any control over it. The maelstrom at OpenAI
over the abrupt dismissal of its chief executive, Sam Altman, raised
accountability questions inside one of the world’s most powerful AI companies.
Yet even before the boardroom drama, our understanding of how AI is created and
used was limited.
Lawmakers worldwide are struggling to keep up with the pace
of AI innovation and thus cannot provide even basic frameworks for regulation and
oversight. The conflict between Israel and Hamas in Gaza has raised the stakes
even further. AI systems are currently being used to determine who lives and
dies in Gaza. The results, as anyone can see, are terrifying.
In a wide-ranging investigation by the Israeli
publication +972 Magazine, journalist Yuval Abraham spoke with several current
and former officials about the Israeli military’s advanced AI war program
called “the Gospel.” According to the officials, the Gospel produces
AI-generated targeting recommendations through “the rapid and automatic
extraction of intelligence.” Recommendations are matched with identifications
carried out by a human soldier. The system relies on a matrix of data sources with
checkered histories of misidentification, such as facial recognition technology.
The result is the production of “military” targets in Gaza
at an astonishingly high rate. In previous Israeli operations, the military was
slowed by a shortage of targets because human analysts needed time to identify
each one and assess the potential for civilian casualties. The Gospel has sped
up this process with dramatic effect.
Thanks to the Gospel, Israeli fighter jets can’t keep up
with the number of targets these automated systems provide. The sheer gravity
of the death toll over the past six weeks of fighting speaks to the deadly
nature of this new technology of war. According to Gaza officials, more than
17,000 people have been killed, including at least 6,000 children. Citing
several reports, American journalist Nicholas Kristof said that “a woman or
child has been killed on average about every seven minutes around the clock
since the war began in Gaza.”
“Look at the physical landscape of Gaza,” Richard Moyes, a
researcher who heads Article 36, a group that campaigns to reduce harm from
weapons, told the Guardian. “We’re seeing the widespread flattening of an urban
area with heavy explosive weapons, so to claim there’s precision and narrowness
of force being exerted is not borne out by the facts.”
Militaries around the world with similar AI capabilities are
closely watching Israel’s assault on Gaza. The lessons learned in Gaza will be
used to refine other AI platforms for use in future conflicts. The genie is out
of the bottle. The automated war of the future will use computer programs to
decide who lives and who dies.
While Israel continues to pound Gaza with AI-directed
missiles, governments and regulators worldwide are struggling to keep up with
the pace of AI innovation taking place in private companies. Lawmakers and
regulators cannot track the programs already deployed, let alone those still
being created.
As The New York Times notes, “that gap has been compounded
by an AI knowledge deficit in governments, labyrinthine bureaucracies, and
fears that too many rules may inadvertently limit the technology’s benefits.”
The net result is that AI companies can develop their systems with little or no
oversight. The situation is so opaque that we don’t even know what these
companies are working on.
Consider the fiasco over the management of OpenAI, the
company behind the popular AI platform ChatGPT. When CEO Sam Altman was
unexpectedly fired, the internet rumor mill began fixating on unconfirmed
reports that OpenAI had developed a secret and powerful AI that could change the
world in unforeseen ways. Internal disagreement over its use led to a
leadership crisis at the company.
We might never know if this rumor is true, but given the
trajectory of AI and our lack of visibility into what OpenAI is doing,
it seems plausible. The general public and lawmakers can’t get a straight
answer about the potential of a super-powerful AI platform, and that is the
problem.
Israel’s Gospel and the chaos at OpenAI mark a turning point
in AI. It’s time to move beyond the hollow elevator pitches that AI will
deliver a brave new world. AI might help humanity achieve new goals, but it
won’t be a force for good if it is developed in the shadows and used to kill
people on battlefields. Regulators and lawmakers can’t keep up with the pace of
the technology and lack the tools to exercise sound oversight.
While powerful governments around the world watch Israel
test AI algorithms on Palestinians, we can’t harbor false hopes that this
technology will only be used for good. Given the failure of our regulators to
establish guardrails on the technology, we can only hope that the narrow
interests of consumer capitalism will act as a governor on AI’s power to
transform society. It’s a vain hope, but it is likely all we have at this
stage.
Joseph Dana is a
writer based in South Africa and the Middle East. He has reported from
Jerusalem, Ramallah, Cairo, Istanbul, and Abu Dhabi. He was formerly
editor-in-chief of emerge85, a media project based in Abu Dhabi exploring
change in emerging markets. Twitter: @ibnezra