While the global debate around using
artificial intelligence in warfare heats up, Israel has brazenly deployed AI systems against the
Palestinians. Bloomberg reported last month that the
Israeli army deployed an
advanced AI model called Fire Factory designed to select targets for
air strikes and handle other military logistics. This wasn’t the first time Israel
had used AI in combat operations.
AI deployment represents a significant shift in warfare and
brings incredible new risks for civilian life. Perhaps most concerning is that
Israel’s use of AI is developing beyond international or state-level
regulations. The future of
AI warfare is taking shape right now, and few have a
say in how it develops.
According to Israeli officials, the AI programs in operation
use large data sets to make decisions about targets, equipment, munition loads,
and schedules. While these items might seem mundane, we must consider
how Israel collects this information and the military’s track record in protecting
civilian populations.
Information Occupation
Israel has administered a total military occupation over
Palestinian populations in the
West Bank and Gaza since 1967. Every aspect of
Palestinian life in these territories is overseen by the Israeli military, down
to the
number of calories Gazans consume. As a result of its complex occupation
infrastructure, Israel has compiled vast amounts of data on Palestinians. This
data has been a vital fuel for the rise of Israel’s vaunted technology sector,
as many of the country’s leading tech executives learned their craft in
military intelligence units that put this data to use.
The military and defense contractors have created a hugely
profitable AI warfare sector using the West Bank and Gaza as weapons testing
laboratories. Across the Palestinian territories, Israel collects and analyses
data from drones, CCTV footage, satellite imagery, electronic signals, online
communications, and other military surveillance platforms. It’s even
rumored that the idea for Waze – the mapping software developed by graduates of
Israel’s military intelligence sector and sold to
Google for $1.1 billion in
2013 – was derived from mapping software designed to track Palestinians in the
West Bank.
It’s abundantly clear that Israel has plenty of data that
could be fed into AI models designed to maintain the occupation. For its part, the
Israeli military argues that its AI models are overseen by soldiers who vet and
approve targets and air raid plans. The military has also implicitly argued
that its programs could surpass human analytic capabilities and minimize casualties,
given the sheer amount of data Israel collects. Analysts are concerned that
these
semi-autonomous AI systems could become autonomous systems quickly with
no oversight. At that point, computer programs will decide Palestinian life and
death.
Terms and conditions
There are other elephants in the room. Israel’s
AI war technology is not subject to international or state-level regulation,
and the Israeli public has little direct knowledge of these systems and little say
over how they should be used. One could imagine the international outcry if Iran or
Syria deployed a similar system.
While the exact nature of
Israel’s AI programs remains confidential, the military has boasted about its use of AI. The military called
its 11-day assault on the Gaza Strip in 2021 the world’s first “AI war.” Given
the profoundly controversial nature of AI warfare and unresolved ethical
concerns about these platforms, it’s shocking but hardly surprising that the
Israeli military is so flippant about its use of these programs. After all,
Israel has seldom followed international law in its conduct of warfare or in its
understanding of defense.
There are other challenges regarding Israel’s deployment of
these weapons. Israel has a terrible track record when it comes to the protection
of Palestinian life. While the country’s public relations officials go to great
lengths to say that the military operates morally and protects civilians, the
fact is that even the most “enlightened” military occupation is antithetical to
the notion of human rights. In the social media age, even Israel’s most ardent
supporters question how the country sometimes behaves towards Palestinians.
Perhaps the most universal concern these programs raise is that
Palestinians haven’t consented to giving their data over to Israel and its AI
platforms. There is a morbid parable here for how we, as a society, haven’t
really consented to our data being used to create many types of AI programs. Of
course, there are terms and conditions that we agree to for services like
Gmail, but we don’t have a viable choice to opt out unless we forgo the
internet altogether.
For Palestinians, the situation is obviously much more
grave. Every aspect of their lives, from when they go to work to how much food
they consume, is funneled to Israeli data centers and used to determine
military operations. Is this extreme future waiting for more societies around
the world? The direction of travel and the development of these systems beyond
any regulation do not bode well.