Gaza: When it comes to deciding on attacks, Israel relies on AI

Use in the Gaza War
The AI gives him 20 seconds. Then the soldier has to decide whether a person dies

In Israel's latest attacks on Gaza, hundreds of civilians are dying once again. One possible reason: the army relies on artificial intelligence. And that has its pitfalls.

The ceasefire in Gaza has failed. The Israeli army is again bombing the area. According to Palestinian reports, more than 400 people are said to have died on the first night since the fighting resumed, including numerous children. And once again the question arises: Why do so many civilians die in Gaza? Is it because Israel simply bombs so much, everyone and everything, indiscriminately, without any intelligent control?

In fact, one reason could lie in the very opposite: in the use of artificial intelligence.

The use of AI by the military is not a new phenomenon, says Frank Sauer of the Bundeswehr University in Munich. “AI has the same benefit for the military as it does in civilian life: it helps with organization. AI is used in all areas. Combat aircraft, for example, can order their spare parts on the basis of evaluated data.”

However, Israel’s military does not use AI only for logistical or administrative purposes. With the help of AI, the army marks targets, which are then cleared for attack. The programs are called “Lavender”, “The Gospel” and “Where is Daddy?”

In Gaza, a points system decides over life and death

“Lavender” is meant to identify possible human targets; “The Gospel” recognizes buildings and infrastructure that may be used by opponents. And “Where is Daddy?” can determine people’s whereabouts. This was revealed by the Oscar-winning journalist Yuval Abraham. The Israeli journalist was able to speak with half a dozen anonymous sources inside the Israeli military.

“The Gaza Strip is under very precise reconnaissance and surveillance. The Israeli military has a great many data points that can be assigned to each individual,” says Sauer. Where was a person? For how long? Whom did they talk to? How often? “From this data, Lavender produces a score that indicates, with a certain probability, whether the person is a Hamas member,” the military expert explains.
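
How exactly such a score is computed has not been made public. Purely as an illustration of the general idea Sauer describes, surveillance data points weighted into a probability-like value with a cut-off above which a person is marked, a minimal sketch could look like the following; the feature names, weights and threshold are invented for this example and are not taken from any reporting on Lavender.

```python
import math

# Hypothetical features and weights, invented for illustration only.
EXAMPLE_WEIGHTS = {
    "visits_to_flagged_locations": 0.8,
    "calls_with_flagged_numbers": 1.2,
    "membership_in_monitored_group": 1.5,
}
BIAS = -4.0              # offset so that "no signals" maps to a low score
MARKING_THRESHOLD = 0.5  # assumed cut-off for marking someone as a target

def membership_score(features: dict) -> float:
    """Squash weighted surveillance-style data points into a score in [0, 1]."""
    z = BIAS + sum(EXAMPLE_WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# A person with a handful of weak, circumstantial signals still gets a
# non-zero score -- which is exactly where false positives come from.
person = {
    "visits_to_flagged_locations": 2,
    "calls_with_flagged_numbers": 1,
    "membership_in_monitored_group": 0,
}
score = membership_score(person)
print(f"score = {score:.2f}, marked = {score >= MARKING_THRESHOLD}")
```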

The result: the AI marked tens of thousands of targets in the densely populated Gaza Strip, especially at the beginning of the war. So many that a meaningful human review was hardly possible. On average, Abraham reports, soldiers had 20 seconds to decide on the next attack. “The problem is not necessarily that you use AI to analyze information and identify targets, but the speed that this creates,” says Laura Bruun of the peace research institute SIPRI. The human decision takes a back seat. All the more so because people are frequently observed to trust the machine more than they trust themselves. This effect is called “automation bias”. According to Abraham, the AI had an error rate of ten percent.
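
What those numbers add up to can be made concrete with a hedged back-of-the-envelope calculation. Only the ten percent error rate and the 20-second review time come from Abraham’s reporting; the target count below is an assumed round number standing in for the article’s “tens of thousands”.

```python
# Back-of-the-envelope figures; the target count is an assumption.
assumed_marked_targets = 30_000     # stand-in for "tens of thousands"
reported_error_rate = 0.10          # ten percent, per Abraham
review_seconds_per_target = 20      # average review time, per Abraham

expected_misidentifications = assumed_marked_targets * reported_error_rate
total_review_hours = assumed_marked_targets * review_seconds_per_target / 3600

print(f"Expected misidentified targets: {expected_misidentifications:,.0f}")
print(f"Total human review time at 20 s each: {total_review_hours:,.0f} hours")
```

Under those assumptions, roughly 3,000 marked persons would be misidentified, and even the cursory 20-second checks would consume well over 150 hours of reviewer time.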

Dead Palestinians in Deir El-Balah: No place in Gaza seems safe

Not compatible with international law

“From a military point of view, something went wrong here,” says Sauer. Targets in Gaza were reviewed only cursorily. Audio recordings, for example, were checked only for whether the voice was that of an adult male. Bruun and Sauer agree: the standards applied for clearing targets were far too low. Abraham reports that the Israeli military set a quota for air strikes: 15 to 20 dead civilians were accepted if that was the price of killing a lower-ranking Hamas member.

Another factor that likely drove up the death toll: Israel killed targeted persons in their own homes. The AI tool “Where is Daddy?” informed the responsible units as soon as a target entered their own four walls. The building was then bombed at night. The army presumably wanted to make sure the target was not somewhere else before the strike. Those responsible may well have been aware of the consequences of this strategy. The name of the program at least hints at who was mostly hit: fathers, and their families.

Such a procedure can hardly be reconciled with the law of war, says political scientist Frank Sauer. “You can’t just drop a 500-pound bomb on a house simply because a Hamas member may, with a certain probability, be there, while completely disregarding who else may be inside.” Israel justifies its use of AI in precisely the opposite way: the technology, the army argues, helps it comply with international humanitarian law, because the more precise the target marking, the fewer the civilian casualties.

In principle, AI can ensure that fewer civilians are harmed and that only the “bad guys” are attacked in a targeted manner, Frank Sauer concedes. “In the light of history, however, I would put question marks over whether we will succeed in really exploiting this potential without stepping into the traps that the use of AI sets for us at the same time.”

In the end, only humans can decide whether a marked target is actually a legitimate target, says Laura Bruun. “It is about making distinctions: Who is a combatant and who is not? But that is not always so clear. What if a fighter surrenders? Or is wounded? It is also about proportionality. It is very difficult, if not impossible, for a machine or an AI to make such judgments.” Frank Sauer therefore puts it this way: “In the Israeli military’s use of AI, a great deal went wrong in the interaction between human and machine.”

The software says: troublemaker. Or not

Will the Israeli military fall back on AI to the same extent in its renewed attacks on the Gaza Strip? Sauer considers it “plausible” that AI will continue to be used. Journalist Yuval Abraham even has indications of what could come next: Israel is working on a ChatGPT-like program to monitor the occupied territories. The AI could be asked questions about specific people, and the computer would then answer whether this person is a possible “troublemaker”.

It sounds like science fiction. For now.

Source: Stern
