Information about the enemy has long been one of the most important resources in war. The US armed forces now want to take this to the extreme: with a new system, they want to know the enemy’s next movements days in advance.
When will the enemy strike, where, and by what means? Being able to answer such questions can quickly decide who holds the strategic upper hand in a military conflict. The US armed forces now want to predict an opponent’s next move several days in advance, relying on artificial intelligence (AI) as a military crystal ball.
The Pentagon reported that the test series, called the Global Information Dominance Experiments, or GIDE for short, completed its third run in mid-July. The goal: with the help of cloud data centers, an AI running on them, and a worldwide network of sensors, the US military wants to gain “information dominance” and thus achieve “decision superiority”. In plain language: it wants to make a glimpse into the future possible.
Science fiction come true
That is less far-fetched than it sounds at first. Rather than relying on science-fiction technologies, the military only has to bring together existing technologies at the highest level, explained General Glen VanHerck, head of Northcom, the command responsible for the North American region, in a press conference. The general has been experimenting with the approach for a year now. “What we see is the ability to break out of reacting to events and to be proactive. And we are not talking about minutes or hours. We are talking about days,” he told perplexed journalists.
According to Northcom, the information processing worked very well in the test, and similar experiments by the Air Force and the Army were comparably successful, reports “The Drive”. The current experiments are less about proving feasibility than about getting the existing systems to work together optimally. The Space Force, established under former President Donald Trump and widely ridiculed at the time, also plays a decisive role. “We rely on their sensors, their threat detection and attack analysis capabilities,” said VanHerck.
Parking spaces as a danger signal
In one example, he showed what exactly is behind the technology. “We take data from sensors around the world, not just military but also civilian data. And we use it for reconnaissance in certain areas,” he explained. “The machines can then keep an eye on how many cars are parked, how many planes are waiting on a ramp, and whether a submarine is preparing to put to sea or even a rocket is about to launch. While it used to take days, or at least hours, before someone had evaluated the data, it can now be done in seconds.”
Ultimately, this means situations can be recognized as potentially dangerous much earlier, even from seemingly harmless snapshots. “We evaluate the data with artificial intelligence. For example, how many cars are parked in a parking lot at a location associated with an adversary,” explains VanHerck. “If the artificial intelligence detects a change, it can trigger an alarm, which is then cross-checked against other sensors, such as the satellite system.”
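At its core, what VanHerck describes resembles simple change detection on time-series sensor counts. The following Python sketch illustrates the idea under that assumption; the function names, thresholds, and data are hypothetical and do not reflect the actual GIDE software.

    from statistics import mean, stdev

    def detect_change(history, latest, z_threshold=3.0):
        # Flag the latest count if it deviates sharply from the baseline.
        # history: past car counts for one monitored parking lot
        # latest:  the newest count extracted from imagery
        baseline = mean(history)
        spread = stdev(history)
        if spread == 0:  # flat history: treat any deviation as notable
            return latest != baseline
        return abs(latest - baseline) / spread > z_threshold

    # Hypothetical usage: daily counts for a lot near a site of interest.
    past_counts = [41, 38, 40, 42, 39, 40, 41]
    today = 95  # a sudden surge in parked vehicles

    if detect_change(past_counts, today):
        # In the system VanHerck describes, such an alert would be
        # cross-checked against other sensors before reaching a human.
        print("Alert: anomalous activity, request satellite confirmation")

A real system would of course use learned models over many sensor modalities, but the decision logic, detect a deviation and then escalate for confirmation, follows the same pattern.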

In this way, changes in an opponent’s behavior can be recognized more quickly. “This gives us days of advance warning for making strategic decisions. That gives us room for the necessary troop movements and the development of defensive measures, which can then be offered to the Secretary of Defense or the president as options,” said the general.
No decision by AI
This last point in particular seems very important to the US armed forces: the system is meant to support decision-makers, not replace them, VanHerck emphasized several times. It is important to make this clear to the public and to Congress: “The machines do not make decisions. We do not rely on computers to weigh for us how we can deter or defeat attackers.”
These concerns are not unfounded. Over the past decade, the US has relied more and more on drone strikes rather than manned attacks. Combined with an AI that decides on the attack, the questions of moral responsibility for failed strikes or civilian casualties would only be exacerbated.
In addition, the development of autonomous combat systems continues to advance. At the beginning of the year, Russia, for example, tested its Marker platform, an autonomously acting combat robot, in a snowy region, where the robot covered 30 kilometers on its own. The USA is now also considering the use of such systems. The argument: if you don’t build them yourself, you leave the field to competitors like China, according to a report delivered to President Biden by the National Security Commission on Artificial Intelligence.
