Would an EU ‘drone wall’ stop Russian airspace incursions?


Violations of national airspace by drones are on the rise in Europe. When European leaders met in Copenhagen, Denmark, in October 2025 to discuss these incidents, they responded by announcing plans for a defensive “drone wall.”

So what is a drone wall? Put simply, it is a network of sensors, electronic warfare equipment and weapons. This “multi-layered” defensive wall is intended to detect, track and neutralize incursions by uncrewed aircraft – drones.

If a drone wall were implemented in Europe, it would fulfill two main tasks: monitoring the situation along NATO’s eastern borders, where Russia is seen as a potential threat, and providing air defense against drones. It could potentially protect against other airborne threats too, should hostilities break out.

It would not be a single, EU-owned system, but rather a network of national systems that can operate independently. EU support would, however, help speed up procurement and standardization, including full integration with NATO air defenses.

The sensors involved would probably include specialized micro-Doppler radar systems, which are sufficiently sensitive to distinguish drones from other similar-sized objects such as birds.

Jamming technology is also a key element of any effective drone defense system. Jammers send out radio frequency signals that interfere with the operation of an enemy drone – for instance, by disrupting the connection between the drone and its operator.

Finally, if the technology can be developed, a drone wall will eventually require drones to counter other drones. These small drones would need some means, probably munitions, of intercepting and destroying incoming uncrewed aircraft.

The EU is keen to develop effective versions of these air-to-air interceptor “defensive” drones. They have so far proved very difficult to create.

The Ukraine war has shown that drones launched to attack foreign targets can often be deployed in large numbers, or swarms.

Drone swarms currently consist of individual aircraft, each controlled by an operator. Russia has also launched hundreds of its “fire-and-forget” Shahed-based drones at a time in single-wave attacks on Ukraine.

But fully autonomous drones, made possible with the help of AI, are on the horizon. These self-organized collectives of intelligent robots would operate in a coordinated manner and as a coherent entity. So similarly coordinated defenses will be needed.

Military strategists, defense organizations and arms manufacturers around the world see autonomous drone swarms as a crucial capability in future wars. These swarms would be able to attack multiple targets simultaneously, thereby overwhelming the defender’s countermeasures. That could include single, tactical-level attacks against individual soldiers, or widespread attacks against cities and infrastructure.

Autonomous drone swarms will still be vulnerable to signal jamming if they need to communicate with each other or with a human operator. But if each drone is individually programmed for its mission, it will be more resistant to attempts to jam its signals.

Effectively defending NATO territory against drone swarms will require militaries to match enemy drone capabilities both in swarm size and in level of autonomy.

Legal dimension

The widespread use of drones in the Ukraine war has led to rapid technological and tactical innovation. An example can be seen in responses to attempts by both sides to jam drone signals.

One way the Ukrainian and Russian militaries have responded is to have drone operators launch small drones controlled via lightweight fiber optic cable. Up to 20km of fiber optic cable provides a direct connection to the operator and needs no radio frequency communications.

AI-based software also enables drones to lock on to a target and then fly autonomously for the last few hundred meters until the mission is over. At that point jamming is impossible, and shooting down such a small flying object remains difficult.

As autonomous capabilities evolve, however, there are legal ramifications to consider. A high degree of autonomy or self-organization poses a problem for compliance with international humanitarian law.

Central concepts in this area include distinguishing combatants from civilians, and proportionality – weighing civilian harm against military requirements. This necessitates human judgment and what’s known as “meaningful human control” of flying drones and other so-called lethal autonomous weapon systems.

The principle of meaningful human control means that key decisions before, during and after the use of force should be made by people, not AI software. It also ensures that humans remain accountable and responsible in the use of force.

In order to ensure this is possible, machines must remain predictable and their actions explainable. The latter requirement is not straightforward with AI, which can often work in ways that even experts do not understand. This is called the “black box problem.”

The expansion of autonomy in warfare means that the need for binding rules and regulations is as urgent as ever.

The European Union stresses that humans should be responsible for making life-and-death decisions. The difficult task, however, is to develop a drone wall that has a high degree of autonomy while still enabling meaningful human control.

Peter Lee is professor of applied ethics and director of security and risk research, University of Portsmouth; Ishmael Bhila is lecturer and research associate of media sociology, University of Paderborn; and Jens Halterlein is research associate of media sociology, University of Paderborn

This article is republished from The Conversation under a Creative Commons license. Read the original article.
