In October 2016, the United States Strategic Capabilities Office conducted a striking demonstration over China Lake, California, launching 103 Perdix drones from three F/A-18 Super Hornets. The drones, equipped with a distributed communication architecture, maneuvered into complex formations, traversed simulated battlefield space, and reconfigured mid-flight. Then–Secretary of Defense Ash Carter described the effort as “cutting-edge innovation” intended to keep the United States ahead of its adversaries. Yet the most revealing detail was that the swarm’s design originated not from a defense contractor, but from engineering students at the Massachusetts Institute of Technology, using an “all-commercial-components design.”

The accessibility of swarm technology to students underscores its inevitable global spread. As militaries worldwide accelerate deployment of unmanned systems, even seasoned analysts struggle to track developments, especially given the research occurring beyond public view. Nations are already experimenting with formations they call “swarms,” and the scalability of such systems raises stark possibilities. In theory, tens of thousands of armed drones could be coordinated into a single strike force, inflicting casualties on a scale comparable to the nuclear attacks on Nagasaki or Hiroshima.
The core challenge in building a swarm lies not in hardware acquisition—drones can be bought off-the-shelf or improvised from basic materials—but in programming. Effective swarms require robust communication protocols to share data, resolve conflicts, and allocate tasks. Task allocation algorithms assign roles to individual drones, enabling coordinated action. Once developed, these algorithms can be disseminated and embedded into any compatible platform.
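To make the programming challenge concrete, here is a minimal, purely illustrative sketch of the kind of task-allocation routine a swarm's coordination layer might run. The function name, the greedy nearest-drone strategy, and the toy coordinates are all hypothetical choices for illustration, not a description of any fielded system.

```python
# Illustrative sketch only: a greedy task-allocation routine of the kind a
# swarm's coordination layer might use. All names and the greedy strategy
# are hypothetical simplifications.
import math

def assign_tasks(drones, targets):
    """Greedily assign each target to the nearest unassigned drone.

    drones, targets: lists of (x, y) positions.
    Returns a dict mapping drone index -> target index.
    """
    assignments = {}
    free = set(range(len(drones)))
    for t_idx, target in enumerate(targets):
        if not free:
            break  # more targets than drones; the rest go unassigned
        # pick the closest remaining drone for this target
        best = min(free, key=lambda d: math.dist(drones[d], target))
        assignments[best] = t_idx
        free.remove(best)
    return assignments

# Two drones, two targets: each target claims its nearest free drone.
print(assign_tasks([(0, 0), (10, 0)], [(9, 1), (1, 1)]))
```

Real allocation algorithms must also handle contested communication, unit failures, and conflicting claims, which is precisely why the programming, rather than the hardware, is the hard part.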
Battlefield conditions complicate swarm deployment. Civilian presence, moving vehicles, and environmental hazards demand rigorous design, testing, and verification. Advanced capabilities such as heterogeneity—integrating drones of different sizes or operating domains—and flexibility—adding or removing units on the fly—remain relatively new. Yet the basic act of coordinating drones to deliver munitions is already well understood.
Armed, fully autonomous swarms combine two defining traits of weapons of mass destruction: the capacity for mass harm and the inability to reliably distinguish civilians from combatants. India’s Army Day Parade recently showcased 75 drones, with plans to scale beyond 1,000. The U.S. Naval Postgraduate School has explored concepts involving up to one million drones operating across air, surface, and subsurface domains. To reach casualty levels akin to Nagasaki, estimates suggest around 39,000 armed drones would suffice, potentially fewer with high-yield payloads. China has already demonstrated control of 3,051 drones in a single synchronized flight.
The ethical and technical challenges are formidable. When asked whether autonomous systems could discriminate between civilian and military targets, Noel Sharkey, an AI expert at the University of Sheffield, said that “in certain narrow contexts, such a weapon might be able to make that distinction within 50 years.” Georgia Tech’s Ronald Arkin has argued that lethal autonomous weapons may eventually surpass human operators in minimizing civilian harm, but current AI systems cannot yet navigate battlefield complexity.
Even with precise targeting algorithms, swarm deployment magnifies error risk. A 0.1 percent misidentification rate, multiplied across thousands of drones, becomes significant. Paul Scharre, a military AI specialist, emphasized that “the frequency of autonomous weapons’ deployment and use matters, too,” noting that repeated use increases opportunities for mistakes. Swarm communication can propagate a single error across the entire formation, and emergent behavior—collective actions arising from individual unit interactions—can amplify inaccuracies.
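The multiplication of risk is easy to verify with back-of-the-envelope arithmetic. The calculation below assumes, purely for illustration, that each drone makes an independent identification decision with a 0.1 percent error rate; real errors are likely correlated, which swarm communication can make worse, not better.

```python
# Back-of-the-envelope check of how a small per-drone error rate compounds.
# Assumes (for illustration) independent identification decisions per drone.
p = 0.001  # assumed 0.1% chance a single drone misidentifies its target

for n in (100, 1_000, 10_000):
    # probability that at least one drone in a swarm of n errs
    p_any = 1 - (1 - p) ** n
    print(f"n={n:>6}: P(at least one error) = {p_any:.1%}")
```

Under these assumptions, a 1,000-drone swarm has roughly a 63 percent chance of at least one misidentification per engagement, and a 10,000-drone swarm is all but certain to err.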
Recent conflicts illustrate the potency of unmanned systems. In the 2020 Armenia–Azerbaijan war, Azeri drones destroyed 144 tanks, 35 infantry fighting vehicles, 19 armored personnel carriers, and 310 trucks, contributing to Armenia’s swift defeat. Drone swarms could extend such effects to mass casualty events, serve as deterrents for non-nuclear states, or be adapted for targeted assassinations. The 2018 attack on Venezuelan President Nicolás Maduro, involving two drones, hinted at the potential; a larger swarm could have altered the outcome.
Beyond conventional payloads, swarms could deliver chemical or biological agents, integrating environmental sensors and mixed-arms tactics to bypass established norms against such weapons. Even reduced risks to civilians leave escalation dangers—accidental strikes on uninvolved military forces could ignite broader conflicts.
International frameworks lag behind technological advances. Incorporating swarming-capable drones into the UN Register of Conventional Arms, with dedicated subcategories, could improve transparency. Renewed global discussions on lethal autonomous weapons are essential to curbing proliferation before swarms are deployed in warfare or terrorism.
