Artificial intelligence has moved from research labs into the core of modern military operations, shaping logistics, intelligence analysis, wargaming, targeting, and even weapons systems with varying degrees of autonomy. Ukrainian forces deploy AI-enabled drones, the Israel Defense Forces use AI to accelerate targeting in Gaza, and the U.S. Department of Defense uses AI to help identify airstrike targets. This rapid integration has triggered urgent debate over how such systems should be governed.

In the United States, the Biden administration’s executive order on AI directs the preparation of a memorandum on military and intelligence applications, with finalization expected soon. The Trump campaign has pledged to rescind that order and pursue large-scale initiatives to reduce regulatory constraints. Internationally, Washington is working with allied nations to expand the first global agreement on military AI use—a non-binding political declaration—before the second Summit on Responsible Artificial Intelligence in the Military Domain.
A critical challenge is that governance discussions often rest on misconceptions. Treating “military AI” as a single, monolithic category ignores its nature as a general-purpose technology with highly diverse applications. A rule effective for one use case may be irrelevant or harmful in another. Moreover, military AI is not confined to armed forces; intelligence and diplomatic services also employ AI to shape operational environments. Focusing only on battlefield use risks leaving major gaps in oversight.
Another misconception is equating “responsible AI” with governance. As Professor Kenneth Payne observed at a Wilton Park dialogue, “AI will change the strategic balance of power, giving rise to new security dilemmas where the most responsible course of action may be to not regulate beyond what the law already requires.” This ambiguity in the term “responsible” complicates efforts to build global consensus, since interpretations vary widely across geopolitical divides.
Legal frameworks, particularly international humanitarian law (IHL), dominate current governance approaches. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by more than 50 countries, reaffirms that AI in armed conflict must comply with IHL’s core principles. Many states, including the U.S., U.K., and Russia, maintain that IHL is sufficient to regulate lethal autonomous weapons systems. While AI does not alter states’ fundamental obligations under existing law, relying solely on IHL is inadequate: IHL governs conduct in armed conflict, so AI applications that operate before or below that threshold, such as decision-support tools and intelligence analysis systems, fall outside its scope yet still shape the pace and scale of conflict.
In operations outside armed conflict, international human rights law offers more protective standards. A March U.N. General Assembly resolution urged states to “refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law,” though it did not specifically address military or intelligence contexts. Unless states apply these protections to defense and intelligence activities, wartime exceptions risk becoming default norms.
Technical realities further complicate enforcement. Unlike drones or other physical platforms, AI tools may leave no detectable signature, making their use difficult to verify and compliance hard to monitor. This ease of concealment increases the risk that violations, such as civilian misidentification driven by automation bias in dynamic targeting, occur without accountability.
Given these limitations, policy measures may offer more practical governance than new legal instruments. Political declarations, codes of conduct, and rules of engagement can guide behavior where law is ambiguous or silent. The Political Declaration is a step forward, but its scope is narrow, applying mainly to Western militaries and excluding intelligence agencies. Expanding its coverage to all defense-related AI applications, clarifying legal applicability across use cases, and sharing best practices are essential next steps.
Alliances like NATO and the Five Eyes intelligence network provide natural platforms for exchanging risk-mitigation strategies, but engagement with China and the Global South is equally important. Building confidence does not require shared values, only a shared interest in preventing destabilizing uses of AI. Capacity building is also critical: all endorsing states need the technical means to implement agreed standards.
As the Responsible AI in the Military Domain summit approaches, states have an opportunity to strengthen governance beyond legal compliance, dispel persistent myths, and extend protections to the full spectrum of AI applications in defense and intelligence.
