
Written by: Sabina Kulueva, Security & Defense Working Group
Edited by: Anca Grigoriescu
This policy brief addresses the exclusion of military AI from the recent EU AI Act. It underscores the importance of military AI for the EU’s collective security and defence and examines how it could contribute to the escalation of existing conflicts or the emergence of new tensions. Furthermore, it explains the implications of using AI to aid decision-making in foreign policy. The analysis concludes with a set of recommendations aimed at addressing the identified gaps in the current legislation, thereby ensuring that future regulatory frameworks remain sufficiently adaptable to respond effectively to the risks that military AI poses to high-stakes decision-making amid rapid technological advancement and increasing geopolitical instability.
Introduction
We live in the time of the Fourth Industrial Revolution, when, as Schwab (2015) noted, the combination of technologies blurs “the lines between the physical, digital, and biological spheres” and transformations take place at an unprecedented pace. Each day, news outlets report on rapid advancements across various branches of disruptive technologies, and the pace of development of artificial intelligence (AI) is particularly astonishing. Importantly, this technology is dual-use in nature: while it is inherently neutral, its positive or negative impact depends entirely on how humans use it. Healthcare offers a useful illustration. The integration of AI has been shown to facilitate the detection of breast cancer, enhance patient care through the review and summarisation of extensive patient histories, reduce the time burden on medical professionals by automating document management, personalise treatment plans, and accelerate drug discovery. At the same time, the healthcare industry has access to vast amounts of sensitive patient data. In the absence of adequate security measures, this data could be vulnerable to cyberattacks, and a similar risk arises when AI systems interact with different platforms and consequently share data between them. Furthermore, if the medical field does not utilise ethically trained models, pre-existing biases related to gender or ethnicity may be exacerbated. Therefore, to ensure that novel technologies are used to enhance societal wellbeing, promote good governance, reduce inequality and protect human rights, rather than to jeopardise these goals, the research, development and use of disruptive technologies require governance.
In light of the potential risks, a range of actors, including governments, international organisations and AI tech companies, are adopting various measures to address these concerns. In the case of the EU, the legally binding regulation known as the EU AI Act was published on 12 July 2024 and entered into force on 1 August 2024. Characterised by its immense transformative capacity, AI has the potential to deliver substantial benefits while simultaneously introducing risks to all spheres without exception. In light of the evolving global order, marked by escalating polarisation, rising geopolitical instability and the erosion of democratic institutions, a focus on EU security and defence was deemed most pressing, as the implications of AI in this field could be potentially life-threatening. Therefore, this policy brief draws attention to the impact of AI on military and political decision-making in shaping the EU’s foreign policy, an aspect that has been overlooked in the AI Regulation, and explains why this topic should be included in a revised version of the EU AI Act.
Problem Description & Background
In defining AI, the EU AI Act emphasises its autonomous and adaptive nature and notes that it can impact the real world through “predictions, content, recommendations, or decisions” (European Union, 2024b, p. 46). At present, AI has become an integral part of civilian life and is used to substitute human operators in interactions with clients, process vast amounts of text and provide various outputs, enable self-driving vehicles, and more. Notably, the effectiveness of AI in civilian applications inevitably facilitates the integration of this technology into the political and military domains. Therefore, contemporary AI legislation ought to include provisions addressing potential future risks in both domains.
Despite the broad extent of the EU AI Act, content analysis shows that the Act does not explicitly or fully address military AI. According to the text of the Regulation, its scope does not cover “AI systems […] put into service, or used with or without modification exclusively for military, defence or national security purposes” as well as “AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes” (European Union, 2024b, p. 45). It is important to note, however, that the Regulation does apply where such systems are also employed for non-military purposes. Specifically, the legislation states: “if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used outside those temporarily or permanently for other purposes, for example, civilian or humanitarian purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation” (European Union, 2024b, p. 7). Accordingly, while the Regulation excludes the use of AI solely for military purposes, it covers cases in which such systems serve a dual purpose, meaning that they are intended for both civilian and military applications. Nevertheless, this delineation between domains of use, and its resulting effect on the Regulation’s scope, warrants explanation.
Within the realist school of international relations theory, the concepts of “high politics” and “low politics” offer a useful framework for explaining the difference between the deployment of AI for civilian and military purposes. According to Viotti and Kauppi (2012, p. 40), while the former incorporates “military security or strategic issues”, the latter encompasses “economic and social issues”. Similarly, the EU justifies this distinction with reference to Article 4(2) and Title V, Chapter 2 of the Treaty on European Union (European Union, 2024b, p. 7), which distinguish between common and national security issues. In particular, the Treaty emphasises that national security falls within the domain of essential state functions and thus remains the sole responsibility of each member state. Moreover, a fully common EU defence policy emerges only if and when the European Council unanimously agrees and each member state adopts it in accordance with its own constitutional requirements. Furthermore, such a policy would have to be compatible with the NATO obligations of those EU member states that belong to the alliance (European Union, 1992, pp. 18, 38).
Notwithstanding this justification, the omission of this subject is a substantial concern given the extensive range of AI applications in the domains of defence and security. As Csernatoni (2024) contends, AI has the capacity to influence all aspects of violent conflict, including “defense innovation, industry supply chains, civil-military relations, military strategies, battle management, training protocols, forecasting, logistical operations, surveillance, data management, and measures for force protection”. Given the geopolitical shifts occurring in the region, as well as the integration of AI across various sectors at varying paces, including security and defence, it is evident that this novel technology will have a progressively more significant impact on all aspects of human life. Consequently, the following sections clarify why excluding the military use of AI from the EU AI Act is concerning and explain its potential subtle impact on the respective decision-making processes.
The omission of military AI is concerning
The EU AI Act does not regulate the application of AI “exclusively for military, defence or national security purposes”, and governs military AI only if it is also utilised for civilian purposes (European Union, 2024b, p. 45). Blurring the lines between civilian and military applications of AI, while excluding cases in which the technology is used solely for military objectives, reduces the normative and regulatory strength of the document. As a consequence, the document lacks the flexibility to respond swiftly to dynamic shifts in the geopolitical climate within the region.
To begin with, the EU draws attention to challenges to the “rules-based international order” by noting that various actors are seeking to weaken the application of “international humanitarian law and human rights law” and that “democratic backsliding has become a defining global trend” (European Union, 2024a, p. 5). Moreover, it acknowledges that even within the EU there is evidence of a “rise in authoritarianism, illiberalism and reactionary trends” (European Union, 2024a, p. 5). Regarding the security and defence of the region, the EU points out that the “global threat landscape has become more alarming and complex, in the context of increasing fragmentation and polarisation” and identifies the war in Ukraine as both a regional and international security threat (European Commission, 2024, p. 5). By drawing attention to the rapidly changing global and regional security landscape while failing to adequately address military AI, i.e. its role in the region’s common security and defence, the EU risks being unprepared for non-conventional and cyber-security threats. As for the benefits of AI for security and defence, military commands anticipate that deploying AI will give them a tactical advantage by enabling military operators to make faster and more efficient decisions (Reinhold et al., 2025, p. 3). For example, Scoble and Cronin (2024) posit that this novel technology can process “geopolitical data, intelligence reports, and historical trends” and thereby inform decision-makers about potential dangers, whether terrorist attacks or cyber assaults, and trace adversaries’ activities, including the build-up of armed forces. They also add that AI can be used to monitor social media and quickly address targeted misinformation if it constitutes part of an adversary’s destabilisation strategy. Furthermore, armed forces expect that autonomous vehicles will operate over larger territories and for extended periods, thereby becoming more adaptable to unpredictable landscapes, including signal-deprived areas (Reinhold et al., 2025, pp. 3-4). As a result, AI will transform the various facets of contemporary warfare. In light of the evolving geopolitical landscape, with its ramifications for regional security and defence, it is imperative that the use of AI exclusively for military applications be given greater consideration in a revised Regulation.
At the same time, the absence of a separate document regulating military AI, or of the topic in the current EU document, reflects a lack of coordinated procedures for responding to adversarial actions. In the context of the earlier discussion of the distinction between high and low politics, one can expect that EU member states will at some point be unevenly prepared to protect their own political and territorial integrity. Such a scenario may emerge as a consequence of the prevailing economic disparities among EU member states, which in turn affect governments’ capacity to research, develop and utilise military AI, or to procure it. Moreover, not all EU member states are NATO members, meaning they do not benefit from NATO’s collective defence guarantees, which provide military protection in the event of an armed attack. As a result, the same destabilising situation can have different implications for individual member states. In such cases, the absence of common standards, reporting procedures and AI models poses risks not only to the targeted state but to the collective security of the entire region. To illustrate this point, consider a scenario in which each EU member state employs AI models trained on diverse data sets and/or possessing varying levels of computing capability. In the event of a cyber-attack targeting a single European state, the remaining EU governments would likely interpret and respond to the incident in divergent ways. Despite the shortcomings of the EU AI Act regarding military AI, Csernatoni (2024) argues that the European Defence Fund is a crucial platform for facilitating cooperation in this domain. As outlined in the relevant legislation, the Fund’s primary objective is to promote “investment in joint research and in the joint development of defence equipment and technologies, thereby encouraging joint procurement and joint maintenance of defence equipment and technologies” (European Union, 2018, p. 30). Taken together, what is missing is a legislative instrument that would facilitate the creation of shared standards. Therefore, given the far-reaching consequences of excluding military AI, the EU should act without delay and revise the existing AI Act.
AI and its impact on military decision-making
While the extant literature on military AI primarily focuses on the application of this novel technology in scenarios where conflicts have already commenced, the pre-conflict phase remains largely unexamined (Erskine & Miller, 2024; Vold, 2024). Risks in this phase may arise when political and military elites augment decision-making by delegating tasks to AI systems and/or accepting their information and recommendations without adequate review. This is particularly relevant to cases of monitoring the behaviour of adversaries (both state and non-state actors), predicting the likelihood and geographical location of potential conflicts and, subsequently, developing strategies to address hypothetical threats. Therefore, this policy brief aims to explore how this emerging technology could affect political and military decision-making processes, paying particular attention to its potential to escalate existing conflicts or create new sources of tension.
In order to gain a more profound understanding of the significance of this aspect of military AI, it is helpful to refer to the notion of “structural risks” and the principles of Just War Theory. As for structural risks, Zwetsloot and Dafoe (2019) provide a compelling explanation by arguing that AI systems “will both shape and be shaped by the (often competitive) environments in which they are developed and deployed”. In other words, the political and military decision-making process will be influenced by the unique characteristics of these emerging technologies, including both their advantages and inherent risks. At the same time, the combination of political authority (through legislation and directives) and substantial financial resources empowers political and military institutions to influence the trajectory of AI development in line with national interests, potentially by mandating specific features or capabilities. Indeed, shifts in the geopolitical landscape of the world and the region, on the one hand, and the advantages of utilising AI to pursue a nation’s interests, on the other, together transform every stage of warfare. While real-world cases of the deployment of military AI in Ukraine and Gaza demonstrate how the technology is currently being utilised in conflict, its application to enhance human decision-making exemplifies its potential use in the pre-conflict phase. It is evident that in the near future the advantages of AI systems over humans, such as constant readiness to work, unflagging attention to detail, a vast capacity to store and recall information, freedom from emotional interference, and logical reasoning, will render human-machine teams a reality for governments worldwide. It is therefore reasonable to anticipate, and to be concerned, that key foreign policy decisions may soon no longer be exclusively determined by human actors. Consequently, the revised EU AI Act should take such a scenario into account, as this kind of future may arrive unexpectedly soon.
While the concept of structural risks highlights the reciprocal influence between technology and decision-makers, Just War Theory specifically addresses the ethical dimensions of armed conflict. Within this theoretical approach, there are two key assumptions related to waging war that provide justification for including military AI in the EU AI Act. While the first tenet, “justice of the war”, denotes that engaging in war is “morally justified”, the second, “justice in the war”, refers to morally acceptable ways of waging a war (Fotion, 2000, p. 22). The second assumption pertains to real-world instances of the deployment of military AI in conflict zones. For instance, the Israeli military has been reported to deploy AI-powered systems such as “The Gospel”, “Lavender” and “Where’s Daddy” that recommend potential buildings where militants are located, determine the identities of alleged Hamas members and monitor the movement of militants by tracking their phone signals (Serhan, 2024). If AI is employed to formulate reactive strategies and/or operate diverse weapon systems, both novel and conventional, and thus shape the course of a conflict, it is only a matter of time before it is deployed in the pre-war period as well. At this stage, military AI can exert a significant influence on decision-makers. Given the multitude of flaws present in AI models, decision-makers could be exposed to an inaccurate description of reality, which could in turn lead them to resort to the use of force rather than resolving a conflict peacefully. It is evident that the present AI Act’s neglect of the subtle impact of this technology on political and military decision-making is a shortcoming that requires thorough revision. In order to address it, the scope of the new Act should be broadened to allow for greater flexibility in responding to the region’s collective security and defence needs in the near future.
Implications of using AI in pre-conflict decision-making
The deployment of AI to support decision-making processes in high-stakes political scenarios could potentially result in adverse outcomes, including the erosion of human accountability, the introduction of ethical and operational issues, the impairment of human judgment in reaching optimal decisions, and the misinterpretation of complex situations involving ambiguous or conflicting signals.
To start with, there is growing public reliance on AI, with routine tasks increasingly delegated to automated systems. Delegating data-intensive tasks to AI systems and reallocating the freed-up time to tasks requiring human creativity, supervision and decision-making is a rational course of action. Nevertheless, it is important to acknowledge the potential drawbacks of diminishing human agency. According to evidence-based research, “individuals and teams that rely on AI-driven systems often experience ‘automation bias’, or the tendency to accept without question computer-generated outputs” (Erskine & Miller, 2024, p. 139). The ramifications of acting on AI output that has not been duly verified are exemplified by the case of a Palestinian worker who was erroneously apprehended by Israeli police after Facebook’s translation service rendered an Arabic phrase that read “good morning” into Hebrew as “attack them” and into English as “hurt them” (Berger, 2017). At first glance, the consequences of such an error may appear minor and limited to the individual level. Nevertheless, one should not disregard the possibility of a snowball effect. In the context of political and military decision-making, machine-generated errors that are not adequately reviewed and corrected by humans can lead to misinterpretations, potentially escalating situations and triggering broader, unintended consequences. While it may be posited that the probability of such errors decreases as AI systems become more sophisticated, it is crucial to acknowledge the sheer volume of data that AI models process. It could be argued that Article 14 of the EU AI Act on human oversight already cautions against “automatically relying or over-relying on the output produced by a high-risk AI system”; however, this clause applies to civilian and dual-use AI, excluding purely military applications (European Union, 2024b, p. 60). Trusting AI models that can provide false information and/or recommendations, whether due to malfunction or inherent system biases, can indeed be costly, heightening tensions between states. Where the output falls short of absolute precision, the reliability of AI-provided information or recommendations is questionable, and political and military decision-makers should apply it with caution. Without rigorous scrutiny of AI-generated outputs, individuals may gradually become accustomed to relying on machine-made decisions and may even come to question their own judgement. This could result in a situation where real-world decision-making is no longer exclusively in human hands. Such a scenario would present considerable challenges to international law, which is fundamentally premised on the principle of human accountability in high-stakes decision-making.
At the same time, the automation bias discussed above can be exacerbated by the multitude of flaws that have already been detected in machine learning (ML) models. Dhabliya et al. (2024, p. 2) argue that biases in ML algorithms stem from the flawed data on which they are trained. They explain (2024, pp. 4-6) that this happens when training data does not accurately reflect the group the model is designed to predict, when the model’s design erroneously prioritises specific groups, or when labelling decisions are subjective and/or based on historical prejudices. Moreover, developers may use outdated data, make mistakes while collecting data, or merge datasets without considering differences in data patterns or structures, and algorithm outputs may in turn shape users’ engagement. Importantly, according to the authors (2024, pp. 4-6), these biases can result in unjust and inaccurate forecasts, partial decisions, unequal treatment of particular groups, reinforcement of existing biases, misleading findings or projections, difficulty adapting to new circumstances, and the strengthening of users’ assumptions and judgements in ways that further deepen social divides. While Dhabliya et al. address ML biases in general, Horowitz and Scharre (2021) analyse this topic within the military context. According to the scholars (2021, p. 10), while this technology can be utilised for “early warning and forecasting adversary behavior”, one should be aware that AI models demonstrate opacity and break down when encountering shifts in data patterns. In other words, an AI model’s path to a specific conclusion cannot be explained, and its reliability is contingent upon the stability of the conditions on which it was originally trained. Notably, the use of military AI in armed conflicts has already attracted substantial criticism from human rights organisations: the deployment of AI systems has been cited as a factor behind disproportionate casualties among the civilian population and the massive scale of destruction (Frankel, 2024; Serhan, 2024). Utilising AI in the pre-conflict phase is inevitable, and it is merely a matter of time before AI applications permeate and directly impact military and political decision-making processes. In view of the numerous biases that have already been identified in such models, and the strong ethical concerns associated with them, it is recommended that the current EU AI Act be amended to include provisions on military AI.
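To make the brittleness described above more concrete, the following minimal sketch (a hypothetical illustration in Python, not drawn from any of the cited systems or sources) trains a simple classifier on two synthetic indicators of adversary activity and shows its accuracy collapsing towards chance once the relationship between one indicator and the outcome reverses, i.e. once the data pattern shifts away from the conditions on which the model was trained.

```python
# Minimal illustrative sketch (hypothetical): a forecasting model that performs well
# on the historical pattern it was trained on breaks down once that pattern shifts,
# echoing the brittleness noted by Horowitz and Scharre (2021).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def simulate(n, shift=0.0):
    """Generate a binary outcome (e.g. 'escalation' vs 'no escalation') and two
    noisy indicator features; under shift=1.0 the second indicator's relationship
    to the outcome is fully reversed."""
    y = rng.integers(0, 2, n)
    x1 = y + rng.normal(0.0, 1.0, n)                         # indicator that stays stable
    x2 = (1.0 - 2.0 * shift) * y + rng.normal(0.0, 1.0, n)   # indicator that drifts
    return np.column_stack([x1, x2]), y

# Train on historical (stable) conditions only.
X_train, y_train = simulate(5000, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on stable versus shifted conditions.
for label, shift in [("stable conditions", 0.0), ("shifted conditions", 1.0)]:
    X_test, y_test = simulate(5000, shift=shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{label}: accuracy = {acc:.2f}")  # falls towards chance under shift
```

In a real forecasting pipeline the indicators, their drift and the model are all far more complex, which makes this kind of silent degradation much harder to detect and is precisely why the brief argues for mandatory human oversight of model outputs.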
While the aforementioned issues relate primarily to the novel technology and human agency, the peculiarities of the decision-making process itself should not be dismissed. In this regard, it is important to note that potential risks associated with the utilisation of AI in the pre-conflict decision-making phase also arise from human cognitive limitations. While in an ideal setting rational decision-makers endeavour to make the best decision possible, i.e. they would be “objectively searching all information for the best outcome”, in reality the cognitive capabilities of humans are bounded, forcing them to “select an alternative that is acceptable”, i.e. to satisfice (Mintz & DeRouen, 2010, p. 68). As a result, military AI, which accompanies the decision-making process thanks to its significantly superior data-processing capabilities compared with the human brain, may blur the distinction between “optimal” and “satisficing” decisions. Despite the potential for enhanced outcomes through the integration of human expertise and the rapid computational capacity of AI, it is crucial to recognise the significant risks involved in the realm of territorial and political security. To illustrate this point, consider a scenario in which a government fails to adequately review the output provided by AI regarding the behaviour of an adversary (non-)state actor. Owing to the previously mentioned flaws associated with AI, the political and military elite may ultimately reach a decision with adverse consequences. Notwithstanding considerable advances in the field of AI, predictions remain inherently imperfect, because international affairs, rooted as they are in human interactions, cannot be fully anticipated even with established international laws and norms, especially given the growing complexity of the global landscape. Effective forecasting therefore necessitates the consideration of a multitude of variables. According to Horowitz and Scharre (2021, p. 10), human judgement is required when operating in new and unfamiliar situations, as AI systems often perform inadequately in such contexts. Given today’s polarising global order and rapid technological advancement, the time available to make decisions contracts, thereby increasing the probability of erroneous decision-making. In situations involving high stakes, including those pertaining to the EU’s common defence and security, political and military elites must recognise that inadequate supervision of the information and suggestions provided by AI models can hinder the ability to reach optimal decisions. Indeed, such a failure can not only exacerbate existing interstate conflicts but also engender new sources of diplomatic, political and military tension.
Finally, despite steady progress in the field of military AI, it would be imprudent to disregard the fact that, irrespective of the extent of training, such innovative technology interprets the human world and its signals differently from humans. While in conventional decision-making scenarios actors typically rely on professional intuition and exhibit satisficing behaviour, the increasing integration of AI, which lacks these features, may lead to an inadequate interpretation of the situation. According to Baker et al. (2025), researchers have recently found that chain-of-thought reasoning models, whose “thinking” occurs in language comprehensible to humans, can mislead users, fail to complete complex tasks, or underperform on coding-based tests. It is evident that providing nuanced recommendations based on monitoring real-world data constitutes a genuinely challenging task for an AI system, and reliance on such models to support decision-making could therefore be dangerous. As Johnson (2021, p. 175) contends, AI algorithms “programmed to optimize pre-programmed goals” may misinterpret situations of double signalling, that is, when a rival actor simultaneously demonstrates determination to engage in conflict and attempts to avert or ease tensions. One relevant example of such a challenging situation stems from China’s behaviour in maritime disputes. Song and Kim (2024, pp. 660–661) posit that the Chinese government’s approach to tensions in the East and South China Seas is characterised by a search for balance between appeasing the global public and the domestic population. On the one hand, the state is cautious not to provoke nationalist sentiments, as this could be interpreted by neighbouring states as a manifestation of aggression. On the other, public sentiment could shift towards animosity in response to perceived government retreats on matters of territorial integrity and sovereign rights. Consequently, the Chinese government sends dual signals to global and domestic audiences, in both instances invoking international law: in the former case it demonstrates “commitment to international law and support for peaceful resolutions”, while in the latter it focuses on “China’s ownership of disputed islands via legality and history” (Song & Kim, 2024, p. 661). Furthermore, Scoble and Cronin (2024) draw attention to the fact that AI models “may overlook non-quantifiable factors, such as cultural context or human psychology”. In interstate affairs, particularly those involving conflicting parties, it is crucial to conduct a comprehensive analysis of all signals, including those designated as sensitive intelligence, a task further complicated by the unique characteristics of the cultural environment and human behaviour. Consequently, even partial reliance on AI algorithms in such high-stakes scenarios can lead to the escalation of conflicts that could otherwise be avoided.
Taking into account the aforementioned human cognitive limitations, the growing reliance of people on novel technology, the variety of system biases and the complexity of international affairs, improperly supervised AI designed to support the decision-making process can adversely impact the pre-conflict phase. It is therefore essential that the topic of using AI exclusively for high-politics matters, which, as part of an extended decision-making chain, influences the EU’s foreign-policy stance, be addressed in the new AI legislation.
Policy Options & Recommendations
AI is gradually permeating all spheres of life and, as a dual-use technology, offers advantages while also presenting risks. The technology itself is not inherently problematic; rather, it is the manner of its deployment that raises concerns. The present EU AI Act does not adequately address the potential consequences of the increasing reliance on AI as a means to inform political and military decisions regarding the security and defence of the EU. Therefore, in light of the far-reaching consequences of military AI, it is recommended that the EU AI Act be reviewed and several changes incorporated.
To begin with, the current Regulation delineates between the spheres of high and low politics, covering cases in which AI systems are limited to civilian use or serve dual-use (civilian and military) purposes. This approach requires further refinement through the inclusion of explicit provisions for AI systems developed solely for military application. It is erroneous to assume that the use of AI to support decision-making in the pre-war phase influences only the foreign policy of individual EU member states. On the contrary, the deployment of AI is not constrained by geographical or political boundaries, and the security and defence of the EU necessitate a unified stance on this matter. It is feasible to collaborate on matters of high politics without compromising national interests, and the European Defence Fund serves as an example of this. The revised EU Act could include provisions establishing fundamental rules and shared standards regarding the use of machine-aided support in the surveillance of rival (non-)state actors’ behaviour, the analysis of the respective intelligence data (including visual material), the forecasting of potential conflict situations, and the development of a pre-emptive alert plan. In particular, the new Regulation should underscore that responsibility for decision-making in the pre-conflict phase rests solely with human actors; each stage of the machine-aided decision-making process therefore necessitates human oversight. Furthermore, the new EU AI Act should address the issue of automation bias and call on governments to establish internal accountability procedures. For instance, the degree of accountability borne by human operators could be proportionate to the extent of their reliance on AI systems. This would ensure that individuals, aware of the sanctions and reputational risks, adopt a critical stance towards machine outputs. At the same time, the new Regulation should require that humans be able to oversee the AI’s “thinking process” and thus, in the case of a model’s misbehaviour, disregard its results. Consequently, models lacking human interpretability should not be employed for military purposes in any capacity.
Secondly, the new EU AI Act should warn member states of the potentially dangerous scenarios resulting from complete reliance on AI output, whether in the form of information or recommendations. In light of the inherent biases in AI models, policymakers are obliged to adopt a critical stance when evaluating machine-generated outputs. This implies that any output which favours coercive measures over diplomatic solutions should be double-checked. One way to conceptualise the ramifications of machine-aided decision-making that might give rise to novel conflicts or intensify prevailing ones is to draw comparisons with the ongoing conflicts in Gaza and Ukraine, which exemplify the ethical and operational risks associated with the use of AI-enhanced weaponry in modern warfare. The new legislation should outline the extent of casualties among the civilian population, the scale of destruction in war zones, and the degree of non-material damage, particularly as AI continues to be actively deployed in these armed conflicts. By adopting these measures, the revised AI Regulation would provide a comprehensive overview of AI-assisted decision-making across all phases of conflict, i.e. both before and during it.
Thirdly, the revised AI Act should include clauses cautioning decision-makers about the existential risk posed by AI, even if the probability of disastrous outcomes, such as human extinction and/or the gradual loss of decision-making power on a wide scale, remains low. Crucially, the stakes are especially high when decisions informed by AI are made by nuclear powers. Five EU member states exercise varying degrees of involvement with nuclear weapons: France possesses its own arsenal, while Italy, the Netherlands, Germany and Belgium host US nuclear weapons under NATO nuclear-sharing arrangements. In light of the growing reliance on AI models and the potential emergence of human-machine teams, it is particularly necessary to emphasise human oversight in command-and-control decisions. As a result, the revised AI Act should alert member states to the potential consequences of using military AI in decision-making and indicate the extent to which such risks can be mitigated or reversed.
Finally, the revised Regulation should grant legal protection to scientists involved in the research and development of AI technologies that could be used exclusively for military purposes in the pre-conflict phase. A particularly pertinent historical illustration dates from the Cold War: on 26 September 1983, a Soviet operator named Stanislav Petrov received a signal from a satellite-based early warning system alerting him to a potential missile attack by the US (Bennett, 2022, pp. 19-21). Petrov concluded that the system had malfunctioned and that the American government would not have taken the decision to attack the USSR. He later explained that Soviet scientists had been pressured by political and military elites to install the early warning system despite unresolved technical issues. Similarly, the potential for AI-generated outputs to instigate new conflicts of varying intensity and/or intensify existing tensions should be considered a compelling argument against allowing political and military elites to exert such pressure today. The inclusion of such an article would prevent policymakers from placing undue pressure on scientists and thereby forcing them to use untrustworthy AI models. It is imperative that the legislation ensures that only responsible AI is used to support political and military decision-making processes.
In light of the aforementioned recommendations, the revised EU AI Act should include dedicated chapters addressing the exclusive deployment of AI for military purposes. It should also introduce provisions emphasising the need to establish fundamental rules and shared standards for early warning and forecasting procedures. Additionally, the legislation should outline a range of scenarios illustrating how reliance on AI-generated outputs could exacerbate existing conflicts or ignite new tensions. Finally, it must ensure legal protections for researchers, safeguarding them from political or military coercion to disclose or implement improperly tested AI models.
Conclusion
AI has enormous transformative power and will undoubtedly change every aspect of human life. Yet the current EU AI law covers only civilian and dual-use AI and excludes military AI. This policy brief has addressed this shortcoming and argued that the use of AI in strategic decision-making can have a negative impact not only on the foreign policy of an individual state but also on the EU as a whole, by exacerbating existing conflicts and/or creating new tensions. We are witnessing growing geopolitical instability in the region and the world, the weakening of democratic institutions and international law, and increasing polarisation. Given AI’s rapid data-processing and forecasting capabilities, governments will undoubtedly be tempted to use AI models to aid the decision-making process. Of particular concern is the use of military AI not only during conflict but also in the pre-conflict phase. Given the known biases inherent in AI models and the complexities of human decision-making, relying on the information and recommendations provided by a machine without adequate verification constitutes a significant risk; in the worst-case scenario, it could even present an existential threat.
Although the EU AI legislation distinguishes between low and high politics, the case of military AI requires special attention. Given its far-reaching implications, the current legislation should be revised to incorporate a dedicated chapter on the use of AI for exclusively military purposes. In this way, the EU could establish common rules for minimum cooperation in this field. By doing so, the EU would not only strengthen its preparedness for security and defence challenges affecting the entire region but would also set a global example in the governance of military AI. In the former respect, a revised AI Act would offer rules for increasing intergovernmental cooperation in the field of military AI and, most importantly, would call for the research, development and subsequent use of transparent and responsible AI; it would also provide a joint algorithm of action in the event of an emergency, enabling a response aimed at minimising harm. In the latter respect, the EU is vocal, both domestically and in its external action, in its commitment to democratic and liberal values, the utmost respect for human rights and human dignity, and the principles enshrined in the United Nations Charter, and thus has a moral responsibility to contribute to a more peaceful and secure world order. This could be achieved by setting an example in efforts to regulate a technology with far-reaching implications for the future of humanity. Consequently, the EU should address the issue of military AI as soon as possible, as the window for action is shrinking; otherwise it may become too late to prevent and/or reverse its negative effects, given the swift pace of technological advancement and growing geopolitical instability.
Bibliography
Baker, B., Huizinga, J., Gao, L., Dou, Z., Guan, M. Y., Madry, A., Zaremba, W., Pachocki, J., & Farhi, D. (2025). Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2503.11926
Bennett, S. (2022). System Reliability: A Cold War Lesson. In G. Adlakha-Hutcheon & A. Masys (Eds.), Disruption, Ideation and Innovation for Defence and Security (pp. 13–25). Springer International Publishing. https://doi.org/10.1007/978-3-031-06636-8_2
Berger, Y. (2017, October 22). Israel arrests Palestinian because Facebook translated “good morning” to “attack them.” Haaretz. https://www.haaretz.com/israel-news/2017-10-22/ty-article/palestinian-arrested-over-mistranslated-good-morning-facebook-post/0000017f-db61-d856-a37f-ffe181000000
Boulanin, V. (2019). The future of machine learning and autonomy in nuclear weapon systems. In V. Boulanin (Ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. Volume I. Euro-Atlantic perspectives (pp. 53–62). SIPRI. https://www.sipri.org/publications/2019/research-reports/impact-artificial-intelligence-strategic-stability-and-nuclear-risk-volume-i-euro-atlantic
Csernatoni, R. (2024, July 17). Governing Military AI Amid a Geopolitical Minefield. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en
Dhabliya, D., Dari, S. S., Dhablia, A., Akhila, N., Kachhoria, R., & Khetani, V. (2024). Addressing Bias in Machine Learning Algorithms: Promoting Fairness and Ethical Design. E3S Web of Conferences, 491, 02040. https://doi.org/10.1051/e3sconf/202449102040
Erskine, T., & Miller, S. E. (2024). AI and the decision to go to war: Future risks and opportunities. Australian Journal of International Affairs, 78(2), 135–147. https://doi.org/10.1080/10357718.2024.2349598
European Commission. (2024). Commission staff working document: Addendum to the proposal for a Regulation of the European Parliament and of the Council establishing the European Defence Industry Programme and a framework of measures to ensure the timely availability and supply of defence products (‘EDIP’) (COM(2024) 150). SWD(2024) 515 final, 1–92.
European Union. (1992). Treaty on European Union (consolidated version 2016). Official Journal of the European Union, C 202, 1–388.
European Union. (2018). Regulation (EU) 2018/1092 of the European Parliament and of the Council of 18 July 2018 establishing the European Defence Industrial Development Programme aiming at supporting the competitiveness and innovation capacity of the Union’s defence industry. Official Journal of the European Union, L 200, 30-43.
European Union. (2024a). European Parliament resolution of 28 February 2024 on human rights and democracy in the world and the European Union’s policy on the matter – annual report 2023 (2023/2118(INI)). Official Journal of the European Union, C, 1–24.
European Union. (2024b). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L, 1-144.
Fotion, N. (2000). Reactions to War: Pacifism, Realism and Just War Theory. In A. Valls (Ed.), Ethics in International Affairs: Theories and Cases (pp. 15–32). Rowman & Littlefield.
Frankel, J. (2024, January 11). Israel’s military campaign in Gaza seen as among the most destructive in recent history, experts say. AP News. https://apnews.com/article/israel-gaza-bombs-destruction-death-toll-scope-419488c511f83c85baea22458472a796
Horowitz, M. C., & Scharre, P. (2021). AI and International Stability: Risks and Confidence-Building Measures. Center for a New American Security. https://www.jstor.org/stable/resrep28649
Johnson, J. (2021). Artificial Intelligence and the Future of Warfare: The USA, China, and Strategic Stability (1st ed). Manchester University Press.
Mintz, A., & DeRouen, K. (2010). Understanding Foreign Policy Decision Making. Cambridge University Press.
Schraagen, J. M. (2023). Responsible use of AI in military systems: Prospects and challenges. Ergonomics, 66(11), 1719–1729. https://doi.org/10.1080/00140139.2023.2278394
Schwab, K. (2015, December 12). The Fourth Industrial Revolution. Foreign Affairs. https://www.foreignaffairs.com/world/fourth-industrial-revolution
Scoble, R., & Cronin, I. (2024, December 11). AI in Military Applications. Unaligned Newsletter. https://www.unaligned.io/p/ai-in-military-applications
Serhan, Y. (2024, December 18). How Israel Uses AI in Gaza—And What It Might Mean for the Future of Warfare. Time. https://time.com/7202584/gaza-ukraine-ai-warfare/
Song, E. E., & Kim, S. E. (2024). China’s dual signalling in maritime disputes. Australian Journal of International Affairs, 78(5), 660–682. https://doi.org/10.1080/10357718.2024.2394179
Viotti, P. R., & Kauppi, M. V. (2012). International Relations Theory (5th ed). Longman.
Zwetsloot, R., & Dafoe, A. (2019). Thinking about Risks from AI: Accidents, Misuse and Structure. The Lawfare Institute. https://www.lawfaremedia.org/article/thinking-about-risks-ai-accidents-misuse-and-structure