“Artificial intelligence” (AI) has become one of the buzzwords of the decade, widely touted as a potentially important part of the answer to humanity’s biggest challenges, from addressing climate change to fighting cancer and even halting the ageing process. It is widely seen as the most important technological development since the mass adoption of electricity, one that will usher in the next phase of human evolution. At the same time, warnings that AI could lead to widespread unemployment, rising inequality, the emergence of surveillance dystopias, or even the end of humanity are worryingly convincing. States would, therefore, be well advised to actively guide the development of AI and its adoption in their societies.
For Europe, 2019 was the year of AI strategy development, as a growing number of EU member states put together expert groups, organised public debates, and published strategies designed to grapple with the possible implications of AI. European countries have developed training programmes, allocated investment, and made plans for cooperation in the area. 2020 is likely to be an important year for AI in Europe, as member states and the European Union will need to show that they can fulfil their promises by translating ideas into effective policies.
Despite these positive developments, Europeans generally pay too little attention to one aspect of the issue: the use of AI in the military realm. Strikingly, the military implications of AI are absent from many European AI strategies, as governments and officials appear uncomfortable discussing the subject (with the exception of the debate on limiting “killer robots”). Similarly, the academic and expert discourse on military AI tends to overlook Europe, focusing predominantly on developments in the US, China, and, to some extent, Russia. This is likely because most researchers consider Europe to be an unimportant player in the area. A focus on the United States is nothing new in military studies, given that the country is the world’s leading military and technological power. And China has increasingly drawn experts’ attention due to its rapidly growing importance in world affairs and its declared aim of increasing its investment in AI. Europe, however, remains forgotten.
Overall, there are many possible uses of AI in the military and security realm, most of which receive little public attention because the debate is dominated by killer robots. A comparative study of the three biggest European states reveals that France and Germany appear to be at opposite ends of the AI spectrum in Europe. France sees AI in general as an area of geopolitical competition and military AI in particular as an important element of French strategy. In contrast, Germany has been much more reluctant to engage with the topic of AI in warfare, and appears uninterested in the geopolitics of the technology. Military AI seems to be an acceptable topic of discussion for Germany only in the context of arms control. For now, the UK is somewhere between these two positions. It is not as outspoken about military AI as France, but it is clearly interested in the military opportunities that AI provides. Independently of their governments’ positions, all three countries’ defence industries are developing AI-enabled capabilities.
As the development and operational use of AI-enabled military systems is still in its early days, European countries’ positions may align over time. If they do not, however, this could pose real problems for European defence cooperation. Given that the EU is investing a great deal of effort in this area through instruments such as Permanent Structured Cooperation and the European Defence Fund, intra-European disagreement on one of the most crucial new technologies is a cause for concern. This is particularly true for pan-European projects such as the Future Combat Air System (FCAS), a fighter jet project involving France, Germany, and Spain that is set to include various AI-enabled capabilities. If France pushes for greater AI development, potentially even leading to lethal autonomous weapon systems (LAWS), while Germany does not, this cooperation could soon run into problems.
In this context, the EU could play an important role in helping member states harmonise their approaches to military AI. The EU already acts as a coordinating power for European national AI strategies, with “ethical AI” as its guiding principle. A similar approach could work for military AI. The European Commission should draft a coordinating strategy for military AI, outlining its ideas for areas of development in which common European engagement would be particularly useful (such as sharing systems to train algorithms), while setting red lines (in areas such as the development and use of LAWS). The EU should ask member states to respond to this guidance by outlining their ideas on, and approaches to, AI. In this way, European states could take advantage of one another’s expertise in AI development while working together to improve Europe’s military capabilities.
‘Not Smart Enough: The Poverty of European Military Thinking on Artificial Intelligence’, policy brief by Ulrike Esther Franke, European Council on Foreign Relations (ECFR).