
Mapping the military AI industry


The involvement of industry is a key aspiration of arms control initiatives on the responsible application of artificial intelligence (AI) in the military domain. But little attention is given to who should represent ‘industry’ in these processes—which companies or segments of the private-sector AI supply chain are relevant for the policy context.

This backgrounder aims to provide an overview of the military AI industry to help policymakers, as well as civil society and academic researchers, to understand the wide variety of products, actors and relationships involved.

The logic of involving industry

As United Nations Secretary-General António Guterres noted in a recent report, many states consider it important to involve a range of stakeholders—including industry, the scientific community and civil society—in policy initiatives on responsible military applications of AI. That extends beyond efforts under the auspices of the UN and includes processes like the Summit on Responsible AI in the Military Domain (REAIM).

States have given two main reasons for engaging with industry in these discussions. The first is that industry actors have highly relevant technical expertise and experience. They can help policymakers to better understand the technology. They can also contribute to generating and refining ideas about harnessing the benefits of AI in the military domain, and to identifying and mitigating the risks associated with these technologies.

The second reason is that design choices made during the development of an AI system can impact the ability of armed forces to use it in line with relevant legal and ethical frameworks. As AI systems are largely developed and produced by the private sector, states see a need to engage with industry to understand how to put principles of responsible use into practice, as well as to communicate to industry the implications of their technologies for international peace and security.

The ‘AI stack’: unpacking the industry segments

While the reasons for involving industry are relatively clear, what (or whom) this means in practice is far less so. There is no representative body for the AI industry, and even the concept of an AI industry is less straightforward than it sounds.

AI is not a discrete technology but an umbrella term that covers a wide range of techniques and technologies provided by different firms in different ways. As a result, what counts as a ‘military AI company’ is open to debate. For instance, Palantir is considered a key player in the current military AI wave, even though it does not focus on building AI models. Rather, Palantir provides data analytics platforms that incorporate AI models from other firms. There are also firms that specialize in services that support the adoption of AI, such as data management and test, evaluation, verification and validation (TEVV), but do not themselves produce AI systems.

One way to think about the industry, and about the relationships between the products, services and actors involved in developing a military AI system, is the ‘stack’: a term drawn from computer science that refers to the layered ecosystem of software and hardware enabling applications of AI. This stack comprises three layers (see figure 1).

Figure 1. The military AI stack

The bottom layer is hardware and infrastructure. It comprises physical computational resources (like computer chips), storage and networks. Examples include data centres, fibre-optic cables, energy grids, and cloud infrastructure such as Microsoft Azure or Amazon Web Services.

On top of that is the foundational software and programming layer. This comprises foundation AI models (for example large language models), coding and training frameworks, and data pipelines that turn data and algorithms into machine intelligence. This layer provides the capabilities from which downstream applications of AI are built. It is consequential because it determines factors like the safety and security of resulting applications. For instance, a foundation model vulnerable to prompt injections (malicious instructions to override developer guardrails) provides an opportunity for adversaries to manipulate downstream military AI into producing misleading outputs. Examples of foundation models include OpenAI’s GPT model (which powers the AI ‘chatbot’ ChatGPT) and Anthropic’s Claude model—the model at the centre of a current dispute with the United States Department of Defense.

The third layer is the application layer. This is where AI is integrated into domain-specific systems designed to perform tasks for military end-users. Alongside foundation models, ‘narrow AI’ models, designed to perform specific, bounded tasks, may be integrated into these systems. The application layer also determines how humans can interact with the AI system, and it is where domain-specific risks associated with AI appear.

Military-specific applications are not limited to targeting support and use in autonomous weapons; they also include military support functions such as logistics, maintenance, and command and control.

Products and services in the first two layers of the stack are usually general-purpose or dual-use; it is only at the application layer that they become chiefly military-specific. Militaries rarely build an entirely separate stack from scratch, not least because the type of infrastructure required for the most powerful AI models is prohibitively expensive. Rather, they tend to selectively separate and govern parts of a largely shared commercial stack. For instance, the Claude model supplied to the US military (‘Claude Gov’) was customized for US national security customers for use in classified environments, while Claude is also used in a wide variety of commercial, public-facing applications.

The key actors

The military AI industry is not only diverse but also fast-changing. Military AI is big business, and many firms, new and old, want a slice of the pie. Consequently, the industrial landscape has evolved significantly over the past few years. New commercial relationships are being formed as new actors enter the ecosystem and as traditional arms companies, eager to secure a share of this lucrative market, acquire AI start-ups and develop AI-enabled products.

Companies involved in the provision of military AI can be divided into four categories, based on their business models (see figure 2).

Figure 2. Different categories of military AI industry actor, with examples

The first category is ‘defence primes’, large legacy arms producers that have direct, multi-year contracts with government for military programmes and capabilities. They also manage complex supply chains to deliver advanced systems, including ‘big-ticket’ items such as fighter aircraft, armoured vehicles, submarines and air-defence systems.

These defence primes have long been involved in the development and production of military AI capabilities, whether for maintenance and logistics or for integration into weapons. For example, the Phalanx close-in weapon system produced by RTX (previously Raytheon), which uses rule-based algorithms to identify, track and engage targets, was first developed in the 1970s. Israel Aerospace Industries (IAI) started developing the Harpy, a semi-autonomous anti-radar loitering munition that can use AI for target identification, in the late 1980s.

Because of their existing relationships with militaries, the defence primes are positioned at the upper layers of the stack. These companies have sought to keep up with the latest advances in AI by acquiring, subcontracting to and partnering with arms-industry start-ups and firms with specific competencies in AI and related fields. For example, Sweden’s Saab acquired the US machine-learning computer vision developer CrowdAI in 2023, and the French defence prime Safran acquired the AI geospatial analysis company Preligens (now Safran.AI) in 2024.

The second category comprises ‘neoprimes’ and ‘defence start-ups’, firms established much more recently than the defence primes. They tend to specialize in the provision of software and data products and services to arms markets, although some, such as Palantir and Anduril, also provide technologies for policing. Some of these companies offer niche products and services, such as autonomous drones (Skydio) or anti-swarm technology (Epirus). Some of the more prominent ones are known for battle-management software that aggregates and analyses data to support operational decision-making, including systems like Anduril’s Lattice, Shield AI’s Hivemind and Helsing’s Altra.

One major way neoprimes and defence start-ups differ from the defence primes is in how they approach innovation. Rather than waiting for defence ministries to articulate a specific demand before developing a product, neoprimes tend to invest in their own R&D, usually backed by venture-capital companies, and to market finished products to military customers.

The third category is often referred to as ‘big tech’: globally dominant technology corporations with diversified digital products and services and significant market capitalization. These companies are present in both civilian and military markets, and they often own and control the infrastructure, such as cloud platforms and data centres, that underpins digital products and services. Big tech companies frequently operate and control products and services at multiple layers of the AI stack. For example, both Meta and Google have been laying networks of deep-sea fibre-optic cables to support their cloud computing services, and both have been linked to military applications of AI.

Finally, there are the foundation model providers. These are firms that develop and maintain large-scale AI models capable of powering a wide variety of downstream tasks in different domains. Like the neoprimes and defence start-ups, these firms are relatively young—most were founded between 2010 and 2023, with funding from a mixture of venture capital and investments from big tech.

Originally, many of the foundation model providers were united against military use of their AI tools, but that stance has shifted over the past two years. For example, in January 2024 OpenAI rescinded a ban on its products being used for ‘military and warfare purposes’. Anthropic, which historically prohibited certain military uses of its technology, announced a partnership with Palantir and Amazon Web Services in 2024 to supply AI models for military purposes. Expanding into military markets gives these firms an opportunity to tap into national defence budgets, for example to fund the continued training of their foundation models.

Why do these nuances matter for policy debate?

The question for policymakers is not simply whether to engage industry in the military AI governance debate, but which industry actors to engage with. The different layers of the AI stack, and the companies involved in them, shape military applications of AI in distinct ways.

For engagement with industry in debates on the governance of AI in the military domain to be effective, it is essential to bring the most relevant actors to the table. For instance, in the context of debate on AI decision-support systems and what they mean for compliance with international humanitarian law, it might make sense to approach actors involved in the top two layers of the stack: companies that develop foundation models and those that integrate them at the application level into military decision-making environments. In contrast, companies that are involved in the AI stack because they produce graphics processing units are likely to have limited impact on issues central to the responsible military AI policy debate.

Clearer criteria are needed to help identify the most relevant actors. Such criteria could include the proximity of a firm to use-of-force decisions, the degree of influence a firm has over an AI-enabled system’s behaviour, and a firm’s functional role in linking other product and service suppliers.

It is also incumbent on policymakers to consider how they involve industry actors. While the arguments for involving them are strong, firms remain strategic actors with commercial interests. Participation in policy initiatives can give them an opportunity to frame issues, set standards and define best practices in ways that advantage certain business models over others. Policymakers must recognize these motivations if industry engagement is to support, rather than distort, efforts to govern military AI.

An important next stage towards effective global governance of AI in the military domain is a deeper discussion about what industry engagement should look like and what trade-offs a more focused approach could bring. It is high time to have that discussion.

With support from the Netherlands Ministry of Foreign Affairs, SIPRI is conducting a project examining the role of technology companies in the development of military AI technologies and norms.

ABOUT THE AUTHORS

Dr Alexander Blanchard is a Senior Researcher in the Governance of Artificial Intelligence Programme at SIPRI.
Dr Vincent Boulanin is Director of the Governance of Artificial Intelligence Programme at SIPRI.
Laura Bruun is a Researcher in the Governance of Artificial Intelligence Programme at SIPRI.