The debate over artificial intelligence in warfare is intensifying: established tech companies grapple with ethical concerns while specialized startups aggressively pursue military applications. While Anthropic hesitated to grant the US military unrestricted access to its models, companies like Smack Technologies are forging ahead, developing AI models designed specifically for combat operations.
The Rise of Military AI
Smack Technologies, which recently secured $32 million in funding, aims to surpass existing large language models like Claude in military planning and execution. Unlike Anthropic, which sought restrictions on autonomous weapons use, Smack appears less constrained by ethical limitations. CEO Andy Markoff, a former US Marine Forces Special Operations commander, emphasizes that accountability rests with human operators: “To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform.”
The company’s approach mirrors the trial-and-error method behind Google DeepMind’s AlphaGo, adapted for war-game scenarios with expert validation. Despite a smaller budget than mainstream AI labs, Smack is investing heavily in training its first military AI models. This comes as the Pentagon has butted heads with Anthropic over a $200 million contract, declaring the company a supply chain risk because of its restrictions on autonomous weapons development.
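To make the idea concrete, here is a minimal, hypothetical sketch of such a trial-and-error loop: a policy plays episodes in a toy simulated environment, and only trajectories that pass an expert-validation gate are kept for training. Every name here (ToyEnv, expert_validates, the acceptance rule) is an illustrative assumption, not a description of Smack’s actual system.

```python
import random

class ToyEnv:
    """Stand-in for a simulated war-game environment (purely illustrative)."""
    def reset(self):
        self.steps = 0
        return 0  # initial state

    def step(self, action):
        self.steps += 1
        # Toy objective: reward alternating actions; a real system would score
        # mission outcomes judged inside the simulation.
        reward = 1.0 if action == self.steps % 2 else -1.0
        done = self.steps >= 10
        return self.steps, reward, done

def rollout(env, policy):
    """Play one trial-and-error episode and record (state, action, reward)."""
    state, done, trajectory = env.reset(), False, []
    while not done:
        action = policy(state)
        state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
    return trajectory

def expert_validates(trajectory):
    """Placeholder for the human-expert gate the article describes: only
    trajectories an expert deems plausible feed back into training."""
    return sum(r for _, _, r in trajectory) > 0  # toy acceptance rule

# Generate episodes with a random policy and keep the expert-approved ones.
policy = lambda state: random.choice([0, 1])
accepted = [t for t in (rollout(ToyEnv(), policy) for _ in range(100))
            if expert_validates(t)]
print(f"{len(accepted)} of 100 episodes passed expert validation")
```

The design point is the gate itself: unlike AlphaGo’s purely self-supervised reward, a war-game setup would insert human judgment between simulation and training data.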
The Limits of General-Purpose AI
Markoff argues that current general-purpose models like Claude are inadequate for military use: they excel at summarizing reports but lack the contextual understanding of the physical world needed to control hardware or accurately identify targets. In his view, LLMs are nowhere near capable of reliable target identification.
However, the reality is more complex. The US and at least 30 other nations already deploy autonomous weapons systems, including missile defenses that require superhuman reaction times. Rebecca Crootof, a legal scholar at the University of Richmond, notes that weapon systems already incorporate varying degrees of autonomy.
Automation and Decision Dominance
Smack’s models are designed to automate mission planning, a process that remains largely manual in many military contexts. In a potential conflict with a near-peer adversary like Russia or China, Markoff believes automated decision-making could give the US a critical advantage. Yet experiments at King’s College London raise serious questions: in simulated war games, LLMs have been shown to escalate conflicts toward nuclear use.
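For readers curious how such experiments are typically structured, here is a simplified, hypothetical harness: a language model (stubbed out below with a random choice) plays one side of a crisis simulation while the harness tracks how its chosen actions move an escalation index. None of this reflects the King’s College London team’s actual code; every name and the escalation scale are assumptions for illustration.

```python
import random

# Hypothetical escalation scale mapping actions to changes in tension level.
ESCALATION = {"de-escalate": -1, "hold": 0, "show of force": 1,
              "limited strike": 2, "full-scale attack": 3}

def ask_model(scenario: str) -> str:
    """Placeholder for a real LLM API call; here it just picks at random."""
    return random.choice(list(ESCALATION))

def run_episode(turns: int = 8) -> list[int]:
    """Run one simulated crisis and record the escalation level per turn."""
    level, history = 0, []
    for turn in range(turns):
        scenario = (f"Turn {turn}: current escalation level is {level}. "
                    f"Choose one action: {', '.join(ESCALATION)}.")
        action = ask_model(scenario)
        level = max(0, level + ESCALATION[action])
        history.append(level)
    return history

# Researchers repeat this across many episodes and models, then compare how
# often trajectories drift toward the highest escalation levels.
print(run_episode())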
The war in Ukraine has underscored the value of low-cost, semi-autonomous systems built with commercial technology. The US Navy is already testing such systems in the Persian Gulf, including for drone identification. Experts like Anna Hehir of the Future of Life Institute warn against unchecked AI deployment, citing the unreliability and unpredictability of current systems. She argues that AI cannot reliably distinguish between combatants and civilians, let alone recognize surrender.
The Chaos of Real-World Warfare
Markoff acknowledges the inherent unpredictability of military operations, noting that even the best plans rarely unfold as expected. His combat experience reinforces the need for human oversight: for him, the goal is not fully automating the kill chain, but enhancing decision-making in chaotic environments where speed and adaptability are crucial.
The development of specialized military AI is accelerating, driven by both strategic imperatives and commercial opportunities. The question remains whether these systems can deliver on their promise without exacerbating risks or undermining ethical boundaries.
Ultimately, the future of AI in warfare hinges on finding a balance between technological advancement, responsible governance, and the brutal realities of conflict.