Artificial intelligence is already part of modern warfare, and its role is set to grow exponentially over the next three to five years. AI offers the chance to greatly enhance military operations, with increased precision and faster decision-making at a fraction of the cost. On the other hand, it presents significant security risks and ethical questions, and the rapid pace of development threatens to outpace regulatory efforts. This article examines the changes AI has brought, and will bring, to warfare in the 21st century.
Why This Matters
In 2017, Russian President Vladimir Putin delivered a chilling verdict on AI’s future importance in warfare: “The one who becomes the leader in this sphere [AI] will be the ruler of the world.” Russia, China, and the United States are spearheading what could be the next military revolution, and several other states are making great strides as well. The importance of harnessing artificial intelligence for defense will only increase in the years ahead.
A Paradigm Shift?
Few nations have the resources needed to build and maintain an advanced air force or navy. A Ford-class aircraft carrier, for example, costs over $13 billion to build and another $700 million annually to maintain, while the American F-35 program is the most expensive weapons program in history, with a projected lifetime cost of $2 trillion. AI and autonomous weapons systems threaten to upend this established order by offering comparable performance at a fraction of the cost.
Taking out one drone isn’t all that difficult, but neutralizing several drones working together is another matter. The United States and the United Kingdom are already exploring the possibilities of drone swarms in training exercises. As the swarm communicates, it provides an accurate real-time picture of the battlefield. With improvements to autonomous navigation and the relatively low cost of individual drones, it may soon be possible for one operator to oversee dozens, if not hundreds, of drones.
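To make the idea concrete, here is a minimal, purely illustrative Python sketch of how position reports from many drones might be fused into a single shared picture for one operator. Every name and figure in it is hypothetical; it is not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class Report:
    drone_id: int
    x: float      # east-west position, km
    y: float      # north-south position, km
    contact: str  # what the drone's sensor sees, empty if nothing

def fuse(reports: list[Report]) -> dict[str, list[tuple[float, float]]]:
    """Group every sensor contact by type to build one common picture."""
    picture: dict[str, list[tuple[float, float]]] = {}
    for r in reports:
        if r.contact:
            picture.setdefault(r.contact, []).append((r.x, r.y))
    return picture

# Four drones report in; the operator sees one fused picture, not four feeds.
swarm = [
    Report(1, 2.0, 3.5, "radar site"),
    Report(2, 2.1, 3.4, "radar site"),
    Report(3, 8.7, 1.2, "vehicle column"),
    Report(4, 5.0, 5.0, ""),
]
print(fuse(swarm))
# {'radar site': [(2.0, 3.5), (2.1, 3.4)], 'vehicle column': [(8.7, 1.2)]}
```

The point of the sketch is that the operator’s workload scales with the number of distinct contacts, not the number of drones, which is what makes one person overseeing dozens of aircraft plausible.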
The ongoing campaign against the Houthis in the Red Sea clearly demonstrates this cost asymmetry. The Iranian-backed proxy force is wreaking havoc on international shipping with cheap kamikaze drones. These one-shot weapons, used in the air and at sea, cost just a few thousand dollars apiece and require little maintenance or training to operate. The countermeasures currently used by the Americans and the British are ruinously expensive in comparison: shooting down a drone worth roughly $2,000 can reportedly require an interceptor missile costing around $2 million, a thousand-to-one exchange in the attacker’s favor.
An overwhelming swarm attack could have devastating consequences. The capacity for non-state actors to inflict enormous harm on civilian population centers is orders of magnitude greater with autonomous weapons.
Developing Countermeasures
The prospect of unmanned aircraft systems (UAS) dominating the battlefields of the future is not going unchallenged. A handful of tech firms are developing counter-UAS systems to take down enemy drones more efficiently and cost-effectively than conventional systems can. RTX is developing Coyote, essentially an anti-drone drone that can be launched from various platforms. Anduril, an upstart firm in the defense sector, is working on Roadrunner, a system somewhere between a missile and a drone; the name is a playful jab at RTX’s system.
Another defense startup, Epirus, is developing a weapon system to counter swarms of drones. Leonidas uses a high-power microwave (HPM) system to fry drone control systems, and it can be programmed to enforce a no-fly zone by differentiating between friendly and hostile drones. In 2023, the company secured a $66 million contract with the US Army to develop Leonidas, and the tech showed some promise in Army tests in August 2024.
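In software terms, the friend-or-foe logic described above might boil down to something like the toy sketch below. The allowlist design and every identifier in it are assumptions made for illustration, not Epirus’s actual implementation.

```python
# Illustrative only: a toy no-fly-zone filter that spares allowlisted
# friendly drones and flags everything else for engagement.
FRIENDLY_IDS = {"ALLY-017", "ALLY-042"}  # hypothetical transponder codes

def targets(detected_ids: list[str]) -> list[str]:
    """Return the detected drones that are not on the friendly allowlist."""
    return [d for d in detected_ids if d not in FRIENDLY_IDS]

print(targets(["ALLY-017", "UNKNOWN-3", "UNKNOWN-9"]))
# ['UNKNOWN-3', 'UNKNOWN-9']
```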
Lower Human Costs
Military spending may be rising globally (23 of 32 NATO allies now meet or exceed the 2% spending target), but armies are getting smaller. France, the UK, and the United States all face a recruitment crisis for their armed forces, and shifting demographics in East Asia will greatly diminish the recruiting pool for China, Japan, South Korea, and Taiwan. In the near future, there will not be enough military-age adults willing to serve. Conscription seldom produces quality soldiers, and forcibly drawing young people out of the labor force carries dire economic consequences. With fertility rates reaching crisis levels in most developed countries, AI will be an important tool for avoiding combat losses and reducing dependence on manned vehicles.
The United States is already investing huge sums in autonomous vehicles designed to support manned platforms in combat. The Collaborative Combat Aircraft (CCA) program is intended to reduce reliance on piloted aircraft, and it has attracted interest from Silicon Valley: Anduril beat out more established names in the defense sector to win a lucrative CCA contract. The company is also one of several developing underwater drones. China, Russia, and the United States are likewise pursuing quadrupedal unmanned ground vehicles (Q-UGVs), which could sharply reduce frontline combat losses. Unlike aerial drones, a Q-UGV can trip booby traps and mines that would otherwise kill soldiers, and it can be fitted with multiple attachments for engaging aerial and ground targets.
Faster Decision Making And Threat Analysis
US Air Force Colonel John Boyd developed the OODA loop to describe the decision-making process of combat pilots, drawing on his experience in the Korean War. OODA stands for “observe, orient, decide, act.” The concept was later applied to higher levels of combat and is standard doctrine across most Western militaries today. In modern warfare, the speed of your own decision-making, and the disruption of the enemy’s, are vital.
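As a rough illustration, the OODA loop can be thoughtent of as a repeating control cycle. The toy Python sketch below renders the idea in code; every function here is a hypothetical placeholder, not real military software.

```python
import random

def observe() -> float:
    """Observe: gather raw data (here, a stand-in sensor reading)."""
    return random.random()

def orient(reading: float) -> str:
    """Orient: place the data in context to form an assessment."""
    return "threat" if reading > 0.5 else "clear"

def decide(assessment: str) -> str:
    """Decide: pick a course of action from the assessment."""
    return "engage" if assessment == "threat" else "hold"

def act(action: str) -> None:
    """Act: carry out the decision, then the loop repeats."""
    print(f"action taken: {action}")

for _ in range(3):  # each pass through is one complete OODA cycle
    act(decide(orient(observe())))
```

Whoever completes this cycle faster, and more accurately, tends to dictate the fight.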
One of the most important aspects of the OODA loop is the first: observe. As the former British general and Prime Minister Arthur Wellesley (best known as the Duke of Wellington) said:
All the business of war, and indeed all the business of life, is to endeavour to find out what you don’t know by what you do; that’s what I called ‘guessing what was at the other side of the hill.’
In essence, AI could be the key to seeing what’s on the other side of the hill while drastically reducing the time it takes to make a decision. By seamlessly synthesizing vast quantities of data into one manageable interface, it would give the commander on the ground a huge advantage over a less informed adversary. AI can also free up a great many man-hours in threat analysis. As Army Colonel Richard Leach explained in an article for the Department of Defense:
Let AI identify key pieces of information and maybe do some of the basic analysis. Let the analysts focus on the hard problem set so they’re not wasting time, resources, and people.
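A toy version of that triage, sketched purely as an assumption of how such a filter might look, could be a scoring pass that surfaces only the items worth an analyst’s time:

```python
# Illustrative sketch: machine-score incoming intelligence items and pass
# only the hard or high-priority ones to human analysts. The scoring rule
# and all data are invented for this example.
reports = [
    {"id": 1, "keywords": ["convoy", "border"], "confidence": 0.9},
    {"id": 2, "keywords": ["weather"], "confidence": 0.95},
    {"id": 3, "keywords": ["launcher", "movement"], "confidence": 0.4},
]

HIGH_VALUE = {"convoy", "launcher", "movement", "border"}

def needs_analyst(report: dict) -> bool:
    """Flag items touching high-value topics, or where the model is unsure."""
    hits = HIGH_VALUE.intersection(report["keywords"])
    return bool(hits) or report["confidence"] < 0.6

for r in reports:
    print(r["id"], "-> analyst" if needs_analyst(r) else "-> auto-archive")
# 1 -> analyst, 2 -> auto-archive, 3 -> analyst
```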
However, there is still some way to go before AI reaches the required level of reliability. Both the US Space Force and the Navy have found that generative AI is not yet suitable for military use.
Ethics And Arms Control
One of the most troubling aspects of AI’s future in warfare concerns the use of lethal force. This is hardly a new worry; advocacy groups were warning against “killer robots” as early as 2012. An automated weapon system would have no compassion or moral qualms; it would simply act without hesitation. Compounding the problem is the question of responsibility if an AI carried out a war crime: if an autonomous drone killed a family in a war zone, whether by mistake or by design, who would be held to account? While the United States has clear guidelines on human oversight, other nations, and non-state actors in particular, may not follow the same rules.
Regulating militarized AI poses practical problems alongside a myriad of ethical concerns. One is simply the speed of technological progress: just as in the private sector, AI advances faster than lawmakers can legislate. The problem is particularly acute in democracies, where passing legislation is a lengthy process, and less so in the autocratic regimes that are typically their adversaries. Another problem is that regulation via international law requires signatories to act in good faith.
History shows this approach seldom works. In the 1920s, the great powers tried to ease tensions by restricting the size of new warships; while initially successful, the resulting treaty lacked teeth and was jettisoned in the 1930s. Similarly, the Nixon administration sought to ease tensions with the Soviet Union through the Strategic Arms Limitation Talks (SALT I and II). SALT II was signed in 1979 but never ratified after the Soviet invasion of Afghanistan. Today the picture is complicated by deteriorating relations between the United States and Russia and rising tensions with China; there may simply be too little trust between world powers for comprehensive AI regulation to work.
New Risks
What if the human element were removed from warfare entirely? The idea is not especially new; experts have grappled with the question for years. In the not-so-distant future, autonomous weapon systems in the air, on land, and at sea may make lightning-quick decisions with minimal human input, a scenario sometimes dubbed hyperwar.
On the plus side, removing the human element removes much of the friction that has plagued military operations since time immemorial. Stress-related hesitation and mistakes could be consigned to history. AI could reach a level of precision humans could never manage and sharply reduce collateral damage. Conversely, the capacity for bad actors to abuse AI’s capabilities is alarming. No hesitation also means no moral qualms. A repressive regime could use autonomous drones to clamp down on opposition or attack an adversary with no regard for civilian lives. Furthermore, there is a much greater risk of escalation with AI-controlled systems.
On May 6, 2010, automated trading on Wall Street triggered a “flash crash” in which the Dow Jones Industrial Average dropped around 9% in a matter of minutes. That crisis was quickly resolved, but the consequences of a “flash war” would not be so easily reversed. Nuclear war is unthinkable to any rational person, yet in the cold logic of an algorithm, a preemptive nuclear strike could look like the optimal move.
The Takeaway
Military revolutions typically take years to be fully realized. AI is likely to follow a similar trajectory, compressed into a much shorter timeframe.
Smarter, faster decision-making is already evident and will only improve in the near future. Autonomous weapons offer tremendous value for money and a lower entry point for so-called lesser powers: only the richest nations can afford a top-of-the-line air force, but drones can deliver comparable performance at a tiny fraction of the cost. Autonomous drone swarms acting in tandem could overwhelm far more costly air defense systems. Countermeasures are already in development, and those efforts will accelerate in the coming years.
The dangers of AI are equally stark. There are serious ethical concerns to contend with, as machines could carry out truly heinous acts with no hesitation. In the wrong hands, militarized AI is a grave threat to peace and stability. Yet, as risky as developing AI for military use will be, the greater risk is in not developing it.