Military

What AI Means For Future Military Conflict

The advent of artificial intelligence (AI) presents great opportunity and risk. This is as true for the private sector as it is for the military. AI could boost the effectiveness of military operations with real-time intelligence to seize fleeting tactical opportunities. Additionally, unmanned devices are far cheaper to operate than conventional machines and carry fewer risks for human operators. However, there are also significant ethical and security questions that arise with the use of AI in combat. This article will examine what AI means for the future of warfare. 

Why This Matters

“The one who becomes the leader in this sphere [AI] will be the ruler of the world” was the rather ominous verdict of Russian President Vladimir Putin in 2017. Seven years later, AI has advanced considerably, and China and the United States have invested heavily in their AI programs. The importance of harnessing artificial intelligence for defense will only grow in the years ahead.

Command and Control

Since the Industrial Revolution, the world’s major powers have been able to equip, field, and sustain vast armies. However, controlling such enormous, combined forces in the chaos of battle has always proved extremely difficult. The costly, wasteful offensives of World War One often failed because communication broke down once troops were sent over the top. Opportunities were squandered by commanders unable to keep track of their men.

Modern conflicts are even more complicated, involving the coordination of different allies and service branches in limited, asymmetric wars. In such conflicts, speedy decision-making and seizing fleeting tactical opportunities are paramount. Tracking and securing the flow of information in a contemporary war has moved beyond any one person’s comprehension. Artificial intelligence provides the means to synthesize different information streams into a manageable battlefield picture. The commander on the ground can quickly make key decisions based on sound intelligence and achieve a level of cohesion the generals of yesterday could only dream of.

Maintaining the flow of information on the battlefield while denying it to the enemy has always been a priority for commanders. The value of knowing where hostile forces are at any given moment can hardly be overstated. In days of yore, a general had to make do with the report of a scout on horseback. Aerial reconnaissance was a major step forward, but most wars in history were fought with limited reliable intelligence. Command and control (C2) is a core function of the US military, and with the help of AI, the “fog of war” may be lifted, or at least sharply reduced.

Drone Swarms 

Taking out one drone isn’t all that difficult, but neutralizing several drones working together is a whole other matter. The United States and the United Kingdom are already exploring the possibilities of drone swarms in training exercises. As the swarm communicates, it provides an accurate real-time picture of the battlefield. With improvements to autonomous navigation and the relatively low cost of individual drones, it may soon be possible for one operator to oversee dozens, if not hundreds, of drones.

Beyond reconnaissance, drone swarms could be used against high-value military targets. An MQ-9 Reaper unit (four aircraft plus a control station) costs $56.5 million, but a kamikaze drone like the Switchblade 300 costs a fraction of that price (around $80,000). Other nations field even cheaper one-way drones, and fending them off with conventional arms is ruinously expensive. Because drones are so much cheaper to manufacture and maintain, they could allow a much weaker military to compete with a more powerful adversary. Ukraine, without a navy of its own, has crippled Russia’s Black Sea Fleet using drones and missiles.
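The cost asymmetry can be made concrete with a quick back-of-the-envelope calculation using the article’s own figures (the per-aircraft Reaper cost is an assumption derived by dividing the unit price by its four aircraft):

```python
# Cost-asymmetry sketch using figures cited in the article.
REAPER_UNIT_COST = 56_500_000   # MQ-9 Reaper unit: four aircraft + control station
SWITCHBLADE_COST = 80_000       # approximate Switchblade 300 price

# How many kamikaze drones could be fielded for the price of one Reaper unit?
drones_per_reaper_unit = REAPER_UNIT_COST // SWITCHBLADE_COST
print(drones_per_reaper_unit)  # -> 706

# Per-aircraft comparison (assumes cost splits evenly across the four aircraft)
per_aircraft = REAPER_UNIT_COST / 4
print(round(per_aircraft / SWITCHBLADE_COST))  # -> 177
```

In other words, even under generous assumptions, a single conventional airframe trades against hundreds of expendable drones, which is the economic logic behind swarm tactics.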

Drone swarms could upend the existing balance of power. 

Countermeasures 

The prospect of unmanned aircraft systems (UAS) dominating the battlefields of the future is not going unchallenged. History shows that any new weapon of war is invariably followed by an effective countermeasure. When tanks debuted in World War One, the Germans quickly developed an anti-tank rifle. By World War Two, tanks were far more capable, but the United States devised an effective lightweight anti-tank weapon: the M1A1 Bazooka.

A few tech firms are developing counter-UAS systems to take down enemy drones more efficiently and cost-effectively than conventional systems. RTX is developing Coyote, essentially an anti-drone drone that can be launched from a variety of platforms. Anduril, an upstart in the defense sector, is working on Roadrunner, a system that’s somewhere between a missile and a drone; the name is a playful jab at RTX’s system. Drones are also vulnerable to jamming devices. Lithuania sent Ukraine thousands of Skywiper Electronic Drone Mitigation 4 Systems (EDM4S) to counter Russian drones. Naturally, this is spurring the development of drones that can operate in GPS-denied environments.

AI can also free up a great deal of man-hours in threat analysis. As Army Colonel Richard Leach explained in an article for the Department of Defense:

“Let AI identify key pieces of information and maybe do some of the basic analysis. Let the analysts focus on the hard problem set so they’re not wasting time, resources, and people.”

Policy and Ethical Concerns 

In January 2023, the Department of Defense announced updates to its autonomous weapons protocol, Directive 3000.09. The changes reflected rapid advances in technology over the previous decade. The most notable revision was in the language defining autonomous and semi-autonomous weapons systems: where the original 2012 text used the phrase “human operator,” the updated text simply reads “operator,” implying that control of a weapons system need not be human.

Though the Department of Defense maintains that its core values haven’t changed, particularly regarding the use of lethal force, the revision highlights the fundamental problem of AI in warfare: diplomacy and policy move much more slowly than technological progress. A rapidly advancing technology is hard to regulate, and international arms control over AI’s military applications is harder still.

Historically, arms control deals don’t have a great track record. For example, the Washington Naval Conference of 1921-22 attempted to limit the size of warships to relieve growing postwar tensions. The resulting treaty had some success but was abandoned by the mid-1930s. Similarly, the Nixon administration tried to ease tensions with the Soviet Union through the Strategic Arms Limitation Talks (SALT I and II). SALT II was signed in 1979 but was never ratified after the Soviet invasion of Afghanistan. History shows that enforcement and trust tend to be the sticking points of arms control.

Given the current state of US-Russian relations, rising tensions with China, and the ongoing conflict in the Middle East, there probably isn’t much appetite for arms control treaties. Developing AI for military use is a risk but not developing it represents an even greater risk for the United States and its allies. 

Conclusion

Artificial intelligence has the potential to significantly alter how modern wars are fought. In asymmetric conflicts where quick decisions are needed to seize fleeting tactical opportunities, speeding up the decision-making process could help cut through the fog of war. Equally, drone swarms could sharply reduce the costs and risks associated with more traditional weapon systems. A swarm of kamikaze drones costing a few thousand dollars apiece would be a serious threat to a target worth tens or hundreds of millions of dollars.

On the other hand, countermeasures are already in development, and there are serious ethical questions to consider. The Department of Defense insists that lethal force will never be used autonomously, but other regimes won’t be nearly so hesitant. Because the cost of entry is so much lower than for conventional weapons, it won’t just be established powers that develop AI for military use. Arms control treaties don’t have much of a track record, so it would be unwise to refuse, on principle, to develop the capability for autonomous lethal force. Tough choices on the future of artificial intelligence lie ahead.

 


Thank you for reading! Have some feedback for us?
Contact the 24/7 Wall St. editorial team.