
The Biggest Problems with AI in Warfare


The advent of artificial intelligence (AI) presents great opportunity and great risk, for the private sector and the military alike. AI offers many military advantages, but it also comes with serious problems that must be weighed. This article examines the main problems with using AI in warfare and what can be done about them.

Why This Matters

The Pentagon | Digital Vision / Photodisc via Getty Images

“The one who becomes the leader in this sphere (AI) will be the ruler of the world” was Russian President Vladimir Putin’s rather ominous verdict in 2017. AI has advanced considerably since then, and China, Russia, and the United States all have invested heavily in their respective programs. AI in warfare is not some distant future problem but a contemporary phenomenon that demands our fullest attention.

Detection and Loopholes

Marines conduct amphibious assault training. | Official U.S. Navy Page, Public domain, via Wikimedia Commons

As formidable as AI could be in warfare, training exercises still expose major shortcomings. US Marines had a grand old time fooling an advanced security AI as part of Squad X, a program developed by the Defense Advanced Research Projects Agency (DARPA). For six days, eight Marines trained the system by walking in front of it. Once the machine had “learned” to detect a person, it was put to the test on day seven: the same Marines attempted to reach the sensor without being detected.

All eight succeeded by employing imaginative methods to fool the AI system. One giggling pair made it past while hiding in a cardboard box. Another Marine disguised himself as a tree, and two more somersaulted all the way up to the sensor without triggering it. The experiment showed how easily an algorithm can be bypassed by thinking outside the box (or inside it, in this case). Similarly, the highly advanced and expensive “smart walls” at the border are still easily flummoxed by savvy smugglers aware of the technology’s limitations.

AI will easily outperform humans in specific, closed tasks, but human ingenuity still holds the edge in open-ended missions. An AI trained on a particular data set will be quite useless when confronted with an entirely new reality. It is only as good as the data fed to it.

Ethics


One of the most alarming prospects of AI in warfare is delegating the use of lethal force. This is hardly a new concern; international organizations warned against “killer robots” as early as 2012. As the previous section explained, AI already has trouble identifying humans. Differentiating between a civilian and a combatant would be even more difficult. An automated weapon system would have no compassion or moral qualms. It would simply act without hesitation.

Compounding this problem is the question of who would bear responsibility if an AI carried out a war crime. If an autonomous drone, whether by mistake or by design, killed a family in a war zone, who would be held to account?

While the United States has clear guidelines on human oversight, other nations or even non-state actors may not follow the same rules. 

Regulation

An Iranian Shahed-136 combat drone at sunset. | Anton Petrus / Moment via Getty Images

Regulating militarized AI poses practical problems in addition to a myriad of ethical concerns. One is simply the speed of technological progress. Just as in the private sector, AI advances faster than lawmakers can legislate. The problem is particularly acute in democracies, where passing legislation is a lengthy process, and less so in the autocratic regimes that are typically their adversaries.

Another problem is that regulation via international law requires signatories to act in good faith, and history shows this approach seldom works out. In the 1920s, the global powers tried to ease tensions by restricting the size of new warships. While initially successful, the resulting treaty lacked teeth and was ultimately jettisoned in the 1930s. Similarly, beginning under the Nixon administration, the United States sought to ease tensions with the Soviet Union through the Strategic Arms Limitation Talks (SALT I and SALT II). SALT II was signed in 1979 but was never ratified after the Soviet invasion of Afghanistan.

Deteriorating relations between the United States and Russia and rising tensions with China complicate the current situation. There may be too little trust between world powers for comprehensive AI regulation to work. Additionally, there are plenty of non-state actors in the Middle East who would almost certainly not hesitate to deploy autonomous weapons indiscriminately.

Trust


One of the last but most crucial barriers to adopting AI for military use is institutional: simply put, the military itself needs to be convinced. In October 2023, the Space Force paused the use of generative AI tools over security concerns. The Navy is similarly unimpressed with AI’s current state for military use. Though AI can sift through enormous quantities of data in seconds, it cannot adequately explain its reasoning, leaving human operators reluctant to trust its recommendations.

Conclusion


AI offers several military advantages, but these come with some major shortcomings. As experiments have shown, humans can still outwit machines in open-ended environments. AI will outperform humans within certain constraints but cannot yet compete with humankind’s ability to improvise. Ethics are perhaps the biggest concern of all. AI will perform its task without hesitation, and in the wrong hands, it will show no restraint in carrying out atrocities that even the most hard-hearted soldier would refuse to commit. Similarly, just keeping up with the latest developments will be tough for lawmakers, and diplomacy will be fraught with mutual mistrust. An AI arms race is inevitable.

Finally, soldiers on the ground will have trouble trusting AI with their lives. Trust is difficult to build and all too easy to break. While humans have made and will continue to make colossal blunders in command, the first instance of AI erring in war may irreparably damage its reputation. For all the great possibilities AI promises, they may pale in comparison to the new problems it brings to modern warfare. Yet not developing AI for the military presents an even greater danger.
