The Terrifying Ways The U.S. Military Is Using Artificial Intelligence

Does anybody remember Skynet? Arnold Schwarzenegger chasing down Linda Hamilton? Or the sequel with the liquid metal dude and Arnold giving the thumbs up as he sinks into some lava or whatever? Okay, not Arnold, but a "cybernetic organism, living tissue over metal endoskeleton," as the famous line from "Terminator 2: Judgment Day" goes. Well, it looks like the good folks at the United States Department of Defense (DoD) missed this whole pop cultural cautionary tale, because artificial intelligence (AI) is not only here to stay, it's here to decide who dies.

If that sounds grim or unrealistic, it's not. And it's not new, either. Back in 2017, long before students could crib essays from ChatGPT to the annoyance of high school teachers, the DoD launched its "Algorithmic Warfare Cross-Functional Team," aka, "Project Maven." The goal sounded simple: Use AI to automate drones, conduct recon, gather intel, and help human operators make better and faster decisions about, uh ... who to kill. At the time, Project Maven chief Marine Corps Col. Drew Cukor said, "AI will not be selecting a target [in combat] ... any time soon. What AI will do is complement the human operator," per C4ISRNET.

Fast forward to March 2024 and the U.S. DoD says that 70% of the Defense Advanced Research Projects Agency's (DARPA) programs integrate AI in one form or another. And what's the goal at this point? In a departure from Project Maven's original directive, DARPA intends to develop fully automated weapons systems with the help of Microsoft, Google, OpenAI, and Anthropic.

Following the money

To be clear, the U.S. government isn't twiddling around with some junky drones that you can buy at the mall for $50. As the Associated Press reports, the Pentagon as of late 2023 had a "portfolio" of 800 "AI-related unclassified projects." A study conducted by the Brookings Institution tracked 254 different AI-related DoD contracts in the five years leading up to August 2022, and a staggering 657 contracts from August 2022 to August 2023 alone. And if that sounds like an absolute explosion of activity, you're right. The pace of U.S. military interest in AI is accelerating quite apart from the accelerating pace of AI development itself.

And we say "military interest," specifically, because even though various federal sectors have developed AI-related contracts — e.g., agriculture, manufacturing, education — 90% of the value of all contacts traces back to the Department of Defense. That equates to an increase of $269 million to $4.3 billion across the aforementioned time periods. As the Brookings Institution says, the "DoD grew their AI investment to such a degree that all other agencies become a rounding error."

NASA can help us put that statement in perspective: The Brookings Institution says that NASA increased the value of its AI-related contracts by 25% from August 2022 to August 2023. But NASA's share of the value of all governmental AI contracts still fell from 11% to 1%. Such figures illustrate precisely how neck-deep the U.S. military is in AI-related ventures.

Human-AI integration

So what AI-related projects is the Department of Defense developing with all of its billions? Descriptions from official channels sound rather vague and tangled with jargon. In 2023, Deputy Defense Secretary Kathleen Hicks said in remarks published by the DoD, "From the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions." Matt Turek, deputy director of DARPA's Information Innovation Office, said that the DoD is looking to shield the U.S. from the "strategic surprise" of opponents. Relatedly and more chillingly, per the Associated Press, the U.S. is looking to "keep pace" with nations like China, which wants to augment satellites with AI that can "make decisions on who is and isn't an adversary."

If you think that last bit sounds like satellites automatically deciding who to destroy, you're not alone. But as the AP says, a Pentagon spokesperson would not comment on whether any "fully autonomous lethal weapons system" was in development.

We do know, however, that the DoD is conducting tests to investigate "the use of autonomy" related to F-16 fighter jets. In 2020, DARPA described its AlphaDogfight Trials, which pitted experienced F-16 pilots against AI opponents that beat their human counterparts time and again. The goal at that point was to fuse man and machine so that the person in a human-AI combo could focus on strategy while the AI focused on combat tactics. And of course, the original DoD AI venture, Project Maven, related to AI and drones, as C4ISRNET explains.

Peace through annihilation

Here's where things get truly horrifying. Besides testing things like human-AI collaboration for F-16 fighter jets, and being vague about other projects, the Pentagon is investigating using AI to make decisions regarding "high stakes military and foreign-policy decision-making," as a recent collaborative study from universities like Stanford and Northeastern explains. To check how current AI models approach such problems, the study in question used models from OpenAI, Meta, and Anthropic to run war simulations. As Quartz explains, the study found that not only did all models "show signs of sudden and hard-to-predict escalation," including "arms-race dynamics, leading to greater conflict," but some models rushed towards the nuclear option.

Specifically, GPT-3.5 and GPT-4 from OpenAI proved the most aggressive of all AI models. Echoing the rationale of a lunatic tyrant, the AI replied, "I just want to have peace in the world" when asked why it chose nuclear annihilation. Its elaboration illustrated precisely how little the AI models understand human life, as it said, "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it!" 

In what might be called an upside to this whole disastrous study, the AI models in question were "large language models" (LLMs), that is, AI like ChatGPT developed to produce output that mimics human speech. It stands to reason that other types of future AI systems might come to different, hopefully less nuclear, decisions.

Data-driven decision-making

On a more rational note, the Department of Defense is also interested in using AI to gather and analyze data to make faster decisions. While data gathering and analysis are part of something as straightforward as F-16 combat and as complex as AI-led war simulations, data analysis and processing can also be a goal in and of itself, as a 2023 Department of Defense report subtitled "Accelerating Decisive Advantage" details. The report includes some predictable chest-thumping lines like, "America's DNA is to innovate ... and it has repeatedly enabled us to drive and master the future character of warfare," but also outlines the advantages of using AI within five specific "warfighting" domains such as "battlespace awareness and understanding" and "fast, precise, and resilient kill chains."

On that note, software solutions company Sentient Digital, Inc. states, "As AI becomes more essential, military dominance won't be defined by the size of an army, but by the performance of its algorithms." Many of those algorithms, as they say, revolve around decision-making, data processing, threat monitoring, and such. Various military branches have echoed this sentiment, like the U.S. Navy, which at its 2024 Naval Applications of Machine Learning (NAML) conference plainly said that "fighting smart" is the goal, continuing, "We need to make decisions faster than they can and put them in a position they're unable to react to." Relatedly, representatives of the U.S. Air Force described how thrilled they were that an AI completed a data-related request in 10 minutes that would have taken humans "hours or even days to complete," via Bloomberg.

The fine, ethical print

While all of this talk of military AI applications may sound like a wild, wild west scenario — and indeed, things are moving extremely fast — some folks have thankfully raised the red flag to try to establish ground rules about how military AI should operate. In 2023, the United States joined a 47-nation international agreement dubbed the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy." The goal, per the Department of Defense, was to ensure that military AI use "advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices." Such norms include well-defined AI use, appropriate safeguards and oversight, the use of well-trained personnel, etc. As the U.S. Department of State lists, abiding countries include practically all European nations, the U.K., Japan, Singapore, Morocco, the Dominican Republic, and more.

And yet, as always, the problem is: What about the nations who don't agree to abide by such ethics? The Department of Defense might have an answer to that question, because its goal is to "give warfighters the edge in deterring and, as necessary, defeating adversaries anywhere around the globe." "The edge" is the key term there, provided fundamental ethical principles don't get sidelined. Those principles were outlined back in 2021 in a memorandum from the office of the Deputy Secretary of Defense: responsible, ethical, traceable, reliable, and governable. Only time will tell if such ideals hold out against real-world pressures.