Gunpowder. The combustion engine. The airplane. These are just some of the technologies that have forever changed the face of warfare. Now, the world is experiencing another transformation that could redefine military strength: the development of artificial intelligence (AI).
Merging AI with warfare may sound like science fiction, but AI is at the center of nearly all advances in defense technology today. It will shape how militaries recruit and train soldiers, how they deploy forces, and how they fight. China, Germany, Israel, and the United States have all used AI to create real-time visualizations of active battlefields. Russia has deployed AI to make deepfake videos and spread disinformation about its invasion of Ukraine. As the war in Ukraine continues, both sides could use algorithms to analyze vast amounts of open-source data from social media and the battlefield, allowing them to better calibrate their attacks.
The United States is the world’s preeminent technological powerhouse, and in theory, the rise of AI presents the U.S. military with huge opportunities. But as of now, the technology poses risks instead. Leading militaries often grow overconfident in their ability to win future wars, and there are signs that the U.S. Department of Defense could be falling victim to complacency. Although senior U.S. defense leaders have spent decades talking up the importance of emerging technologies, including AI and autonomous systems, action on the ground has been painfully slow. For example, the U.S. Air Force and the U.S. Navy joined forces in 2003 to create the X-45 and X-47A prototypes: semiautonomous, stealthy uncrewed aircraft capable of conducting surveillance and military strikes. But many military leaders viewed them as threats to the F-35 fighter jet, and the air force dropped out of the program. The navy then funded an even more impressive prototype, the X-47B, able to fly as precisely as human-piloted craft. But the navy, too, came to see the prototypes as threats to crewed planes and eventually backed away, instead moving forward with an unarmed, uncrewed aircraft with far more limited capabilities.
The United States’ slow action stands in stark contrast to the behavior of China, Washington’s most powerful geopolitical rival. Over the last few years, China has invested roughly as much as the United States in AI research and development, but it is more aggressively integrating the technology into its military strategy, planning, and systems—potentially to defeat the United States in a future war. It has developed an advanced, semiautonomous weaponized drone and is integrating it into its forces—in contrast to Washington, which dropped the X-45, the X-47A, and the X-47B. Russia is also developing AI-enabled military technology that could threaten opposing forces and critical infrastructure (a capability so far absent from its campaign against Ukraine). Unless Washington does more to integrate AI into its military, it may find itself outgunned.
But although falling behind on AI could jeopardize U.S. power, speeding ahead is not without risks. Some analysts and developers fear that AI advancements could lead to serious accidents, including algorithmic malfunctions that cause civilian casualties on the battlefield. Some experts have even suggested that incorporating machine intelligence into nuclear command and control could make nuclear accidents more likely. This is unlikely—most nuclear powers seem to recognize the danger of mixing AI with launch systems—and right now, Washington’s biggest concern should be that it is moving too slowly. But some of the world’s leading researchers believe that the Defense Department is ignoring safety and reliability issues associated with AI, and the Pentagon must take their concerns seriously. Successfully capitalizing on AI requires the U.S. military to innovate at a pace that is both safe and fast, a task far easier said than done.
The Biden administration is taking positive steps toward this goal. It created the National Artificial Intelligence Research Resource Task Force, which is charged with spreading access to research tools that will help promote AI innovation for both the military and the overall economy. It has also created the position of chief digital and artificial intelligence officer in the Department of Defense; that officer will be tasked with ensuring that the Pentagon scales up and expedites its AI efforts.
But if the White House wants to move with responsible speed, it must take further measures. Washington will need to focus on making sure researchers have access to better—and more—Department of Defense data, which will fuel effective algorithms. The Pentagon must reorganize itself so that its agencies can easily collaborate and share their findings. It should also create incentives to attract more STEM talent, and it must make sure its personnel know they won’t be penalized if their experiments fail. At the same time, the Department of Defense should run successful projects through a gauntlet of rigorous safety testing before it implements them. That way, the United States can rapidly develop a panoply of new AI tools without worrying that they will create needless danger.
Technological innovation has long been critical to the United States’ military success. During the American Civil War, U.S. President Abraham Lincoln used the North’s impressive telegraph system to communicate with his generals, coordinate strategy, and move troops, helping the Union defeat the Confederacy. In the early 1990s, Washington deployed new, precision-guided munitions in the Persian Gulf War to drive Iraq out of Kuwait.
But history shows that military innovation is not simply the process of creating and using new technology. Instead, it entails reworking how states recruit troops, organize their militaries, plan operations, and strategize. In the 1920s and 1930s, for instance, France and Germany both developed tanks, trucks, and airpower. During World War II, Germany combined these innovations (along with the radio) to carry out its infamous blitzkriegs: aggressive offensive strikes that quickly overwhelmed its enemies. France, by contrast, invested most of its resources in the Maginot Line, a series of forts along the French-German border. French leaders believed they had created an impenetrable boundary that would hold off any attempted German invasion. Instead, the Nazis simply maneuvered around the line by going through Belgium and the Ardennes forest. With its best units concentrated elsewhere, its communications poor, and its plans for fighting outdated, France swiftly fell.
It is no coincidence that France declined to gamble on new military systems. France was a World War I victor, and leading military powers often forgo innovation and resist disruptive change. In 1918, the British navy invented the first aircraft carrier, but the world’s then-dominant sea power treated these ships mostly as spotters for its traditional battleships rather than as mobile bases for conducting offensives. Japan, by contrast, used its aircraft carriers to bring attack planes directly to the fight. As a result, the British navy struggled against the Japanese in the Pacific, and ultimately, Japan had to be pushed back by another rising power: the United States. Before and throughout World War II, the U.S. Navy experimented with new technology, including aircraft carriers, in ways that helped it become the decisive force in the Atlantic and the Pacific.
But today, the United States risks being more like the United Kingdom—or even France. The Defense Department appears to be biased in favor of tried-and-true capabilities over new tools, and its pace of innovation has slowed: the time it takes to move new technology from the lab to the battlefield has grown from roughly five years, on average, in the early 1960s to a decade or more today. Sometimes, the Pentagon has seemingly dragged its feet on AI and autonomous systems because it fears that adopting those technologies could require disruptive changes that would threaten existing, successful parts of the armed forces, as the story of the X-45, the X-47A, and the X-47B clearly illustrates. Some projects have struggled even to make it off the drawing board. Multiple experiments have shown that Loyal Wingman, an uncrewed aircraft that employs AI, can help aircraft groups better coordinate their attacks. But the U.S. military has yet to seriously implement this technology, even though it has existed for years. It’s no wonder that the National Security Commission on Artificial Intelligence concluded in 2021, in its final report, that the United States “is not prepared to defend or compete in the AI era.”
If the United States fails to develop effective AI, it could find itself at the mercy of increasingly sophisticated adversaries. China, for example, is already employing AI to war-game a future conflict over Taiwan. Beijing plans to use AI in combination with cyberweapons, electronic warfare, and robotics to make an amphibious assault on Taiwan more likely to succeed. It is investing in AI-enabled systems to track undersea vehicles and U.S. Navy ships and to develop the ability to launch swarm attacks with low-cost, high-volume aircraft. If the United States lacks advanced AI capabilities, it will inevitably find itself moving at a slower pace—and will therefore be less able to help Taiwan fend off an invasion.
Given the stakes, the defense establishment is right to worry about Washington’s torpid pace of defense innovation. But outside the government, many analysts have the opposite fear: if the military moves too quickly as it develops AI weaponry, the world could experience deadly—and perhaps even catastrophic—accidents.
It doesn’t take an expert to see the risks of AI: killer robots have been a staple of pop culture for decades. But science fiction isn’t the best indicator of the actual dangers. Fully autonomous, Terminator-style weapons systems would require high-level machine intelligence, which even optimistic forecasts suggest is more than half a century away. One group of analysts made a movie about “Slaughterbots,” swarms of autonomous systems that could kill on a mass scale. But any government or nonstate actor looking to wreak that level of havoc could accomplish the same task more reliably, and more cheaply, using traditional weapons. Instead, the danger of AI stems from deploying algorithmic systems, both on and off the battlefield, in a manner that can lead to accidents, malfunctions, or even unintended escalation. Algorithms are designed to be fast and decisive, which can cause mistakes in situations that call for careful (if quick) consideration. For example, in 2003, an MIM-104 Patriot surface-to-air missile battery’s automated system misidentified a friendly aircraft as an adversary, and human operators did not correct it, leading to the death by friendly fire of a U.S. Navy F/A-18 pilot. Research shows that the more cognitively demanding and stressful a situation is, the more likely people are to defer to AI judgments. In a battlefield environment where many military systems are automated, these kinds of accidents could multiply.
Humans, of course, make fatal errors as well, and trusting AI is not inherently a mistake. But people can be overconfident about the accuracy of machines. In reality, even very good AI algorithms could be more accident-prone than humans. People can weigh nuance and context when making decisions, whereas AI algorithms are trained to render clear verdicts and to work under specific sets of circumstances. If entrusted to launch missiles or employ air defense systems outside their normal operating parameters, AI systems might malfunction destructively and launch unintended strikes. It could then be difficult for the attacking country to convince its opponent that the strikes were a mistake. Depending on the size and scale of the error, the ultimate outcome could be a ballooning conflict.
This has frightening implications. AI-enabled machines are unlikely ever to be given the power to actually launch nuclear attacks, but algorithms could eventually make recommendations to policymakers about whether to launch a weapon in response to an alert from an early warning system. If AI gave the green light, the soldiers supervising and double-checking these machines might not be able to adequately examine their outputs and check the underlying input data for errors, especially if the situation were moving extremely quickly. The result could be the inverse of an infamous 1983 incident in which a Soviet lieutenant colonel arguably saved the world when, correctly suspecting a false alarm, he declined to pass along an automated warning system’s report of an incoming U.S. missile strike. That system had mistaken sunlight reflecting off clouds for inbound ballistic missiles.
The United States, then, faces dueling risks from AI. If it moves too slowly, Washington could be overtaken by its competitors, jeopardizing national security. But if it moves too fast, it may compromise on safety and build AI systems that breed deadly accidents. Although the former is a larger risk than the latter, it is critical that the United States take safety concerns seriously. To be effective, AI must be safe and reliable.
So how can Washington find a sort of Goldilocks zone for innovation? It can start by thinking of technological development in terms of three phases: invention, incubation, and implementation. Different speeds are appropriate for each one. There is little harm from moving quickly in the first two phases, and the U.S. military should swiftly develop and experiment with new technologies and operational concepts. But it will need to thoroughly address safety and reliability concerns during implementation.
To strike this balance, the U.S. military will need to make sure its personnel get a better handle on all of the Department of Defense’s data. That includes open-source content available on the Internet, such as satellite imagery, and intelligence on adversaries and their military capabilities. It also includes data on the effectiveness, composition, and capabilities of the U.S. military’s own tools.
The Department of Defense already has many units that collect such data, but each unit’s information is siloed and stored in different ways. To more effectively adopt AI, the Pentagon will need to build on its ongoing efforts to create a common data infrastructure. The department is taking an important step by integrating its data and AI responsibilities under the aegis of the chief digital and artificial intelligence officer. But this reorganization will not succeed unless the new official has the authority to overcome bureaucratic barriers to AI adoption in both the military services and other parts of the Pentagon.
Giving researchers better data will also help ensure that every algorithm undergoes rigorous safety testing. Examiners, for example, could deliberately feed a wide range of complex or outright incorrect information into an AI system to see if it produces a faulty output, such as a directive to strike a friendly aircraft. This testing will help create a baseline idea of how reliable and accurate AI systems are, establishing a margin of error that eventual operators can keep in mind. It will help humans know when to question what machines tell them, even in high-pressure scenarios.
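In software terms, this kind of examination resembles a stress-testing harness. The sketch below is purely illustrative: the classifier, sensor fields (iff_response, speed_mps, altitude_m), and thresholds are invented stand-ins, not any real defense system, but it shows how testers could estimate a failure rate by bombarding a model with corrupted inputs.

```python
# A minimal sketch of stress testing, under the assumptions noted above:
# measure how often a (hypothetical) target classifier mislabels a friendly
# track as hostile when its sensor inputs are deliberately corrupted.
import random

def classify(track):
    """Toy stand-in for the AI system under test; a real harness would
    call the actual model here instead of this rule set."""
    if track.get("iff_response") == "valid":
        return "friendly"
    if track.get("speed_mps", 0) > 600 and track.get("altitude_m", 0) < 5000:
        return "hostile"
    return "unknown"

def corrupt(track, rng):
    """Simulate degraded or spoofed sensor data."""
    bad = dict(track)
    if rng.random() < 0.5:
        bad["iff_response"] = None  # transponder dropout
    if rng.random() < 0.5:
        bad["speed_mps"] = bad["speed_mps"] * rng.uniform(0.5, 2.0)  # noisy radar
    return bad

def stress_test(n_trials=10_000, seed=0):
    """Return the rate at which corrupted friendly tracks come back 'hostile'."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        friendly = {
            "iff_response": "valid",
            "speed_mps": rng.uniform(200, 700),
            "altitude_m": rng.uniform(1_000, 12_000),
        }
        if classify(corrupt(friendly, rng)) == "hostile":
            failures += 1
    return failures / n_trials

if __name__ == "__main__":
    print(f"Friendly-marked-hostile rate: {stress_test():.2%}")
```

The measured failure rate is exactly the kind of margin of error described above: a number operators can keep in mind when deciding how much weight to give a machine’s verdict.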
Developing innovative and secure AI will also require a tighter connection between the Department of Defense’s Research and Engineering arm and the rest of the Pentagon. In theory, Research and Engineering is in charge of the department’s technological innovation. But according to a report by Melissa Flagg and Jack Corrigan at the Center for Security and Emerging Technology, the Pentagon’s innovation efforts are disorganized, taking place across at least 28 organizations within the broader department. These efforts would all benefit from more coordination, something the Research and Engineering arm can provide. One reason for optimism is that Research and Engineering recently created the Rapid Defense Experimentation Reserve, an initiative that will allow the department to more quickly prototype and experiment with emerging technologies in high-need areas across the military, which should increase coordination and speed up adoption.
But the Pentagon can’t spur more effective innovation through structural reforms alone. It will need the right people as well. The United States is fortunate to have a highly trained and educated military, yet it requires even more STEM talent if it is going to win the wars of the future. That means the Department of Defense must hire more personnel who study AI. It also means the Pentagon should offer coding and data analytics courses for existing staff and give extra pay or time off to employees who enroll—just as it does for personnel who study foreign languages.
As part of its overhaul, the Defense Department will also need to change its culture so that it is not, as Michèle Flournoy, former undersecretary of defense for policy, described it in these pages last year, too “risk averse.” Currently, department officials often slow-walk or avoid risky initiatives to avoid the reputational damage that accompanies failure, burying promising projects in the process. This is completely backward: trial and error is integral to innovation. Senior leaders in the Pentagon should reward program managers and researchers for the overall number of experiments and operational concepts they test rather than the percentage that are successful.
Even unsuccessful investments can prove strategically useful. The Chinese military pays close attention to U.S. military capabilities and planning, which means the United States can disrupt Beijing’s own planning by selectively revealing prototypes, including ones that did not pan out. China might respond by chasing sometimes flawed U.S. systems while remaining uncertain about what the United States will actually deploy or develop next. If the U.S. military wants to remain the world’s strongest, it must keep forcing its adversaries to chase its lead.
It will also need to develop ways to effectively use whatever technologies it does decide to deploy. Military power is ultimately more about people and organizations than widgets or tools, and history shows that even the most successful militaries need to incorporate new capabilities into their plans if they want to win on the battlefield. As conventional warfare makes an unfortunate comeback, the United States will need to adapt and restructure its military for the future—rather than resting on its laurels.