In 1983, the U.S. military’s research and development arm began a ten-year, $1 billion machine intelligence program aimed at keeping the United States ahead of its technological rivals. From the start, computer scientists criticized the project as unrealistic. In the eyes of the Pentagon, it overpromised and ultimately underdelivered, ushering in a long artificial intelligence (AI) “winter” during which potential funders, including the U.S. military, shied away from big investments in the field and abandoned promising areas of research.
Today, AI is once again the darling of the national security community. And once again, it risks sliding backward as a result of a destructive “hype cycle” in which overpromising conspires with inevitable setbacks to undermine the long-term success of a transformative new technology. Military powers around the world are investing heavily in AI, seeking battlefield and other security applications that might provide an advantage over potential adversaries. In the United States, there is a growing sense of urgency around AI, and rightly so. As former Secretary of Defense Mark Esper put it, “Those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.” However, there is a very real risk that expectations are being set too high and that an unwillingness to tolerate failures will mean the United States squanders AI’s potential and falls behind its rivals.
The path to the effective uses of military AI will inevitably be rocky, with accidents and missteps amplified by overhyping of the sort that has seen, for example, outsize excitement over futuristic, fully autonomous cars give way to the mundane reality of vehicles that can just about parallel park. A mismatch between expectations and reality can spell the end for new technologies and is especially likely to do so when governments prioritize quick wins over long-term potential. Successful AI adoption will require patience and careful communication so that opportunistic naysayers cannot hold up accidents as proof that failure is inevitable.
AI might be especially vulnerable to this type of hype-induced backsliding. It has suffered two previous winters in its relatively short history, during which AI’s failure to live up to expectations drove declines in funding and interest. AI is also tricky to understand and to define, and the algorithms on which it is based are prone to failure when used outside the context of their initial development.
Incorporating AI into the U.S. military, moreover, will require disruptive changes to everything from force structure and promotion patterns to doctrine and responsibility. This will inevitably trigger resistance. And because U.S. defense officials generally lack the expertise to assess AI advances now being driven by the private sector, opponents of the new technology will find it easier to capitalize on inevitable setbacks, arguing that a potentially effective application of AI has not merely arrived too early but will never materialize.
Yet the naysayers cannot be allowed to triumph. Over and over throughout history, resistance to technological change has come back to haunt militaries. In the late nineteenth century, for instance, France’s navy sought to counter British naval supremacy by investing heavily in submarines and torpedo boats. The technology of the time was not up to the task, however, and France reverted to building battleships, a move that left the United Kingdom to rule the waves until the outbreak of World War II.
About 25 years later, Russia abandoned its early armored vehicle designs because they got stuck in the mud. Better treads and more powerful engines were simple fixes, but instead of pursuing them, Russia put off procurement and fell behind as other powers moved forward. In short order, the armored vehicle evolved into the tank, a critical innovation in ground warfare. The Russians, like the French, ought to have shown more patience and persistence.
There are three main chokepoints at which new technologies such as AI often fall short of inflated expectations and therefore backslide into the abyss of premature abandonment. The first is the “valley of death,” the appropriately named gap between a technology’s development in the private sector and its acquisition by governments or militaries, which often require more certainty than developers can offer. It is the burial ground for many great ideas.
If the valley of death is safely navigated, a cumbersome and timid testing and evaluation process can be the next trap. The U.S. Department of Defense in particular needs to invest in making the testing, evaluation, verification, and validation of new AI applications more efficient. Existing processes were not designed to handle continually evolving machine learning systems and AI algorithms, a gap that risks allowing unreliable systems to be deployed. Those systems, in turn, are more likely to fail, generating mistrust and resistance to further AI adoption.
The final hurdle is real-world deployment. Fear of the many unknowns—such as the perceived impact on soldiers—could strengthen resistance to the broad integration of AI over time, as could the perception that AI will reduce the human role in warfare. These are fertile grounds for those looking to seize upon AI errors in order to halt the technology’s use.
Both China and Russia are investing heavily in AI, in part because they hope to challenge the conventional military superiority of the United States. It would be a colossal mistake for Washington to allow the potential of this new technology to slip through its hands, just as it was a mistake for France and Russia to dismiss early submarines and armored vehicles. The United States must therefore find a balance between leaning too far forward, pushing technology before it is ready, and being too quick to abandon it when inevitable accidents occur.
In order to strike this balance, the U.S. government will need to set more realistic expectations about what AI can do for the military. It must counter the popular focus on the fantastical—lethal autonomous weapon systems and artificial general intelligence, for instance, remain closer to sci-fi than reality—with a carefully calibrated, well-informed, and realistic picture of what AI can actually do. At the same time, it must emphasize that AI will not replace humans but rather enhance their capabilities. Treating algorithms more like computing power and less like a magical substitute for human insight will make them more palatable to the public.
Popular acceptance must be matched by technical capacity within the U.S. government, which will mean modernizing Department of Defense infrastructure, both hardware and software. Legacy computing systems, for example, do not support the common programming languages used in the private sector and are difficult to update, making them ill suited to AI adoption. Improved and transparent testing and evaluation procedures would make it easier to identify whether accidents involving AI stem from immature technology that needs more time or from fundamentally flawed concepts that should be discarded.
And people will also need an update. The notable deficit of technical understanding and AI literacy among policymakers and government officials must be urgently addressed. This could involve attracting technical talent from outside the military, cultivating it among those already serving, and expanding efforts, such as the Air Force’s Computer Language Initiative, that treat programming languages the way many agencies already treat foreign languages.
Failure to fully engage with the possibilities of AI would put U.S. national security at risk, potentially allowing adversaries to gain an edge in emerging military capabilities. If decision-makers confuse inevitable accidents with inherent problems, they will shy away from AI and lose out on its benefits. U.S. President Joe Biden’s administration has the opportunity to build on nascent investments over the last decade and set the United States firmly on a trajectory toward responsible military leadership in AI. Failure to do so means consigning the United States to runner-up status, or worse, in the competition for military superiority.