It has been a challenging year for U.S. cyberdefense operations. A dramatic surge in ransomware attacks has targeted such critical national infrastructure as the Colonial Pipeline—which was shut down for six days in May, disrupting fuel supplies to 17 states—and halted the operation of thousands of American schools, businesses, and hospitals. The hacking of SolarWinds Orion software, which compromised the data of hundreds of major companies and government agencies and went undiscovered for at least eight months, demonstrated that even the best-resourced organizations remain vulnerable to malign actors.
While the motley crew of cybercriminals and state-sponsored hackers who constitute the offense has not yet widely adopted artificial intelligence techniques, many AI capabilities are accessible with few restrictions. If traditional cyberattacks begin to lose their effectiveness, the offense won’t hesitate to reach for AI-enabled ones to restore its advantage, evoking worst-case future scenarios in which AI-enabled agents move autonomously through networks, finding and exploiting vulnerabilities at unprecedented speed. Indeed, some of the most damaging global cyberattacks, such as the 2017 NotPetya attack, incorporated automated techniques, just not AI ones. These approaches rely on prescriptive, rules-based techniques and lack the ability to adjust tactics on the fly, but they can be considered the precursors of fully automated, “intelligent” agent-led attacks.
Yet it is not just cyberattackers who stand to benefit from AI. Machine learning and other AI techniques are beginning to bolster cyberdefense efforts as well, although not yet at the scale necessary to alter the advantage the offense presently enjoys. There is reason to hope that AI will become a game-changer for the defense. As offense and defense both race to leverage AI techniques, the question is which side will manage to benefit most.
There is currently limited evidence that hackers have begun making significant use of AI techniques. This is not particularly surprising. Current techniques are highly effective, and adding AI to the mix could be an unnecessary complication. And while cyber and AI skill sets overlap to some degree, they are distinct enough that additional expertise is required to build and integrate AI techniques for cyberhacking.
If cyberdefenses improve sufficiently, however, the offense may be forced to explore new approaches. An individual or organization could also develop AI-enhanced cyber-tools that are simple to use, reducing the cost and level of expertise required to apply them. There is precedent for this: hackers who discover a vulnerability sometimes release proof-of-concept code that is quickly weaponized and diffused through the hacker community. Given the open nature of AI research, there is little to prevent a similar diffusion of AI-enhanced cyberattack tools.
For the defense, machine learning is already benefiting specific cybersecurity tasks. A strength of machine learning is its ability to recognize patterns in large data sets. Algorithms similar to the ones that classify objects or recommend online purchases can be employed to detect suspicious activity on networks. The application of machine-learning techniques to traditional intrusion detection systems has already helped to thwart many attacks. For the rather mundane task of spam email detection, for example, machine learning has offered qualitative improvements. More recently, deep-learning facial recognition algorithms have allowed users to authenticate themselves on their mobile devices, mitigating the long-standing cybersecurity problem of weak passwords or personal identification numbers.
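To make the idea concrete, the following is a minimal sketch of how such a pattern-recognition detector might work, using an unsupervised model from the widely available scikit-learn library. The connection features, values, and thresholds are illustrative assumptions, not a description of any deployed intrusion detection product.

```python
# Minimal sketch: flagging anomalous network connections with an
# unsupervised model. All feature names and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-connection features:
# [duration_sec, bytes_sent, bytes_received, failed_logins]
normal_traffic = np.column_stack([
    rng.normal(30, 10, 1000),        # typical session length
    rng.normal(5_000, 1_500, 1000),  # modest uploads
    rng.normal(20_000, 5_000, 1000), # routine downloads
    rng.poisson(0.1, 1000),          # rare authentication failures
])

# Fit on traffic assumed to be benign, then score new connections.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspicious = np.array([[600.0, 900_000.0, 1_000.0, 12.0]])  # long session, bulk upload, repeated failures
benign = np.array([[25.0, 4_800.0, 19_500.0, 0.0]])

print(model.predict(suspicious))  # -1 -> flagged as anomalous
print(model.predict(benign))      #  1 -> consistent with baseline traffic
```

A real system would work with far richer telemetry and would have to contend with noisy labels and adversaries deliberately mimicking normal traffic, but the core mechanism, learning a baseline and scoring deviations from it, is the same.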
Vulnerable software and stolen user credentials are the basis of many cyberattacks. Intruders gain a foothold, exploit newly discovered or known vulnerabilities, and repeat the process. For this reason, two areas stand out as important targets for greater research and development investment: automated vulnerability discovery and AI-enabled autonomous cybersecurity.
Discovering vulnerabilities has long been an important part of the software development process. Attempts to automate this process are more recent. In 2016, the Defense Advanced Research Projects Agency (DARPA) hosted the Cyber Grand Challenge, a competition aimed at building fully automated systems capable of detecting and patching vulnerabilities in real time. Although the winning system, Mayhem, demonstrated this potential, it relied heavily on traditional vulnerability discovery tools fine-tuned for the competition; relatively little machine learning was involved. The open question is whether a broader application of AI could discernibly improve current techniques.
Many software applications, and operating systems in particular, contain millions of lines of code, making it difficult to detect every potential vulnerability. Automated techniques can help, in the same way that spelling and grammar checkers might help find errors in a long novel, but invariably a skilled human editor must scan every sentence. What has not yet occurred at any scale is the application of AI techniques to remove some of the cognitive workload or to improve upon existing capabilities.
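A toy illustration of the idea behind automated vulnerability discovery is a mutation-based fuzzer: a loop that throws randomly altered inputs at a piece of code and records the ones that make it fail. The sketch below is a deliberately simplified assumption of how such a loop works; real tools such as the Cyber Grand Challenge entrants add coverage guidance, symbolic execution, and crash triage, and the parse_record function here is a hypothetical stand-in for the software under test.

```python
# Toy mutation-based fuzzer: the core loop behind automated vulnerability
# discovery, stripped of coverage guidance and crash triage.
import random

def parse_record(data: bytes) -> int:
    """Hypothetical code under test: parses a length-prefixed record."""
    length = data[0]
    # Bug: trusts the declared length without checking the actual buffer size.
    return sum(data[1 + i] for i in range(length))

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes in a known-good input."""
    out = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Return every mutated input that caused an unhandled exception."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)  # inputs worth a human (or AI) second look
    return crashes

if __name__ == "__main__":
    seed = bytes([4, 10, 20, 30, 40])  # well-formed record: length 4, four payload bytes
    found = fuzz(seed)
    print(f"{len(found)} crashing inputs out of 10,000 attempts")
```

The open question raised above is whether learned models can steer this kind of search more intelligently than random mutation or handwritten heuristics can.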
While the Cyber Grand Challenge was taking place, another competition was building toward a famous showdown between human and machine. In this second contest, DeepMind, a subsidiary of Google, fielded a system, AlphaGo, against the world’s best Go players. The game of Go is notable for having far more potential moves than its Western counterpart, chess, and AlphaGo’s victory was in some ways a demonstration of AI’s potential for broader applications. More recently, DeepMind applied AI techniques to the Critical Assessment of Structure Prediction (CASP) competition, which challenges entrants to predict a protein’s three-dimensional structure from its amino acid sequence. As one prominent researcher described AlphaFold, DeepMind’s breakthrough entry, “This will change medicine. It will change research. It will change bioengineering. It will change everything.” While AlphaGo was intriguing, AlphaFold provided a means to better understand one of the key building blocks of the human body. Successes such as these across unrelated fields raise hopes that AI may prove similarly useful in the complex endeavor of cyberdefense.
However, as impressive as these new AI applications have been, there is no guarantee that it will be possible to develop autonomous cyberdefense agents. For the defense, AI has proved hard to implement across the entire range of cybersecurity-related tasks: threat identification, protection, detection, response, and recovery. Instead, it has been applied more narrowly to specific tasks such as intrusion detection. Still, the technology’s potential is too important to ignore. In particular, autonomous cyber-agents that roam a network, probing for weaknesses and launching attacks, could be devastating if fully realized. Theoretically, attackers could launch thousands of these agents at once, wreaking havoc on critical infrastructure and businesses.
Will AI techniques enable even more devastating cyberattacks, or will they revolutionize cyberdefense? Examining the evolution of cyber-operations over the last 30 years provides some clues as to how the integration of cyber and AI may unfold.
First, new attack tools will continue to disperse rapidly. The cyber-operations field has traditionally encouraged exploration and experimentation. Many early hacking efforts were simply attempts to bypass controls in order to gain access to more computing resources. Soon, the techniques pioneered by these early hackers entered the mainstream, becoming accessible to users with far more limited expertise. The rapid diffusion of new hacking methods continues to this day. Part of today’s cyber-challenge is that attackers with limited skills can wreak havoc on organizations with sophisticated tools they likely do not fully understand.
As new AI tools are developed, they are likely to quickly become available in the same manner. Relatively few people who launch deepfakes understand the underlying AI technology, but thanks to the availability of simple online tools, they can create synthetic video or audio with a few clicks of a mouse. Once offensive AI capabilities are developed, even moderately skilled hackers would be able to leverage them in their attacks.
Cyber has proved to be an asymmetric tool of statecraft. China has leveraged cyber to engage in the mass theft of intellectual property, thereby shortening the time frame to acquire and use key technologies. Russia has used cyber-operations coupled with disinformation campaigns to disrupt the U.S. political process. For cybercriminals, ransomware paired with cryptocurrencies offers an enormously profitable enterprise. If current techniques are foiled, there should be little doubt that states and criminal enterprises will turn to AI. The main reason they have not yet leveraged AI capabilities is that such capabilities are not yet essential; simpler tools can still do the job.
China has signaled its intent to become the world leader in AI and has been making significant investments in cyber. Since DARPA’s Cyber Grand Challenge, for example, China has held over a dozen competitions focused on automated vulnerability discovery. This dual-use technology can help secure networks or provide new tools to state-sponsored hackers. China has invested heavily in the education of its cyber-workforce and in the development of a domestic semiconductor industry, and it has repeatedly shown its intent to match U.S. progress in AI. For example, it claims that its new large language model, Wu Dao, is more than ten times as large as GPT-3, the current benchmark for English-language content creation. All of this suggests that China will attempt to leverage new technologies to pursue state policy whenever it can.
AI as applied to cyber will be driven by two different imperatives. For the offensive-minded, AI tools will be designed to achieve maximum impact. Thus, the offense will seek tools that move fast, gain entry, and accomplish objectives. As NotPetya and other cyberattacks have demonstrated, attackers are often less concerned with controlling their tools than they are with achieving their intended effect. The defense is significantly more constrained. Its priority is defeating attacks while keeping networks operational, often at availability levels above 99 percent. This is a much higher bar. As new capabilities are developed, defenders’ fear of disrupting service may make them reluctant to deploy fully autonomous AI agents, placing them at a disadvantage to their adversaries. Defenders will have to weigh the potential risks of an attack against the impact of shutting down essential services. However, if the offense has fully leveraged AI, greater autonomy for defensive agents may be the only choice.
For this reason, a worst-case scenario involves fully autonomous AI cyber-agents. These capabilities differ from present-day autonomous attacks in that offensive cyber-agents would not be reliant on a set of explicit, preprogrammed instructions to guide their activity. Instead, they would be able to adjust their operations in real time, without additional human intervention, based upon the conditions and the opportunities they encounter. Conceivably, these agents could be given an objective without being told how to achieve it.
In such a scenario, two of the biggest concerns are speed and control. Theoretically, intelligent agents would be able to move through networks at machine speed. Defensive detection-and-response activities that rely on human operators would be unable to keep pace. Perhaps even more concerning is the question of control: Can the effects of an attack be contained after launch? Research has already shown that machine-learning systems can behave in bizarre and unpredictable ways as a result of poorly specified objectives. Previous cyberattacks, starting with the Morris worm in the late 1980s, have occasionally had far greater reach and caused more damage than their creators ever intended. Intelligent agents could be even more devastating, devising novel attack vectors much the way AlphaZero developed game strategies previously undiscovered by human players. Guarding against such attacks is just one reason to begin defensive preparations now. Another is that an AI-enabled defensive agent that roams the network looking for illegitimate activity could finally start to tilt the field in favor of the defense.
Cyberattacks are already a significant geopolitical threat. Adding AI to this mix makes an already potent tool even more so. Therefore, it is important for network defenders and AI developers to begin working together to develop new defensive AI cyber-capabilities.
First, researchers need realistic data sets and network simulations to help the AI systems they build differentiate between threats and normal activity. Much of the available data relies upon public research and development dating from the late 1990s and is not representative of current threats. Armed with more current data sets and simulations, AI developers can begin to explore algorithmic approaches that could enable cyber-agents to detect incursions and, at a minimum, take rudimentary defensive measures.
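As a rough illustration of what “rudimentary defensive measures” might look like in code, the sketch below trains a classifier on a labeled stand-in for such a data set and then quarantines the source of any flow the model scores as malicious. Everything here is a hedged assumption for illustration: the feature layout, the synthetic data, the block_source helper, and the decision threshold are hypothetical, and a real agent would act through firewall rules or orchestration APIs rather than an in-memory blocklist.

```python
# Sketch of a rudimentary detect-and-respond loop: score each flow with a
# classifier trained on labeled (benign vs. malicious) examples, and
# quarantine the source of anything the model flags.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Stand-in for a labeled training set:
# [packets_per_sec, distinct_ports_probed, avg_payload_bytes]
benign = np.column_stack([rng.normal(50, 15, 500), rng.poisson(2, 500), rng.normal(800, 200, 500)])
malicious = np.column_stack([rng.normal(400, 80, 500), rng.poisson(40, 500), rng.normal(120, 60, 500)])
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = normal activity, 1 = threat

detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

blocklist: set[str] = set()

def block_source(addr: str) -> None:
    """Hypothetical response action; a real agent might push a firewall rule instead."""
    blocklist.add(addr)

def handle_flow(source_addr: str, features: list[float], threshold: float = 0.9) -> None:
    """Score one flow record and quarantine its source if it looks malicious."""
    score = detector.predict_proba(np.array([features]))[0, 1]  # probability the flow is a threat
    if score >= threshold:
        block_source(source_addr)

# Simulated traffic: one routine flow, one port-scan-like flow.
handle_flow("10.0.0.5", [45.0, 1.0, 750.0])
handle_flow("203.0.113.9", [420.0, 55.0, 100.0])
print(blocklist)  # expected: {'203.0.113.9'}
```

The hard part, and the reason realistic data matters, is that models trained on synthetic or dated traffic like this will misjudge the boundary between threats and legitimate but unusual activity.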
Second, the federal government should prioritize research and development into a full spectrum of AI-enabled cyberdefenses. As part of these efforts, it should also significantly increase the use of cybersecurity competitions that focus on the development of new AI-enabled security tools. These competitions would differ from those that presently identify and reward the most capable human defender teams. Ideally, these contests would be public and the results would be broadly shared. Organizers should design competitions around specific cybersecurity challenges such as the detection of novel attacks or the identification of anomalous activity. Improvements in automated vulnerability discovery could make software more secure from the outset and find vulnerabilities in software that is already deployed.
Competitions should be organized with enough frequency to encourage continued innovation. The 2016 Cyber Grand Challenge undoubtedly helped improve automated vulnerability discovery, but it was held only once. Recurring competitions with lucrative prizes and a path to government acquisition would incentivize sustained innovation and could uncover promising new techniques. Although Congress has authorized federal agencies to conduct such competitions, their use remains sporadic and limited. The CASP competition for protein folding, which has been held every two years for over 25 years, offers a useful model: sometimes strategic patience is needed for major breakthroughs.
The impact of uncontrolled cyberattacks is becoming ever more costly. It is past time for the United States to explore the potential for AI to improve its cyberdefenses to better protect critical infrastructure providers and state and local governments. There is little reason to believe that strategic competitors will not turn to AI-enabled attacks if traditional techniques lose their effectiveness. The stakes are high, and AI techniques are a double-edged sword. The United States must commit the resources to ensure that it is the defense that benefits.