In cybersecurity, a penetration test is a simulated attack on a computer system's defenses that uses the tools and techniques an adversary would employ. Governments and companies of all kinds rely on such tests. Banks, for example, regularly hire computer experts to break into their systems and transfer money to unauthorized accounts, often by phishing for login credentials from employees. When the testers succeed, they present their findings to the institution and recommend ways to improve its security.

At the end of the last decade and the beginning of this one, human society itself was subject to a kind of penetration test: COVID-19. The virus, an unthinking adversary, probed the world's ability to defend against new pathogens. And by the end of the test, it was clear that humanity had failed. COVID-19 went everywhere, from remote Antarctic research stations to isolated Amazonian tribes. It raged through nursing homes and aircraft carriers. As it spread, it leveled the vulnerable and the powerful—frontline workers and heads of state alike. The draconian lockdowns imposed by autocracies and the miraculous vaccines developed by democracies slowed, but did not halt, the virus's spread. By the end of 2022, three of every four Americans had been infected at least once. In the six weeks after China ended its "zero COVID" restrictions in December 2022, more than a billion people in the country were infected. The primary reason for the pandemic's relatively modest death toll was not that society had controlled the disease. It was that the virus proved to be only modestly lethal. In the end, COVID-19 mostly burned itself out.

Humanity's failure against COVID-19 is sobering, because the world is facing a growing number of biological threats. Some of them, such as avian flu, come from nature. But plenty come from scientific advances. Over the past 60 years, researchers have developed a sophisticated understanding of both molecular and human biology, knowledge that can be used to create remarkably dangerous pathogens. They have figured out how to create viruses that can evade immunity. They have learned how to evolve existing viruses to spread more easily through the air, and how to engineer viruses to make them more deadly. It remains unclear whether COVID-19 arose from such activities or entered the human population via interaction with wildlife. Either way, it is clear that biological technology, now boosted by artificial intelligence, has made it easier than ever to produce dangerous pathogens.

Should a human-made or human-enhanced pathogen escape or be released from a lab, the consequences could be catastrophic. Some synthetic pathogens might be capable of killing many more people and causing much more economic devastation than the novel coronavirus did. In a worst-case scenario, the worldwide death toll might exceed that of the Black Death, which killed one of every three people in Europe.

Averting such a disaster must be a priority for world leaders. It is a problem that is at least as complex as other grand challenges of the early Anthropocene, including mitigating and managing the threat of nuclear weapons and the planetary consequences of climate change. To handle this danger, states will need to start hardening their societies to protect against human-made pathogens. They will, for example, have to develop warning systems that can detect engineered diseases. They must learn how to surge the production of personal protective equipment and how to make it far more effective. They will need to cut the amount of time required to develop and distribute vaccines and antiviral drugs to days, instead of months. They will need to govern the technologies used to create and manipulate viruses. And they must do all this as fast as they can.

RISKY BUSINESS

For more than a century, most people have seen biology as a force for progress. By the early twenty-first century, vaccines had helped humanity eradicate smallpox and rinderpest, and nearly eradicate polio. Success has been piecemeal; many infectious diseases have no cure, and so the outright eradication of pathogens remains an exception, not the rule. But the advances have been undeniable. The qualified nature of humanity’s accomplishments is perhaps best exemplified by the HIV pandemic. For decades, HIV killed almost everyone it struck. It continues to infect millions of people each year. But thanks to scientific innovation, the world now has cocktails of drugs that block viral replication, which have turned the disease from a death sentence into a manageable medical condition. This sort of medical progress depends on distinct and loosely coordinated enterprises—each responding to different incentives—that deliver care, manage public health, and carry out scientific and medical research.

But progress can be a double-edged sword. If scientists’ growing understanding of microbiology has facilitated great advances in human health, it has also enabled attempts to undermine it. During World War I, the Allies studied the use of bacterial weapons, and German military intelligence operatives used such pathogens to attack animals the Allies used for transport. They sickened horses and mules in France and Romania. In Norway, they attempted to infect reindeer used by the Sami to deliver weapons to Russian forces. German officers even managed to infect corrals and stables in the United States that were full of animals headed to Europe.

By the time World War II began, these initiatives had matured into weapons designed to kill humans. In Japanese-occupied Manchuria, the Japanese military officer Shiro Ishii presided over the dystopian Unit 731, whose personnel tested biological weapons on human beings. They infected and killed thousands of prisoners with anthrax, typhoid, paratyphoid, glanders, dysentery, and the bubonic plague. During the final days of the war, Ishii proposed a full-scale biological-warfare operation, code-named Cherry Blossoms at Night, in which Japanese seaplanes would disperse plague-infected fleas over major cities on the American West Coast. But the plan was vetoed by the chief of the army general staff. "If bacteriological warfare is conducted," the chief noted, "it will grow from the dimension of war between Japan and America to an endless battle of humanity against bacteria."

Such thinking did not stop other countries from researching and developing biological weapons. In the 1960s, the U.S. Department of Defense launched Project 112, which experimented with ways to disperse pathogens on a mass scale. In these and related experiments, the army released spores of a supposedly harmless bacterium in the tunnels of the New York City subway and sprayed bacterial aerosols from boats in San Francisco Bay. It sprayed chemicals from army planes over thousands of square miles, from the Rockies to the Atlantic and from Canada to the Gulf of Mexico. As U.S. officials saw it, these weapons were a kind of insurance policy against a Soviet nuclear attack: if Moscow hit the United States and neutralized Washington's own nuclear arsenal, the United States could still devastate the Soviet Union by counterattacking with deadly pathogens. By the middle of the decade, the department had committed to developing both lethal and incapacitating biological weapons. As the 1960s drew to a close, government scientists were producing sizable quantities of deadly bacteria and toxins that were devised, in the words of the microbiologist Riley Housewright, to "confound diagnosis and frustrate treatment."

These developments, however, terrified civilian researchers, who pushed back against Washington’s plans. They found a receptive audience in the White House. In 1969, U.S. President Richard Nixon decided to halt his country’s biological weapons program. He also called for an international treaty banning such initiatives. Outside experts bolstered his message. Shortly after Nixon’s announcement, Joshua Lederberg—a Nobel Prize–winning biologist—testified before Congress in support of a global ban. Biological weapons, he said, could become just as deadly as nuclear ones. But they would be easier to construct. Nuclear weaponry “has been monopolized by the great powers long enough to sustain a de facto balance of deterrence and build a security system based on nonproliferation,” Lederberg said. “Germ power will work just the other way.”

But Washington’s main adversary was not persuaded. In 1971, as the world haggled over a treaty, the Soviet Union released a weaponized strain of Variola major—the smallpox virus—on an island in the Aral Sea. It resulted in a smallpox outbreak in present-day Kazakhstan. The outbreak was contained through heroic efforts by Soviet public health officials, but those efforts succeeded only because of the affected region’s sparse population and because most Soviet citizens had been vaccinated and possessed some immunity.

Later that year, the Soviet Union and the United States agreed to a treaty banning biological weapons, called the Biological Weapons Convention. The UN General Assembly commended the agreement, and in 1972, it opened for signing in London, Moscow, and Washington. But the Soviets ultimately defied the agreement. In 1979, 68 people died in the city of Sverdlovsk—present-day Yekaterinburg—after spores from a clandestine anthrax project were released. No other Soviet accident was so clearly documented, but Moscow maintained a biological weapons program until the country fell apart—a program that, according to defectors, employed 60,000 people at its height. In 1991, U.S. and British representatives visited some of the program's facilities, where they saw rows of vessels and bioreactors capable of producing thousands of liters of high-titer smallpox. Those vessels could then pump the virus through refrigerated pipes and into bomblets, which could, in turn, be loaded onto missiles.

A smallpox production facility in Pokrov, Russia, 1993 (U.S. government)

The Biological Weapons Convention had another problem: it did not constrain private groups and individuals from pursuing such weapons. In 1984, the Rajneesh religious movement, based in Oregon, contaminated salad bars with salmonella. (Its goal was to incapacitate opposition voters so that Rajneesh candidates could win a Wasco County election.) No one died, but hundreds of people became ill. In 1995, the apocalyptic Aum Shinrikyo group killed more than a dozen people and injured thousands more in Tokyo with the chemical nerve agent sarin; it had previously attempted, without success, to make anthrax weapons. In 2001, anthrax attacks in the United States targeting journalists and two U.S. Senate offices—which the FBI believes were carried out by a lone American scientist—killed five people.

The relatively small scale of these incidents could be taken as evidence that terrorists and states are too constrained, whether by technical difficulties or by existing laws, to inflict mass biological damage. But that reading is too optimistic. Instead, these incidents show that current international agreements and public health measures cannot prevent such attacks. They also demonstrate that it is wrong to assume states and terrorists lack the will or the means to build biological weapons. Some individuals and groups do face barriers—say, an inability to access the right labs or facilities. But thanks to relentless technological advances, those barriers are falling away.

FOR BETTER AND FOR WORSE

In 2012, a group of scientists led by Emmanuelle Charpentier and Jennifer Doudna published an article in Science, a premier academic journal. The article described a gene-editing system, CRISPR-Cas9, which uses human-made chimeric RNA to edit genetic material. The invention added to an already formidable toolbox of molecular biological engineering, including what scientists call "classical recombinant DNA" (invented in the 1970s), the polymerase chain reaction (better known as PCR, and invented in the 1980s), and synthetic DNA (which also came into use in the 1980s). Together, these inventions have set off an explosion of human ingenuity that powers scientific discovery and advances in medicine. In December 2023, for example, the FDA approved a complex CRISPR-based gene therapy to treat sickle cell disease, a devastating illness that afflicts millions of people.

But owing to politics, economics, and the complex institutions through which biological progress reaches people, it can take years before the newest technology's upsides touch those in need. The CRISPR treatment for sickle cell disease, for instance, is technically and medically complex, costly ($2.2 million per person), and time-intensive. It has therefore reached only a very small share of patients. And while the world struggles to spread the benefits of these sophisticated new technologies, scientists continue to demonstrate that they can also easily cause damage. In 2018, one individual on a three-person team used recombinant DNA, PCR, and synthetic DNA to re-create horsepox, a close relative of smallpox. Another group used these tools, plus CRISPR, to engineer a different virus related to smallpox. Such research could easily be used to produce lethal pathogens.

The risks are growing in part thanks to a second technological revolution: the rise of artificial intelligence. Large language models, such as those that power ChatGPT and Claude, grow far more sophisticated and powerful with each new iteration. Today, the most recent versions are used every day by thousands of lab workers to accelerate their work, in part by providing a wealth of useful guidance on technical questions. In 2020, researchers at the AI lab DeepMind demonstrated that their system AlphaFold could effectively solve a holy grail problem in biology: predicting the three-dimensional structure of a protein from the sequence of its amino acids.

But for would-be bioterrorists, these systems could ease the path to mayhem. The largest AI models appear to have been trained on the entirety of the life sciences' published knowledge. Most of this knowledge was, of course, already available on the Internet, but no human could consume, process, and synthesize all of it. Present AI systems can also design new proteins, a capability that could be turned toward designing dangerous pathogens, and they can help plan laboratory operations. Some computer scientists are even working to build automated facilities that can carry out laboratory tasks without human hands. If these efforts succeed, a malevolent actor could create a deadly new pathogen by simply hijacking such automated facilities.

And it will be very difficult for authorities to stop them. Hackers have proven capable of breaking into exceedingly complex security systems, and the materials needed to generate new pathogens include reagents and equipment that are widely available. Regulators could try to target the dozens of suppliers who fill orders for key components. But there are ways around these suppliers, and closing them off could slow valuable biomedical research and development.

If bad actors do eventually produce and release a viral pathogen, it could infect vast swaths of the human population in far less time than it would take officials to detect and identify the threat and start fighting back. Generating pathogens, after all, is cheaper than defending against them. The capital costs of the facilities and materials needed to make a new disease are low, but responding to the resulting epidemic involves a complex and staggeringly expensive set of components: expansive testing and detection networks, vast quantities of personal protective equipment, socially disruptive lockdowns, and an apparatus that can develop, manufacture, and distribute treatments and vaccines.

The thought of spending billions of dollars trying to stop another pandemic should be enough to deter states from weaponizing biology. Some governments, however, continue to pursue dangerous initiatives. In April 2024, the U.S. State Department assessed that North Korea and Russia have offensive biological weapons programs and that China and Iran are pursuing biological activities that could be weaponized. All are parties to the Biological Weapons Convention.

DETERRENCE BY DEFENSE

During the Cold War, the world's nuclear powers avoided catastrophe in large part thanks to the concept of mutually assured destruction. Politicians recognized that a single nuclear attack might trigger a planet-ending retaliation—or, as U.S. President Ronald Reagan and Soviet leader Mikhail Gorbachev famously declared in 1985, "a nuclear war cannot be won and must never be fought." Nuclear states produced elaborate doctrines to govern their technology and deter weapons use. Governments struck a variety of international nonproliferation agreements that kept the number of countries with nuclear weapons to a minimum. And the Soviet Union and the United States created numerous systems—including treaties, command-and-control protocols, and hotlines—to diminish the chance that a misunderstanding would lead to a cataclysmic war.

But when it comes to biological weapons, the Cold War deterrence formula will not work. Mutually assured destruction relies on fear, which was pervasive in the nuclear era but is far less pronounced when it comes to biological warfare. The current threat stems from continued breakneck technological progress and from inventions without precedent, which makes it hard for people to fully grasp the risks. And unlike the nuclear bombings of Hiroshima and Nagasaki, no biological attack has been a world-historical event that commands enduring attention.

Mutually assured destruction also depends on a state’s ability to identify the attacker. With nuclear weapons, doing so is easy enough. But states could release biological weapons and evade detection—and, therefore, retaliation. A government could secretly release a dangerous virus and blame it on any number of other states, or even on nonstate actors.

And nonstate actors really could release deadly pathogens, a fact that makes mutually assured destruction an even less useful check. No government wants to risk the annihilation of its country, but plenty of terrorists care little about survival, and they now have access to the materials, equipment, knowledge, and technical capability needed to make biological weapons. In 1969, Lederberg warned that the consequences of unchecked biological proliferation would be akin to making “hydrogen bombs available at the supermarket.” The world of 2024 is full of supermarkets, well stocked with bomb-making materials.

At a chicken farm where bird flu was found, Mitoyo, Japan, November 2020 (Kyodo / Reuters)

Because Cold War–style deterrence is hard to pull off, the present situation demands a different philosophy. Here, the path to deterrence does not lie in the capacity to retaliate. It lies in a defense so strong that biological attacks are simply not worth conducting.

There is a historical template for how societies can make biological weapons unsuccessful: the end of major urban fires. For most of recorded history, the cities of the world were periodically consumed by massive conflagrations that razed their cores. But in the nineteenth century, the frequency of these fires decreased dramatically. This diminution was, in part, the product of developing better response systems, such as professional firefighting forces and fire hydrants. But mostly, the reduction was driven by mundane steps, including the introduction of less combustible building materials, the imposition of engineering standards and building codes, and requirements for liability insurance—which discouraged risky behavior. When states created sharper definitions of negligence, making it easier to launch civil suits for accidental fires, people became even more cautious.

Today's authorities can take a page from this playbook. Governments built fire departments and hydrants to respond to urban fires. Now, they need to construct systems that can rapidly develop vaccines, antiviral drugs, and other medical interventions. Yet just as with urban fires, governments need to understand that rapid responses alone won't be enough. The world could, and must, develop the ability to vaccinate its eight billion people within 100 days of an outbreak, which is less time than it took the United States to fully vaccinate 100 million people against COVID-19. Even that speed, however, would not suffice against a pathogen that spread at the pace of the coronavirus's Omicron variant.

In addition, policymakers must take steps akin to instituting better building codes—in other words, steps that make it harder for pathogens to spread. They can start by creating bigger stockpiles of personal protective equipment. Masks, gloves, and respirators are key to stopping virus transmission, and so officials should sign contracts in advance for such supplies. States should also subsidize their industrial bases so that they can surge production if needed. They should instruct manufacturers to redesign personal protective equipment to make it cheaper, more effective, and more comfortable. Governments can further augment this resilience by ensuring that people who work in essential services have especially prompt access to protective equipment. States should help furnish these sectors' buildings with microbicidal far-UV-light purification systems and particulate filters. Combined, these measures would substantially reduce the risk that outbreaks grow into societally destabilizing events.

STEP BY STEP

There is a final way to reduce the risk of biological disasters, one that goes beyond plotting responses and defenses. It is for officials to better govern new technologies. And ultimately, it may be the only way to actually prevent a mass biological attack.

There are many tools that governments can use to regulate advances. Officials could, say, deny funding to or even outright ban particular experiments. They could require that people and facilities obtain licenses before carrying out certain kinds of work. They could be more thorough in overseeing future lab automation.

But officials should also shape the ecosystem that supports biological research and development. They should, for example, require that firms selling nucleic acids, strains, reagents, and other life-sciences supplies used to make biological agents adopt "know your customer" rules, which oblige companies to confirm their customers' identities and the nature of their activities and to ship goods only to known, legitimate sites. (Many governments have long forced financial institutions to follow know-your-customer rules in order to prevent money from flowing into criminal networks.) Policymakers also need better ways to regulate conduct itself: governments should devise new methods of detecting prohibited biological activity so that law enforcement and intelligence agencies can head off attacks before they take place.

Finally, starting today, states will need to craft their biodefense policies with AI in mind. Currently, before releasing large language models, companies develop and install various safeguards, such as "redlines" that users cannot cross. GPT-4 and Claude 3.5 Sonnet, for example, refuse to answer direct questions about how to evolve a virus to kill farm animals. But if users ask for technical guidance on such directed evolution without using the word "kill," the models will provide it. AI models therefore need additional safeguards against handing out dangerous information, and governments should help create them.

It will not be easy to reduce the risks that come from these new technologies, and some governance measures risk slowing down legitimate research. Policymakers must be thoughtful as they contemplate restrictions. But smart oversight is essential. The reality is that for all their upsides, AI and bioengineering carry immense perils, and societies and governments must honestly assess the present and future benefits of these developments against their potential dangers.

Officials, however, should not despair. The world, after all, has avoided existential catastrophe before. The Cold War may not provide a template for how to address today’s challenges, but its history is still proof that society can contain dangerous inventions. Then, as now, the world faced an innovation, developed by human ingenuity, that imperiled civilization. Then, as now, states could not eliminate the new technology. But governments succeeded in preventing the worst, thanks to the development of concepts and systems that kept the risk to a minimum. “For progress, there is no cure,” wrote John von Neumann, a mathematician and physicist who helped guide U.S. nuclear policy. “Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment.”

A defining challenge for the twenty-first century will be whether the world can survive the emergence of these newer technologies, which promise to so transform civilization. As with nuclear energy, they are products of human research. As with nuclear energy, there is no way to wind them back. But society can prevent the worst by wisely exercising day-to-day judgment. “To ask in advance for a complete recipe would be unreasonable,” von Neumann said. “We can specify only the human qualities required: patience, flexibility, intelligence.”

ROGER BRENT is Professor of Basic Sciences at the Fred Hutchinson Cancer Center.

T. GREG McKELVEY, JR., is a senior physician policy researcher and an adviser to the Meselson Center and the Technology and Security Policy Center at the RAND Corporation.

JASON MATHENY is President and CEO of the RAND Corporation.