How to Prevent an AI Catastrophe
Society Must Get Ready for Very Powerful Artificial Intelligence
In the space of just a few months, the specter of artificial intelligence has come to haunt the world. The release in late 2022 of ChatGPT, the most prominent of a new wave of generative AI models, has ignited concerns about the potentially disastrous consequences of the technology. Depending on the telling, AI could lead to the rapid spread of misinformation, kill democracy, eliminate millions of jobs, or even result in the end of the human species. These fears have overshadowed discussions of the technology’s promise. Whereas the rapid advances of recent decades (in telecommunications and digital technology, for instance) were often greeted with unwise euphoria, the latest leaps forward in AI have inspired much more circumspection about the direction of technological change. Many people are questioning the hype, realizing that innovation may not always be a good thing.
Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by the economists Daron Acemoglu and Simon Johnson, was completed before ChatGPT and other AI models had been released. The current panic about AI makes the book’s emphasis on the ubiquity of techno-optimism seem like a relic of a bygone age. But its authors nevertheless anticipate the current concerns, warning that AI is “the mother of all inappropriate technologies.” They blame a small libertarian oligarchy for creating the harmful AI tools that have already begun destroying good jobs around the world, and they warn that unless national governments curtail the damage these tools do, inequalities within their own societies will deepen. The state, Acemoglu and Johnson argue, must find a way to share the benefits of these advances more broadly. Countries should create an “institutional framework and incentives shaped by government policies, bolstered by a constructive narrative, to induce the private sector to move away from excessive automation and surveillance, and toward more worker-friendly technologies.”
In discussing today’s challenges, the authors draw on a millennium’s worth of examples of how societies have wrestled with technological change. Innovation is not and has never been an autonomous, natural force that people have no choice but to adapt to. How people understand technologies—the narrative they construct about the role of new inventions—helps determine whether those technologies will have positive or detrimental consequences. Once new technologies have been developed and deployed, they can have very different effects on jobs and incomes, depending on whether they are used to assist human workers or to replace them. And perhaps most important for the present moment, the new economic gains that come with technological advances will be shared widely only if the state and social institutions such as unions can provide a counterweight to the market power of the tech companies. People become impotent in the face of new technology only if society permits it.
Technological innovations (better plows in the Middle Ages, the cotton gin and the mechanized loom at the turn of the nineteenth century, computational technologies in the last 20 years) have transformed the world. But progress has never been smooth. Every wave of innovations carries its own problems; many people lose out.
How societies imagine the role of technology is almost as important as the technology itself. For technological change to proceed in a broadly beneficial way, a shared vision must take hold in society. Consider, for instance, the widely accepted vision of a relatively swift transition from fossil fuels to renewable energy, which has motivated the rapid invention and increasingly affordable adoption of alternatives, including photovoltaics and wind.
Bad narratives about technology can lead to terrible outcomes. For instance, the first attempt to build the Panama Canal, begun in 1881, grew out of a spectacularly misguided vision. The proposed waterway connecting the Atlantic and Pacific Oceans was the brainchild of Ferdinand de Lesseps, a French diplomat and the much-feted developer of the Suez Canal, which was completed in 1869. Although the Suez Canal had been long delayed, over budget, and underperforming in terms of traffic, it still fed a fever for canals in the late nineteenth century (the techno-optimism of its day) and generated huge profits for its investors. Riding high on this success, Lesseps determined to realize the long-standing dream of a canal across Central America. As with his plans for the construction of the Suez Canal, he articulated a compelling vision of the power of technology to connect the world and boost trade. But the project was an engineering disaster, badly planned and executed, as well as a human catastrophe that exposed workers to an epidemic of yellow fever. The Panama Canal Company went into receivership in 1889, and Lesseps died with a shattered reputation, the dethroned cryptocurrency king of his day. The vision was a mirage, leading Lesseps and his investors to ruin; the canal was finally completed by the United States in 1914.
Acemoglu and Johnson argue that, as with the canal-building craze of the nineteenth century, the vision guiding those developing and harnessing AI today is anything but benign. These creators measure technological progress in terms of machines achieving parity with humans, a benchmark that directs innovators to create products that replace humans. Instead, the authors suggest, efforts and investment should be driven by the idea of “machine usefulness”: building technologies that help humans achieve their aims.
The book traces the origins of AI’s bias toward human replacement to the British mathematician Alan Turing, who suggested that machines could be said to “think” if their step-by-step algorithmic calculations produced results indistinguishable from human outputs. This benchmark raises the intriguing prospect of an entirely different definition of “progress” in computing, the authors argue, one framed in terms of what humans cannot do well, although the book does not include an example. In the absence of a positive, shared vision of how to channel the powers of AI, societies will struggle to address abiding concerns about technology and the inequalities of capitalist market economies.
The most glaring impact of any major new technology is how it transforms the economy, affecting jobs and livelihoods. Innovations including the introduction of electricity in place of steam, the telegraph, and faster tractors all transformed the productivity and outputs of businesses. But innovations can also lead employers to change the size and composition of their workforces, depending on whether adding more workers will deliver higher profits or boost workers’ marginal output (the extra output gained from each additional unit of input). If demand for the product grows, then the productivity-enhancing technology is good news; even if the technology replaces some workers, the overall number of jobs may grow. This was seen with the introduction of ATMs after 1969, which cut the number of tellers each branch needed but increased the total number of jobs in banking as banks opened more branches. But if the technology results only in cost cutting and “de-skilling” (when jobs that do not require skilled workers replace jobs that do), then most workers will suffer. These outcomes followed the automation of manufacturing that began in the late 1970s, which coincided with a recession and contributed to massive downsizing in U.S. Rust Belt communities.
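The hiring logic at work here can be made explicit in standard textbook terms (a conventional formulation, not one drawn from the book): a profit-maximizing firm keeps adding workers until the revenue generated by the last worker just equals the wage,

\[
p \cdot \frac{\partial F(A, L)}{\partial L} = w,
\]

where \(p\) is the output price, \(F(A, L)\) is output as a function of technology \(A\) and labor \(L\), and \(w\) is the wage. A technology that raises labor’s marginal product increases hiring at any given wage; one that substitutes machines for workers lowers it, shrinking employment unless growing demand for the product offsets the loss.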
Every major era of innovation has spawned concerns about how innovations affect jobs. In 1960, the Democratic presidential candidate, John F. Kennedy, observed that the “steady replacement of men by machines—the advance of automation—is already threatening to destroy thousands of jobs and wipe out entire plants. It is creating fear among workers, and among the families of workers. It is menacing the existence of entire communities.” These concerns continued to prey on the minds of politicians. During the 1992 U.S. presidential election, the Democratic candidate, Bill Clinton, responded to anxiety over automation by promising to introduce retraining programs and implement a strategy to create “the world’s best-educated workforce.” Otherwise, he warned, “downward trends in wages and benefits, increasing costs for health care, and more job insecurity will be the order of the day.”
The current technological wave has revived these concerns. Researchers at OpenAI, OpenResearch, and the University of Pennsylvania have estimated that nearly 50 percent of all U.S. jobs could have half their functions performed by AI. Historically, however, fears of mass technology-induced redundancies have generally proved to be unfounded. Indeed, contrary to Acemoglu and Johnson’s assertion that robotization has cut jobs and wages, economists, on balance, believe that firms that have used automation the most have expanded employment and paid higher wages.
Experts are still debating these questions, the most common of which is the one the economist David Autor posed in the title of an essay: “Why Are There Still So Many Jobs?” The short answer is that technology does not simply destroy the need for human labor; it changes the nature of jobs in a society and therefore the kinds of workers a society needs. Automation often reduces the level of skill required to do existing tasks, a phenomenon that shaped the New Hampshire textile industry in the nineteenth century. The introduction of machines created new jobs for highly skilled, well-paid engineers who could repair and refine the sometimes temperamental new equipment, while unskilled workers carried out the regular operation of the machines. As textile technology became more standardized, the need for specialist engineers dissipated because workers could now fix their own machines. But demand for textiles grew as the economy expanded, and with it grew the number of jobs available. The result was that all workers in the sector enjoyed higher wages, not just the specialists.
Such historical analogies, although useful, can obscure the disruption that technological change can bring. The Industrial Revolution caused poverty, disease, and squalor at the same time that it created a large and prospering middle class. The impending AI-enabled automation of the jobs of lawyers, journalists, teachers, and others may have similar unintended and unforeseeable consequences. Acemoglu and Johnson do not offer optimism on this score. They write that “if everybody becomes convinced that artificial intelligence technologies are needed, then businesses will invest in artificial intelligence even when there are alternative ways of organizing production that could be more beneficial.” Both the AI-hype narrative and “the market”—the decisions made by individual businesses—are likely to lead to job losses and more middle-class immiseration.
No outcome is inevitable. The social outcomes of automation will be determined by policy and institutional responses, and history offers some useful examples of solutions. The experience of the Industrial Revolution is particularly instructive. In response to the spread of steam-driven textile factories, mining, and railroads, which unleashed new problems and created great inequalities, the state expanded its functions dramatically. It began to set standards for communications, product safety, and the provision of education, while society responded with the invention of unions, cooperatives, mutual societies, educational associations, and public libraries funded by philanthropy or subscriptions. It was not simply technology that delivered progress but also the creation of social counterweights to private power that tried to ensure that the benefits of innovations were spread widely.
Acemoglu and Johnson highlight how a similar dynamic played out in the mid-twentieth century. After World War II, demand increased, and economies in North America and Europe were expanding. Well-organized unions were effective guardians of their members’ interests. Public education became more widespread, and governments saw stewardship of markets as one of their key responsibilities. Accordingly, many Western countries embraced economic planning, whereby state agencies set a strategic direction for private investors and businesses and coordinated the necessary infrastructure and public services.
The trick will be figuring out how to achieve those kinds of outcomes in the present century. Some of Acemoglu and Johnson’s prescriptions, which include breaking up Big Tech companies, would require U.S. government actions that are highly unlikely. The authors’ suggestions for better enforcement of laws that protect competition and block monopolies are more realistic. In many countries, including the United States, the prevention of enormous mergers and acquisitions by Big Tech firms has become a priority. The United Kingdom’s Competition and Markets Authority moved in April to stop Microsoft’s takeover of the video game company Activision Blizzard, prompting Brad Smith, Microsoft’s vice chair and president, to hyperbolically declare that it was the company’s “darkest day” in four decades in the country. It was a telling overreaction from a tech giant used to getting its way. Regulators are likely to zero in on highly technical issues such as the bundling of digital products, the interoperability and interconnection of networks and systems, and standards governing the operation of the technologies. The results of lawsuits over such matters will determine whether new entrants can break into the market, and fights over such foundational issues will reveal much about how power is exercised in Silicon Valley. Governments are unlikely to try to break up the tech giants, but their efforts to rein them in will nonetheless be highly consequential.
Acemoglu and Johnson also discuss a number of familiar remedies for the inequities produced by technological progress, including a universal basic income, which would provide a guaranteed income for all citizens, including those whose jobs are lost to automation. The authors oppose such a program, rightly noting that a modest guaranteed income is inferior to access to broader social goods such as a public transportation network or a public school system. Generally, they favor collective approaches to protecting social welfare, such as the campaigns to unionize workers at Amazon and Starbucks and to win better wages, working conditions, and jobs. They also propose new training programs, as well as participatory processes through which civil society organizations could discuss and determine how to regulate technologies.
They also favor strong privacy regulation to protect individuals from surveillance technologies. How to govern data is a subject of active debate in most countries, but there are complicated tradeoffs. For example, is the collection of a person’s location data by a mapping application an act of surveillance or simply a technical necessity? People’s use of an app seems to grant consent to companies to harness users’ data, yet securing individual consent app by app arguably puts an excessive burden on users and platform designers alike. The authors frame questions about data governance in terms of the “ownership” of data, but that overlooks the fact that useful data is largely relational—that is, produced by the interaction between an individual and various apps—and not strictly individual.
As with many books about the promise and perils of digital technologies, the list of proposals at the end of this one is both overly detailed and underwhelming. A tax on digital advertising, the rollout of training schemes, the construction of a stronger social safety net, and the imposition of wealth taxes would improve the living standards of working people and moderate the behavior of technology firms. But the authors’ list of policies—even if they could be implemented in the polarized political environments of most Western democracies—does not add up to a positive vision that can ensure digital technologies deliver shared prosperity. Fixing the problems caused by AI is not the same as determining the best form of society and how technology can help build it. Power and Progress concludes with the example of the turnaround in the perception and treatment of HIV/AIDS patients in the 1990s, when activists helped change social norms and prompted massive funding of medical research. It is an encouraging example, but it is hard to see a close parallel between the battle against the well-defined problem of treating HIV/AIDS and the broad, diffuse panic about AI and digital technologies, which raises a wide variety of policy questions.
It remains to be seen how any vision or narrative could take hold and bring about the changes the authors want to see. Artificial intelligence and other new technologies can provoke panic, but they have not inspired much clarity of thinking. The well-known observation of the Marxist thinker Antonio Gramsci comes to mind: “The old world is dying, the new is struggling to be born. In this interregnum a great variety of morbid symptoms appear.” Those symptoms are readily apparent. Less clear is whether we might miss the old world once it is gone.