Political leaders are scrambling to respond to advances in artificial intelligence. With applications from marketing to health care to weapons systems, AI is expected to have a profound effect across society and around the world. Recent developments in generative AI, the technology used in applications such as ChatGPT to produce text and images, have inspired both excitement and a growing set of concerns. Scholars and politicians alike have raised alarm bells over the ways this technology could put people out of jobs, jeopardize democracy, and infringe on civil liberties. All have recognized the urgent need for government regulation that ensures AI applications operate within the confines of the law and that safeguards national security, human rights, and economic competition.
From city halls to international organizations, oversight of AI is top of mind, and the pace of new initiatives has accelerated in the last months of 2023. The G-7, for example, released a nonbinding code of conduct for AI developers in late October. In early November, the United Kingdom hosted the AI Safety Summit, where delegations from 28 countries pledged cooperation to manage the risks of AI. A few weeks after issuing an executive order promoting “safe, secure, and trustworthy” AI, U.S. President Joe Biden met with Chinese President Xi Jinping in mid-November and agreed to launch intergovernmental dialogue on the military use of AI. And in early December, EU lawmakers reached political agreement on the AI Act, a pioneering law that will mitigate the technology’s risks and set a global regulatory standard.
Despite broad acknowledgment of the need to shepherd an AI-powered future, when it comes to international cooperation and coordination, political leaders often turn to tools from the past. The most prominent proposals for global oversight of AI seek to replicate multilateral bodies built for other purposes: UN Secretary General António Guterres and others have called for an “IAEA for AI,” for example, that would monitor artificial intelligence the way the International Atomic Energy Agency monitors nuclear technology. Last month’s British-led AI Safety Summit renewed calls for an “IPCC for AI,” referring to the UN’s Intergovernmental Panel on Climate Change.
Although the impulse to borrow from previous successes of multilateralism is understandable, simply introducing a new agency will not solve the puzzle of AI governance. The commitments that participants made at the AI Safety Summit, similar to the G-7 guidelines, were mere pledges. And in the absence of binding measures, corporations are left to govern themselves. Investors and shareholders may prefer this outcome, but politicians and citizens should be under no illusion that private AI companies will act in the public interest. The recent fiasco at OpenAI is a case in point: the board of directors’ clash with the executive leadership over the societal effect of the company’s product showcased the fragility of in-house mechanisms to manage the risks of AI.
International regulatory bodies are only successful when there are rules to which they can hold companies and national governments accountable. Political leaders should first hammer out the preconditions and content of those laws—and only then design the agencies that will oversee compliance. AI’s rapid development, opacity, and changing nature make it substantively different from previous technologies, and it will require novel forms of international oversight. Rather than letting the scope of the challenge discourage them, lawmakers should take it as inspiration to innovate.
The achievements of existing international bodies may be worth emulating, but their oversight models do not translate easily to AI. Consider the IAEA. The UN-led watchdog was founded in 1957, but it was only after the Nuclear Nonproliferation Treaty came into force in 1970 that the agency was able to effectively monitor the nuclear weapons programs of participating countries and uphold safety standards. Current conversations about AI governance miss the NPT’s critical role. Political leaders are eager to dream up AI-focused institutions with monitoring capabilities, but an enforceable treaty on AI governance is nowhere in sight. Several major countries have barely made progress on domestic legislation.
British Prime Minister Rishi Sunak and others, including the former Google CEO Eric Schmidt, have taken inspiration from the IPCC, which synthesizes scientific research on climate change and informs the yearly Conference of the Parties climate summits. Even before the United Kingdom held its inaugural AI Safety Summit, plans for the new “IPCC for AI” stressed that the body’s function would not be to issue policy recommendations. Instead, it would periodically distill AI research, highlight shared concerns, and outline policy options without directly offering counsel. This limited agenda contains no prospect of a binding treaty that can offer real protections and check corporate power.
Establishing institutions that will “set norms and standards” and “monitor compliance” without pushing for national and international rules at the same time is naive at best and deliberately self-serving at worst. The chorus of corporate voices backing nonbinding initiatives supports the latter interpretation. Sam Altman, the CEO of OpenAI, has echoed the call for an “IAEA for AI” and has warned of AI’s existential risks even as his company disseminates the same technology to the public. Schmidt has invested large amounts of money in AI startups and research ventures and at the same time has advised the U.S. government on AI policy, emphasizing corporate self-governance. The potential for conflicts of interest underlines the need for legally enforceable guardrails that prioritize the public interest, not loosely defined norms that serve technology companies’ bottom lines.
Taking the IAEA or IPCC as models also risks ignoring the novelty of AI and the specific challenge of its regulation. Unlike nuclear arms, which are controlled by governments, AI capabilities are concentrated in the hands of a few companies that push products to market. The IPCC’s function as an independent research panel would be useful to replicate for AI, especially given the opacity of company-provided information on the technology. But facilitating research is only one step toward rule making—and effective AI governance requires rules.
No one can know what AI will be capable of in the future, so the policies and institutions that govern it must be designed to adapt. For one, oversight bodies must be able to enforce antitrust, nondiscrimination, and intellectual property laws that are already on the books. Governments and multilateral organizations should also agree on interpretations of first principles, such as respect for human rights and the terms of the UN Charter, in the context of AI. As AI becomes a greater part of daily life, many fundamental questions lack clear answers. Policymakers must delineate when data harvesting violates the right to privacy, what information should be made accessible when algorithms make consequential decisions, how people can seek redress for discriminatory treatment by an AI service, and what limits to free expression may be required when AI-powered “expression” includes churning out the ingredients for a deadly virus at the click of a mouse.
Although international initiatives have received a lot of recent attention, effective multilateralism depends on effective national laws. If the U.S. Congress were to put up legal guardrails for American AI companies, which include many of the leaders in the field, it could set an example for other countries and pave the way for global AI regulation. But with little chance of a deeply divided Congress passing meaningful regulation, the Biden administration’s tools for addressing AI are more limited. So far, the administration—like many governments around the world—has affirmed that AI is subject to existing laws, including consumer protection and nondiscrimination rules. How regulators will apply these laws, however, is not at all clear. The U.S. government will need to issue guidelines on the contexts in which AI technologies must comply with current laws and ensure that regulators have the skills and resources to enforce them. Agencies that are equipped to determine whether a hotel rejected customers on the basis of their skin color, for example, will need a different set of capabilities to identify a discriminatory algorithm on a website for hotel bookings.
In some areas, existing laws have been applied to AI technologies with tentative success: plaintiffs seeking damages from autonomous vehicle manufacturers have drawn on product liability law to make their case. But further developing legal precedents may prove difficult. Regulators often lack access to companies’ data and algorithms, which prevents them from identifying violations of privacy rights, consumer protections, or other legal standards. If regulators are to govern AI effectively, these restrictions on access to proprietary information must be loosened. Citizens, too, will need guidance on their legal rights when they encounter AI-powered products and systems.
In contrast to the United States, the EU will soon be in a strong position to engage other countries on binding rules for AI. The bloc’s AI Act, expected to come into force in 2026, contains measures to mitigate a wide range of risks from AI applications, including facial recognition systems and tools used to infer the likelihood of someone committing a crime. As the most comprehensive policy of its kind in the democratic world, the EU law will serve as a starting point for multilateral discussions and could become a template for other countries’ domestic legislation. The AI Act does not address the use of AI in the military domain, however, because this policy area is reserved for national governments in the EU system. The EU’s market share gives the bloc significant leverage in international negotiations. But if its 27 member states take different positions on military applications, their disagreement will diminish the EU’s ability to push for global AI standards.
Governments want to do something about AI, but their current efforts often lack direction and force. Before setting up new international agencies, officials should put in the hard work of drafting the laws that those agencies will monitor. To start, governments should build an international consensus around a few key points. First, AI must be identifiable. As the technology advances, it is becoming harder and harder to know whether the voice on a customer service line or a text, video, or audio message comes from a person or a computer. And with its use in automated decision-making systems, AI increasingly determines people’s ability to secure employment, loans, and educational opportunities. Whenever companies use AI for these purposes—particularly generative AI, whose output is often difficult to identify as synthetic—they should be legally obligated to disclose its role. To further reduce deceptive or misleading content, legislators should require authentic messages from political leaders to be watermarked as soon as such technology is reliable.
Countries must also set limits on the use of AI-enabled weapons, including cyberweapons. The application of international laws to cyber-operations is already a murky area, and AI adds new layers of complexity. The technology increases the advantages of cyberattackers, who can potentially use generative AI to quickly scan large volumes of software for vulnerabilities. An international agreement banning certain uses of weaponized AI, such as cyber-enabled espionage or the spread of disinformation during another country’s election campaign, would set necessary guardrails and promote best practices.
Finally, AI regulation cannot be divorced from environmental protection efforts. The massive data centers used for data storage and processing require large quantities of electricity, water, and other resources, and the environmental costs of these sites are growing. Today, companies share only vague estimates of their water and electricity use. A single global reporting standard, with compliance overseen by national governments, would make environmental data available to academic researchers and journalists. This would make it possible for the public to scrutinize AI companies’ consumption of natural resources and for policymakers to impose effective restrictions.
The G-7 Code of Conduct and other proposals for AI governance are not the first attempts at multilateral cooperation. The Global Partnership on Artificial Intelligence, founded in June 2020 and backed by the Organization for Economic Cooperation and Development, convenes researchers and practitioners from 25 countries to share their findings and discuss areas for cooperation. Given the geographical limitations of its membership and the lack of binding agreements, GPAI is often criticized as being neither representative nor effective. The modest progress of GPAI—and, more recently, of the AI Safety Summit—underscores the difficulty of converging on global norms in a politically fragmented world. Norms will not simply fall into place when countries join a new international institution. Instead, best practices in AI governance will likely develop in one of two ways. In the first, adversarial governments, especially the United States and China, will find common ground in the limited areas of mutual concern, such as the military uses of AI. But if geopolitical rivals are unable to overcome their differences, like-minded democracies will need to lead the way by cementing initial agreements that address specific dimensions of AI regulation.
Either way, real progress on international AI oversight will take more than getting policymakers from key countries in the same room. In her account of the IAEA, the scholar Elisabeth Roehrlich identified two essential elements that made nuclear safeguards effective: legal agreements binding the agency and its member states, and technical tools to monitor compliance. AI safeguards, too, will require new and updated laws as well as the resources and technical capacity to enforce them. Today, many political and corporate leaders are trying to jump straight to the end, focusing on overarching institutions rather than the policies that make them work. History is a valuable guide, but it is not a shortcut.