Ever since the company OpenAI unveiled ChatGPT last year, there have been constant warnings about the effects of artificial intelligence on just about everything.
Ian Bremmer, the founder of the Eurasia Group, and Mustafa Suleyman, founder of the AI companies DeepMind and Inflection AI, highlight what may be the most significant effect in a new essay for Foreign Affairs. They argue that AI will transform power, including the power balance between states and the companies driving the new technology. Policymakers are already behind the curve, they warn, and if they do not catch up soon, it is possible they never will.
Sources:
“The AI Power Paradox” by Ian Bremmer and Mustafa Suleyman
“The Technopolar Moment” by Ian Bremmer
The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma by Mustafa Suleyman
If you have feedback, email us at [email protected].
The Foreign Affairs Interview is produced by Kate Brannen, Julia Fleming-Dresser, and Molly McAnany; original music by Robin Hilton. Special thanks to Grace Finlayson, Nora Revenaugh, Caitlin Joseph, Asher Ross, Gabrielle Sierra, and Markus Zakaria.
Ever since the company OpenAI unveiled ChatGPT last year, we've heard constant warnings about the effects of artificial intelligence on just about everything.
Ian Bremmer, founder of the Eurasia Group, and Mustafa Suleyman, founder of the AI companies DeepMind and Inflection AI, highlight what may be the most significant effect in a new essay for Foreign Affairs. They argue that AI will transform power, including the power balance between states and the companies driving the new technology. Policymakers are already behind the curve—and if they do not catch up soon, it is possible they never will.
Ian and Mustafa, thanks so much for joining me and for the fantastic essay titled “The AI Power Paradox” that the two of you contributed to our new issue.
Thanks, Dan.
Thanks for having us.
Mustafa, I want to start with you with a foundational question about where we are on the technology. I think a lot of us who don’t work in the field, who don’t follow this day-to-day, really sat up last year when OpenAI rolled out ChatGPT. But OpenAI had, of course, been working on that technology for years. You’d co-founded a different AI company, DeepMind, in 2010, before Google bought it a few years later.
When you look back to this moment last year, when this became a central topic in political and geopolitical conversations, was that really an inflection point? If so, what was its significance? And as you look forward a few years, what are the kinds of breakthroughs—what kind of progress do you think we’ll see as we project forward?
I definitely think it feels to a lot of people that this has come as a bolt from the blue with no precedent, and it’s really a step-function. And in some ways it is, but there’s also a multi-decade context which is important here. We started working on deep learning back in 2010 at DeepMind, and there’s been a steady and actually pretty incremental march of progress since then. To give a very high-level cartoon picture, over the previous decade, from 2010 to 2020, most of the AI researchers in the field were focused on classification. So they were trying to get deep learning models to identify objects in images, identify the meaning of phonemes in spectrograms and translate that into actual text, and various kinds of other classification tasks.
What happened is that the models got so good at classification—doing face recognition, image and video recognition well enough to partially control self-driving cars—that the natural next step from that is to generate a new object in that class. So, it went from classification to generation. The model understands the idea of a cat well enough that it can generate a new example of a cat. So, in that sense, this trajectory has been quite predictable, and it’s quite a natural move from classification to generation. Over the next five years or so, the interesting big set of capabilities that will emerge will be those of planning and reasoning in more abstracted environments—so, being able to make decisions over extended time horizons.
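To make that shift concrete, here is a minimal Python sketch, using toy Gaussian "classes" rather than a real deep network; the feature data, labels, and values are invented for illustration. The point is that the same learned description of a class supports both recognizing an example and generating a new one.

```python
import numpy as np

# Toy illustration of the move from classification to generation: once a model
# has learned what a class "looks like," it can both recognize examples of that
# class and sample brand-new ones. Real systems use deep networks; here each
# class is just a Gaussian fit to made-up feature data.
rng = np.random.default_rng(0)
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))    # toy "cat" features
dogs = rng.normal(loc=[-2.0, -1.0], scale=0.5, size=(200, 2))  # toy "dog" features
stats = {name: (d.mean(axis=0), d.std(axis=0)) for name, d in [("cat", cats), ("dog", dogs)]}

def classify(x):
    # Classification: which class's learned distribution explains x best?
    return min(stats, key=lambda c: np.sum(((x - stats[c][0]) / stats[c][1]) ** 2))

def generate(label):
    # Generation: sample a new example from the same learned distribution.
    mean, std = stats[label]
    return rng.normal(mean, std)

print(classify(np.array([1.8, 2.2])))  # expected: "cat"
print(generate("cat"))                 # a new point resembling the "cat" cluster
```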
Just to help those of us who are not deeply immersed in technology understand exactly what that means—there are, of course, lots of sci-fi scenarios that we could imagine—what are the kinds of things that will be enabled by those developments projecting forward just a few years?
I think in the next three to five years, we will be surrounded by AIs that can speak fluent, natural language, as well as you and I are communicating right now. They can hold multiple complex ideas in their working memory. So they will be able to remember something that you’ve asked them to remember, take an action on the basis of that stored state, and do so by interacting in the digital world and the physical world. So they’ll learn to use APIs [application programming interfaces] by making calls into other websites, apps, third-party data structures. But they’ll also effectively just pick up the phone and phone another AI, or phone another human, and communicate to execute on some action, or query some piece of information, or plan some activity in the same technology—language—that you and I are communicating right now.
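As a rough sketch of what that kind of tool use looks like in code, here is a toy agent loop in Python; the ToyModel class, the stubbed weather call, and the keyword check are invented stand-ins, since a real system would call a live language model and real third-party APIs.

```python
class ToyModel:
    """Stand-in for a large language model; a real agent would call an LLM API here."""
    def generate(self, prompt: str) -> str:
        return f"[model output for: {prompt[:60]}...]"

def call_weather_api(city: str) -> dict:
    # A real agent would make an HTTP call to a live third-party service here;
    # this stub stands in for that action in the digital world.
    return {"city": city, "forecast": "rain", "high_c": 14}

def run_agent(model: ToyModel, user_request: str) -> str:
    """Sketch of the loop described above: hold the request in working memory,
    decide whether an external call is needed, then act on the stored result."""
    plan = model.generate(f"Plan the steps needed to answer: {user_request}")
    if "weather" in user_request.lower():  # crude stand-in for the model's own decision
        observation = call_weather_api("London")
        return model.generate(
            f"Request: {user_request}\nObservation: {observation}\nAnswer:"
        )
    return model.generate(user_request)

print(run_agent(ToyModel(), "What will the weather be in London tomorrow?"))
```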
I want to say that Mustafa and I have been friends for about a decade now. About two and a half or three years ago, Mustafa was starting to tell me: "Ian, people don’t understand just how staggering the next couple of years are going to be. You’re going to have a conversation with these bots. You’re not going to know it’s a bot. It’s going to act like a human being." And then he shows me some of the beta that he's working on, and I start having conversations with it. Mustafa’s telling me that people are having relationships with these things. And it’s pretty clear that this is a game changer, not because AI is suddenly becoming autonomous and intelligent, but rather because human beings are interacting with AI as if they are and able to use AI as if they are.
That’s an extraordinary game changer for all sorts of wonderful things in terms of increased productivity around the world, in every field—but it’s also really, really dangerous, and in a very short period of time. And that made the two of us want to work a lot more closely together on the intersection of this technology and everything it’s going to bring with policy, and how the hell we’re going to deal with this as it’s unleashed on eight billion people in societies everywhere. And there’s no way to stop it. That’s just to give you a little background on how I came to think about this.
And this is where, Ian, I think it’s really useful to go back to the piece you wrote two years ago called “The Technopolar Moment,” because in that analysis, you foreshadow or anticipate much of what has become even clearer and perhaps undeniable with AI breakthroughs in the last couple of years. And you two write in the new piece that “Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging ‘technopolar’ order—one in which technology companies wield the kind of power in their domains once reserved for nation-states.” What is the technopolar order, and why do you see such a change in the balance of power between private actors and states?
First of all, that felt like a wild and audacious statement for us to make in Foreign Affairs. And I appreciated that you had no problem with it. But we’re saying that this is kind of post-Westphalian. Not that governments aren’t going to matter anymore, but for all of our lives, we’ve been thinking that all the geopolitical actors that matter are states—they’re nation-states. And the technopolar moment and the idea of technopolar actors was a kind of a hypothesis—because we know the technology companies have sovereignty in the digital world.
The question is, does it matter? What does that mean for national security? Well, historically, not very much. What does it mean for the global economy? Well, it matters, but it’s still kind of marginal. If you were just to decouple the Internet, you’re not decoupling globalization. What does that mean for society? Yeah, you’ve got your goggles on, but the metaverse isn’t defining where we are right now.
But inject AI into that conversation, where it is likely to become the driver of the next phase of globalization, where it’s going to matter for literally every sector of an advanced industrial economy, where it changes the nature of national security. And the principal agents in AI, the ones that have the resources and create the algorithms, the ones that know what they do—and also, in some cases, don’t know what they do, but are at least the ones who know what they don’t know, as opposed to the governments, which don’t know what they don’t know. They’re all companies.
What we’re getting at in this piece is that we now have a global order, separate from the security order, separate from the economic order. We have a technological order that is truly a global order with geopolitical power that matters. And the principal actors in that order are technology companies. So what are we going to do about that? Because that’s kind of exciting and really scary.
You label this in the piece "the AI power paradox." Quickly define that paradox as you lay it out in the piece for those who haven’t read it yet.
The idea that you have, on the one hand, AI driving some of the most important outcomes—going from the Internet, where everyone has information, to AI, where people can take action. That is the most important transformation in the geopolitical environment happening in the next, say, three to five years. It’s almost impossible to see what happens beyond that. And yet, this power is not being driven by the governments that have heretofore made all of the rules and been the principal actors in all of the architecture. None of us are saying that we think the architecture is perfectly run and that it’s created as it needs to be. I’m the guy who wrote about the G-Zero. I’m deeply skeptical about global leadership.
But still, in almost all areas of policy formation, smart policy analysts can get together and say, “Here’s what we need to do.” And we just get frustrated because we know that some of those actors aren’t thinking long-term enough, or they don’t want to work together, they don’t trust each other. But in principle, you know what the governments need to do. In this case, we are talking about an area where I don’t care how much political will there is; the governments actually can’t do it by themselves. They can’t. They’re incapable. And that’s a paradox.
The governance task is more essential than ever, even as it’s become more difficult for the same reason it’s more essential.
Exactly.
Mustafa, let me just spend a little time with a side of me that I would call a kind of grumpy Luddite. There have been lots of moments in the last, you know, 10 or 20 years when there’s been a wave of hype around some technology, some prediction that it will change everything. You make a pretty persuasive argument in this piece and in writings elsewhere that this time really is different. But just address the skeptic in me and in others who might be listening, who might see this as another kind of bubble and expect that a year or two from now we will have moved on to something else.
For the last 70 years or so, the Turing test has been the canonical North Star in artificial intelligence: Can we design a conversational AI system that is as good as a human and deceives another human into believing that it is, in fact, a human speaker, and not an AI? That very famous “imitation game” was the guiding light for many decades. And I think it’s pretty clear, now that many people have had the opportunity to interact with ChatGPT and other models, like my own Pi, from Inflection, that we’re getting pretty close to passing that test.
I think the interesting thing is that that’s actually been a fairly predictable trajectory if you look at the total amount of computing that has been used to train these models over the last couple of decades. There has been a 10x increase in the amount of computing used every single year for the last ten years. That’s remarkable—ten orders of magnitude more computation. So, the total amount of computing has been growing exponentially. The quality and performance of the models are unequivocally approaching human level, not just in conversation but in image recognition, image understanding, image generation, and increasingly in video.
So it’s quite easy to extrapolate that trajectory for the next three to five years. And just imagine what sort of capabilities might emerge if we 10x the computing again and again and again over the next three to five years. And that is destined; we’re for sure going to train models that are three generations larger than the current frontier. And it’s quite important for people to remember: people will be familiar with the idea of GPT-4 and GPT-3, but although the increment between three and four sounds like a single scalar value, it is, in fact, an order of magnitude: GPT-4 is ten times larger than GPT-3, and GPT-3 is ten times larger than GPT-2. Each generation is an order of magnitude. So that gives you a sense of what’s happening here.
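The arithmetic behind those claims is simple compounding, and a few lines of Python make the orders of magnitude concrete (the starting unit of compute is arbitrary):

```python
# A 10x increase in training compute each year compounds to ten orders of
# magnitude over a decade.
compute = 1.0              # arbitrary starting unit of training compute
for year in range(10):
    compute *= 10          # one order of magnitude per year
print(f"After 10 years: {compute:.0e} times the starting compute")  # 1e+10

# If each model generation is an order of magnitude larger than the last,
# then three generations beyond today's frontier means a thousandfold jump.
generations_ahead = 3
print(f"Three generations ahead: {10 ** generations_ahead}x larger")  # 1000x
```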
And these are all-to-all connections; the model is learning the relationship between every input word and all the other input words it has seen. That’s why the computation makes such a huge difference, because niche, minute subtleties in the relationship between different words and ideas and capabilities emerge when you have more all-to-all connections, and that’s what we’re getting with every 10x increase. So, I’m very confident that this trajectory is going to yield improved capabilities over the next few years.
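Here is a stripped-down sketch of that all-to-all structure in Python; it omits the learned projections and multiple attention heads that real transformers use, and the toy "words" are just random vectors, but it shows every token scoring its relationship to every other token and every output mixing all of the inputs.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x has shape (sequence_length, embedding_dim); returns the same shape.
    Each row of the score matrix relates one token to every other token."""
    scores = x @ x.T / np.sqrt(x.shape[-1])                      # all-to-all scores
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ x                                           # each output mixes all inputs

tokens = np.random.randn(5, 8)   # 5 toy "words", each an 8-dimensional embedding
out = self_attention(tokens)
print(out.shape)                 # (5, 8): every output depends on every input
```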
That’s both a wonderful and a really scary thing. And you already see breakthroughs being driven by AI in so many scientific fields, where you’re able to, for example, reduce energy use in airplanes by figuring out exactly where micro wind patterns are, using massive amounts of data that can then be crunched by AI. And nobody out there who codes would say “I can code by myself” anymore—that’s quaint; of course I’m going to use AI. That’s awesome. But that’s also surely true of the people who deal in malware, the spear-phishers and cybercriminals. Can you imagine those days when we were doing that ourselves? Can you imagine the days when we had to create our own bioweapons in the lab? How dangerous it used to be? Now, we can create them using AI algorithms.
And the fact that people can have relationships with these algorithms. And it’s not that the algorithms are thinking. It’s not that they’re trying to become our robot overlords. There’s no consciousness happening. There’s no consciousness about to happen. But society has the potential of breaking under that, because you no longer know what is and what is not disinformation.
And that’s why this is so urgent: the risk of these AI tools being used not just by all of the private sector actors and public sector actors that want to drive efficiency and productivity for all of us, and improve our lifespan and our education and our health care—all things that I truly believe, and Mustafa believes, will happen in short order. But we also are going to see millions and millions of tinkerers and bad actors that are going to do horrible things with this AI if we do not govern it appropriately, if we don’t regulate it appropriately, and do so without breaking the innovations, without stifling corporations.
So I think of this very similarly to our last wave of globalization. And if you read Hans Rosling, or you look at Our World in Data, or Steve Pinker, and you look at the last 50 years, you go, “Oh, my God, the human trajectory.” You look at how amazing that’s been for lifespan and education and health care and everything else. And then you realize that there were these horrible negative externalities that got us the anti-establishment populism that we have today, that got us the climate change that we have today. And the companies that profited from all of the benefits of globalization have not paid for the negative externalities that have come from that globalization, from globalism. They continue to profit from it, but they won’t pay for it.
And the negative externalities that will come from AI are not going to come in 50 or 100 years; they’re going to come now. They’re going to come at the same time as all of the positive externalities. And so we’re going to have to figure out a way that gets paid for. Because if no one is paying for it, then you know who’s going to pay for it? We, as citizens, are going to pay for it. And I think we already see the beginnings of that.
Ian, I want to linger on one particular set of uses for these tools. And that’s the set of uses that if you’re sitting in a foreign ministry or ministry of defense or ministry of state security in Beijing or in Washington or in Pyongyang or anywhere else you’re thinking about, what are the ways that this will affect geopolitics directly? Even before we get into the shifts in power, how are geopolitical actors going to be using AI tools, whether in war, diplomacy, or intel, as you look over the next three to five years?
They’re clearly going to use it in the digital security space; in other words, to try to understand how data is protected and also how data is stolen—and how elections can be swayed, how misinformation and disinformation can change what people believe, the propaganda wars and vehicles that we already see happening. And I think the United States—and democracies—are particularly vulnerable to this.
The Chinese have a much harder line in terms of what AI LLMs [large language models] are allowed to do, what data they can train on, and what people can use them for, because they understand that they want to align these things with political stability. And if they give up some economic growth as a consequence, that’s just fine, especially in the consumer digital market. For the United States, it’s the private sector corporations doing all that driving that are going to benefit; but the fact that civil society might be an unwitting victim of all of that is not their fault. And I think that foreign governments absolutely can take advantage of that.
One of the biggest changes in the way I’ve thought about the world—Dan, you’ll remember, I wrote a book back in 2006 called “The J Curve,” and it looked at the relationship between a country’s stability and its openness. Not its democracy, but its openness—in other words, the ability for information, goods, services, people, ideas, to get in and out of its borders and to travel within the country. And at that point, countries that were stable because they were open were more stable than countries that were stable because they were closed. In other words, ultimately, the United States was more stable than China. Right?
Well, now that we’ve gone through the beginnings of the AI revolution, and the data revolution, and the surveillance revolution, I increasingly feel like that J curve is a U. I increasingly feel that authoritarian states wielding technology can become more politically stable by being closed, while the United States, an open society whose technology companies control these algorithms and are susceptible to foreign actors, governmental and nongovernmental, who can take advantage of that, finds its openness actually undermining and weakening the political stability of democracies.
And again, that is an issue that Mustafa and I are fundamentally concerned about because we don’t want that trajectory. We don’t want the solution, over the next five years of AI explosion, to be that democratic governments have to become much more authoritarian and centralize control over information. That’s not a society that Mustafa and I are looking forward to living in. And that’s one of the reasons we talk about the necessity of a hybrid model. Because if we don’t have a hybrid model, we’re worried that the outcome, whether it’s governments trying to control everything or technology companies controlling everything by default, will actually be really bad for civil society.
Let’s get to the hybrid model. The two of you capture really nicely, at a high level, the objective for policymakers and for governance when it comes to AI development over the next few years. You say that the task is “to identify and mitigate risks to global stability without choking off AI innovation and the opportunities that flow from it.”
Mustafa, you are relatively unique in that you’ve spent time working in politics and government, but also have done some pioneering work in AI. It’s easy to watch a congressional hearing—even for me, who has none of the technological expertise you do—and be chagrined at the lack of even basic awareness from members of Congress and other policymakers. But in fairness, it is hard; this is fast-moving stuff; these are people who are tracking lots of different issues. What is the state of awareness, of expertise, in the policy community when it comes to these kinds of questions? And what would you do to improve it; what would it take to get it to the place where it needs to be?
That’s a great question. I think that the state of awareness is higher than it’s ever been, and radically higher than it was a year ago, which is pretty remarkable. I think there’s a deep concern. And generally, everyone is aware that this is the beginning of a new transformation on the same scale as the Internet, potentially larger. I think that the expertise is unsurprisingly a little bit sparse. But people are often too downbeat about that lack of expertise, almost despairing that we’ll never be able to assemble it, and I think there are a number of counterexamples that are worth attention.
The expertise in the European Union is exceptional. Setting aside what you think about the final outcome of the EU AI Act in terms of its policy proposals, the depth of understanding with respect to how AI is trained, what sort of settings it’s used in, what potential risks it creates, how it should be treated differently depending on the different application environments and the stakes involved—it’s quite remarkable. I think it’s exceptionally talented people with a very accurate technical assessment of what it can and can’t do. And that is, frankly, a testament to their slow, deliberate, technocratic attention that began three or four years ago. They’ve been very good at consulting and bringing in a very wide range of academic experts, technical experts from industry, and so on.
So I think this is eminently doable. What it takes is that over the next few years, it’s pretty essential to me that every cabinet should have its own CTO [chief technology officer], if not a responsible person for AI itself. I think we’ve been far too fixated on the data. Whether government data is valuable, whether it should be made available, whether we have our own sort of sovereign data stores—this is a complete distraction. We should focus less on the data and more on the experimental efforts to test the algorithms themselves.
Essentially, this is going to land us in the behaviorist regime, right? Just as I can’t interrogate the detailed connections in your brain that lead you to say something to me in this given moment, I can’t interrogate your underlying training data. I think it’s quite reasonable to assume that we should take a behaviorist approach to evaluating these models. Are they consistently and reliably producing the same answer given a particular question? How do we evaluate those answers with respect to a distribution of fairness or bias? I think the real question is, how do we assemble the necessary technical expertise on the government side to be able to pragmatically interact with the frontier model developers—and, indeed, with the open-source community?
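A minimal sketch of what one such behaviorist probe could look like, assuming a generic model interface; the ToyModel stub, the example question, and the trial count are invented for illustration rather than drawn from any real evaluation standard:

```python
import random
from collections import Counter

class ToyModel:
    """Stand-in for a deployed model; in practice this would wrap an API call."""
    def generate(self, prompt: str) -> str:
        return random.choice(["approve", "approve", "deny"])  # deliberately inconsistent

def consistency_score(model, question: str, trials: int = 100) -> float:
    """Ask the same question many times and report how often the model gives
    its most common answer (1.0 means fully consistent and reliable)."""
    answers = [model.generate(question) for _ in range(trials)]
    return Counter(answers).most_common(1)[0][1] / trials

# An evaluator would run probes like this across many questions and compare
# the distribution of answers across groups to look at fairness or bias.
print(consistency_score(ToyModel(), "Should this loan application be approved?"))
```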
So I’m pretty confident that the frontier model developers—Inflection, OpenAI, DeepMind, Anthropic, Facebook, Microsoft, Google—we’ve already signed up to President Biden’s voluntary commitments, and I’m pretty confident that that’s going to continue, and there will probably be more stringent regulations that come out of that. We’ve also got to pay attention to how this plays out in the open-source space, which is clearly much less directly accountable and more of an unwieldy evolution of ideas in its own right. And I think that’s a much more open question.
There’s a part of the readership that I’m sure is very skeptical of the idea that private companies are going to approach these questions with the right kind of public-spiritedness and attention to risks. Why should we trust the private sector actors in this context?
I think there’s a pragmatic reality that the models are being developed principally in the private sector today. And to jump back momentarily to your previous question, how do we develop expertise on the government side—governments have to be building models. That in itself is an unpopular opinion, because governments have lost confidence in their own technical ability because they’ve been stripped of expertise in the last four decades. And that’s a mistake, because you can’t regulate what you don’t understand; and in order to understand it, you have to build it. You can’t truly understand what you simply commission.
The downside of outsourcing public sector services over the last 40 years is that we’ve lost the institutional intuition for making, developing, shaping, and so on. And I think we shouldn’t expect governments to be at the cutting edge of research, inventing, and driving forward the process, but that doesn’t mean that a huge amount can’t be learned from actually training and running big models. Much of this expertise is available on the open web. And it’s clearly the hottest property in town, so everybody is trying to figure it out, even if they don’t know it. So I think that’s an important step in this picture: it’s actually understanding the models well enough to be able to make a bit of progress with them.
With respect to trusting the companies, I think this really isn’t about trust. This is about cooperation and demonstrating good behavior and collaborative intent in practice. The pragmatic reality is that this is where the models are being developed today. And companies are open to cooperating—I think that in itself is a very different tone to what we may have seen ten years ago, 20 years ago, 30 years ago, and certainly in other sectors outside of technology. So I think that we should capitalize on that momentum and see that as an opportunity to maybe start doing politics differently. This is an opportunity to be proactive, to adopt the precautionary principle, and to figure it out together. We should try and lean into that.
I want to add to that. I agree strongly with what Mustafa just said, that this is not about trusting corporations. Corporations, particularly in the United States, in many sectors, have captured the regulatory process. They have used their privileged access to U.S. political leaders on both sides of the aisle, and they have used their deep pockets and their ability to lobby—including with dark money that is untraceable by the public—to get outcomes that have nothing to do with the interests of the United States populace or the global populace. So we have a problem here.
But what we’re talking about is not trusting corporations. We’re talking about the fact that right now, the corporations are acting as sovereigns, with autonomy, and they need to be made responsible. And the way you make them responsible is by actually bringing them into new institutions where they are going to be part of the governance structure, that they’re going to become essentially treaty signatories, they’re going to have obligations—they’ll have rights too, but they’ll have obligations.
You think about what’s happened—I wrote about this in the technopolar piece a bit with Elon Musk. You’ve got a guy who uses his technology for the benefit of the Ukrainians. We all think that’s good—until he decides not to, and then we think it’s bad. And what would happen if it was Taiwan? Well, he wouldn’t provide that support. Why not? Well, because he’s got business in China. Look, the whole point here is that he should be sitting down and essentially be a part of NATO. He’s got to be a part of the military-industrial-technological complex.
So what we’re saying here with AI, where everything is dual use—these are general models, everything can be used for national security purposes, everything can be used for economic purposes—you can’t have that conversation unless the companies are part of the actual fabric of governance. And they have to be made responsible.
Now, Mustafa is one of the seven that sat down with Biden and came up with a bunch of principles that they are voluntarily ascribing to, and they’re largely things they were already basically doing. So, so far, there’s not really anything that is going to put them out. But the point is, this is the beginning of a process that is going to lead to governance. But the United States isn’t going to be able to come up with it or police it by themselves. They wouldn’t have the ability. So from day one, what we are saying is that this architecture is going to have to be built together.
And yes, the EU has a lot of expertise, though they don’t have many of the companies. And the Americans have a lot of power they can throw at it, but they don’t really have a lot of the expertise. It doesn’t matter. The point is this has to be global. Everyone’s going to have to work on it—especially because, in very short order, the technology and the number of actors involved are going to explode.
So what matters is the agility and the ability to respond and reshape this architecture together as the technology changes radically. I mean, if the technology were only going to stay more or less incrementally where it is right now, I don’t think Mustafa and I would have needed to write this piece. What we have right now is probably okay enough for the present state of GPT. But this is going to look very, very different for all of us—for national security and our societies and the functioning of our economic system and everything else—in very short order. Most of the leaders in positions of power right now are going to still be there when we’re dealing with this.
Ian, let me stay with you for the moment on what, from where I sit, looks like one of the biggest challenges to getting this in place, and that’s the geopolitics. Obviously, this comes down to the United States and China, especially. You spend a lot of your day, when you’re not thinking about technology, looking at some of these competitive dynamics. And you two note in the piece that “AI supremacy will be a strategic objective of every government with the resources to compete.” That’s clearly already happening. How do you see the U.S.-China dynamic affecting this? You manage to find some hope that there will be a degree of cooperation. What’s the case that there is some hope here?
One reason we think there’s hope is because we think it’s going to be obvious, in relatively short order, that this is getting very dangerous if it’s not regulated. The Americans and the Chinese both have a lot at stake in the existing system functioning. And we see that. We see that not in terms of the diplomatic relations between the two countries, which are kind of hostile and certainly devoid of trust. But we see it when we talk about macro-prudentialism, we see it when we talk about financial architecture. Both the Americans and the Chinese understand that we need the financial markets to work—that we need to avoid systemic threats.
And so when you create the architecture around that, like the Financial Stability Board, the Bank for International Settlements, the IMF—these are organizations where the Americans and the Chinese and pretty much every other actor say this is above geopolitics. Because the failure of any one individual actor could metastasize and bring down the financial markets, and we all don’t want that because we’re interoperable. We have to find ways to ensure that we avoid significant risk in the system, we avoid instability. And when a major crisis occurs, we need to all work together to ensure that that crisis is minimized and that the system stays stable.
Now, we don’t yet know how much proliferation there’s going to be in AI, but it seems likely that there’s going to be a massive amount. And that means that, first of all, today, there are all these private sector actors that are driving AI. Granted, most of them are in the United States and China. But that doesn’t mean that they’re American and Chinese in their interests or in their capabilities. They’re not necessarily fully or all that much aligned—certainly with the United States, and you could even make that argument with China.
And when it’s open source, and it’s anybody with access, and it’s a whole bunch of other actors that are involved, increasingly, the Americans and the Chinese are going to have higher incentives to create a techno-prudential response, to be part of a geo-technology stability board. And I have no illusions, Mustafa has no illusions that it will be easier to get private sector and public sector actors together in the West than it will be to bring the Chinese on board. But as you see the proliferation and the crises—and also, as you see the opportunities of being a part of it—we are hopeful this is something that can be created more globally.
Mustafa, those of us in the foreign policy and national security community are very focused on this notion of AI supremacy and competition between the U.S. and China. When you look at the state of AI development in China, is there anything that we should be attuned to? And is this competitive way of framing it meaningful at all? Is this kind of missing the point?
The problem with this competitive-arms-race framing is that it fundamentally leads to a self-fulfilling prophecy. And, of course, this isn’t unique in the context of AI. Unfortunately, it’s always been the story of technology as a driver of geopolitical and military advantage. So, whilst it’s sort of sad and frustrating in a way to see, inadvertently, this kind of slow train crash unfolding before our eyes, I’m not going to say that I can see an alternative framing. I can clearly see that it confers massive economic and military advantages. But it is still worth pointing out that the frame itself clearly accelerates the dynamic.
So I definitely think it’s true that the interventions that have been taken so far on the export controls are likely to have a seismic impact on China’s ability to train the next generation of frontier models. I mean, fundamentally, all training depends on these chips, these Nvidia GPUs, and each generation is way, way more powerful than the previous one. So clearly, denying China access to that frontier is challenging; but at the same time, triggering the export controls gave them everything they needed to focus hundreds of billions of dollars of investment internally. All it did was say, “Okay, we’ve fired the starting gun. The race has officially begun, and this is really a declaration of economic war.” And arguably, maybe we’ll look back in years to come and see it as the moment when a true cold war was initiated.
And I think that’s actually very, very scary. Because if you empathize with their predicament for a moment and consider that 100 miles off their shore is not just the most strategic asset on the international table at the moment, TSMC [Taiwan Semiconductor Manufacturing Company] in Taiwan, but also a territory that they have a genuine and real claim to—right, this is not some made-up story. I mean, this is clearly an inherent part of their history. So I think we have to ease off our own angry rhetoric, take a more empathetic approach, and consider that perhaps Taiwan is the shared asset that we need to drive peace and stability, not drive war. Because it would be utterly catastrophic to progress on the entire planet, for all of us, if that were somehow destroyed in a war or blown up through self-sabotage by people who didn’t want to be taken over.
So there’s a mutually assured destruction incentive of sorts to drive cooperation and concession—and we’ve started with a very aggressive position. But in some ways, it is a containment strategy. And if we can use that, the release of those chips, in some ways to drive cooperation on safety or in other areas, I can see that being quite an important incentive for collaboration in years to come.
Yeah, Mustafa’s completely right. We’re saying this is not necessarily the time when we can afford to be leaning into maximal mistrust between the two largest economies in the world. This architecture will need to be global. This architecture will need the participation of all the principal AI actors to work, and that includes the Americans and the Chinese. And Dan, you know that even saying something like that in Washington right now, in many circles, will lead people to think you guys are smoking something.
But technology doesn’t really care about how you react to this politically. Mustafa knows very well: this tech is going to continue. Like as he says in his new book, containment is not an option. That, we ain’t doing. And so we can try to contain the Chinese in terms of their capabilities in some things; we cannot contain this technology globally.
Mustafa used the phrase “mutual assured destruction.” That is often the nuclear analogy, and that basic idea is often invoked in discussions of what it will take to contain and regulate AI. You’re both a bit skeptical of the value of that history. So before we close, I want to get some sense of why you don’t see that as useful—or if you do see useful lessons in the history of arms control, where it is. So Mustafa, when it comes to the proliferation problem, why is this different from nuclear weapons?
So, nuclear weapons are incredibly capital-intensive. They require intensely protected know-how and expertise which has been the subject of anti-proliferation efforts from right after the war. The fact that uranium-235 is so difficult to find and difficult to handle, and has such an immediate carcinogenic impact on anyone who mishandles it, is already a huge disincentive to doing so. And so, in many ways, all of the characteristics of the technology and what’s involved in developing it are just very different from the fact that most of AI development is actually taking place in open-source code, on the web, using open-source datasets, and there are just fewer choke points than there are with nuclear.
In my new book, “The Coming Wave,” which is out now, I actually address this question specifically because people often ask themselves, isn’t there a lot to learn from nuclear non-proliferation? And indeed, there is. We made it an international priority. The horrors of those weapons were a huge motivator to enable the countries that did have weapons to use all the economic incentives and threats and all the political power imaginable to try and stop that proliferation.
In fact, it’s an amazing story. We’ve reduced our nuclear arsenals seismically in the last 30 years. We actually reduced the number of nuclear powers from 11 to 8 over the course of 70 years. And so far, so good. We haven’t actually had any small bad actors get access. And so there are some comparisons to AI here, but it’s not as straightforward as people like to assume.
Ian, you know that history extremely well. Any other lessons or notes of caution that you draw from it?
Mustafa and I both agree that we’re not going to see nuclear agreements that suddenly transform the AI landscape. It is true that geopolitically, we’ve been backsliding, even on the nuclear side, recently. There were weeks during the war [in Ukraine] where people in the White House thought there was a one in five, one in four, one in three chance that the Russians would use a nuclear weapon. We hadn’t had that worry since 1962. The North Koreans have nukes; we tried to talk to them, and it’s basically fallen off. The Chinese are now planning on expanding their nuclear stockpiles massively over the next ten years with uranium that the Russians are providing them. Still, historically it’s gone well, and it’s gone well because it’s really expensive and because we were horrified by Nagasaki and Hiroshima; and we all saw “The Day After,” and it had an impact on us.
But AI is going to be a lot harder. It’s going to be a lot harder to manage and it’s going to be impossible to prevent from proliferating. And that means that the regulatory framework will have to be much more robust. It can’t just be about deterrence. It has to be about taking all of the responsible actors and having them act in advance. Because the state of nuclear technology has been largely the same for decades. And we decided we weren’t going to try to do Star Wars defense technology and the rest, but the fact is that the offensive capacity to blow up the world, whether it’s ten times over or 50 times over—we get it, and we’ve got it for generations now. The disruptive implications of AI are going to be completely world-changing within a decade. And our governance structures just don’t move that fast; our election cycles don’t move that fast.
So, I agree with Mustafa: the Europeans have incredible technological expertise in the EU. And yet, the EU is one of the most slow-moving bureaucracies out there. And the tech companies move fast. They’ve got the money, they are highly incentivized, they are super competitive. And they’re spending all their time on it because they know if they don’t get it, somebody else breathing down their neck is going to. So again, you have to bring the tech companies directly into this process because otherwise, we won’t move fast enough.
I mean, for me, it’s not that I fetishize the idea of tech companies becoming our new overlords. In the case of a lot of these individuals, I don’t trust them as far as I can throw them. But I also understand that we have no choice.
If you told me we had 20 years to get it right, 30 years, 50 years—I mean, climate change, heck, we’re eventually going to get there. We’ll get to net zero. We’ll have the new technologies; you know, at the cost of a lot of species and a lot of human beings, but we will eventually get there. We don’t have climate change time on AI. We can’t get it wrong for that long; we can’t ignore it for that long. We can’t let vested interests control the outcomes for that long. And that means that we need hybrid state- and private-sector governance on this yesterday. We needed it a year ago. But, you know, Mustafa and I needed to get together and write the damn piece.
That is an appropriately bracing note to end on. Mustafa and Ian, thank you for the wonderful piece. There’s so much in it that will really be just the beginning of a longer process of trying to figure this out, but thanks for doing it and for joining me today.
Thanks a lot, Dan. It’s been a lot of fun.
Thanks, Dan.
Foreign Affairs invites you to join its editor, Daniel Kurtz-Phelan, as he talks to influential thinkers and policymakers about the forces shaping the world. Whether the topic is the war in Ukraine, the United States’ competition with China, or the future of globalization, Foreign Affairs' biweekly podcast offers the kind of authoritative commentary and analysis that you can find in the magazine and on the website.