Nearly eight years after Russian operatives attempted to interfere in the 2016 U.S. presidential election, U.S. democracy has become even less safe, the country’s information environment more polluted, and the freedom of speech of U.S. citizens more at risk. Disinformation—the deliberate spread of false or misleading information—was never the sole domain of foreign actors, but its use by domestic politicians and grifters has ballooned in recent years. And yet the country has been unable to rein it in because the very subject has become a partisan, politicized issue. Lawmakers have not been able to agree to common-sense reforms that would, for instance, require more transparency about the actions of social media companies or about the identity of online advertisers. In the process, they have enabled an environment of hearsay, in which many people, particularly conservatives, have used false or misleading information to raise the specter of a vast government censorship regime. That chimera of censorship chills legitimate academic inquiry into disinformation, undermines public-private cooperation in investigating and addressing the problem, and halts crucial government responses. The result is an information ecosystem that is riper for manipulation than ever.
I have had a unique view of this slow-motion failure. Between 2012 and 2016, I worked on democracy-support programs in Europe and Eurasia when Russia was auditioning the disinformation tactics it would later employ in the United States. I heard regularly from my colleagues in Georgia, Poland, and the Baltic states about Russia’s attempts to influence their political systems and derail their efforts to integrate with the West; Russian agents would launch cyberattacks, stage paid-for protests, and deploy armies of trolls, all in an effort to create the illusion of grassroots support for pro-Russian causes abroad. Officials in Washington and Brussels almost uniformly ignored these operations. I then watched from Kyiv in 2017 as the United States grappled with revelations of Russian interference in the U.S. presidential election: the Kremlin had sought to influence U.S. voters by spreading propaganda and lies on social media and by hacking U.S. political campaigns. By contrast, Ukrainians were not surprised to see Russia brazenly trying to manipulate the democratic process in the United States. After all, the Kremlin had used the same online assets, including the Internet Research Agency—the infamous St. Petersburg–based online propaganda company—to call into question the legitimacy of Ukraine’s 2013 Euromaidan protests ahead of Russia’s illegal annexation of Crimea in 2014.
From 2018 onward, I briefed U.S. and foreign officials and testified multiple times before Congress at the invitation of both Democrats and Republicans, always reminding lawmakers that disinformation should concern both parties. Along with many of my colleagues in academia and tech, I called for more nonpartisan action from legislators and transparency from social media platforms, championed investment in information literacy programs and public media, and encouraged government agencies to communicate in a more agile and compelling way to compete with the salacious and captivating narratives of disinformation.
But then the same salacious, captivating disinformation narratives came for me. In 2022, I was appointed the executive director of the Disinformation Governance Board, a new body in the Department of Homeland Security that would help coordinate anti-disinformation efforts within the agency. At no point did the board or I have the mission or ability to suppress or censor speech—the board’s charter made that explicitly clear. But soon after its unveiling, partisan political operatives pounced and subjected the board and me to a baseless and ruthless assault, claiming that I sought to clamp down on conservative speech. They misrepresented the board’s purpose, maligned me and my work, and spurred a torrent of death threats targeting me and my family.
Instead of backing the board and me, the U.S. government caved. It paused the activities of the board. I resigned, and the board was disbanded a few months later. The United States had failed to stand up to the very disinformation it had sought to fight. And its broader, ongoing struggle to grapple with disinformation bodes ill not just for the country but also for democracies around the world.
The United States has been slow to reckon with the threat of disinformation. In a speech in 2022, former President Barack Obama acknowledged his “failure to fully appreciate at the time [in 2016] just how susceptible we had become to lies and conspiracy theories, despite having spent years being a target of disinformation” himself. Unfortunately, greater awareness of the perils of disinformation has not produced the necessary corrective action.
Political polarization, itself fed by disinformation, has made it hard for U.S. leaders and lawmakers to curtail the spread of untruths. Consider the Honest Ads Act, a sensible, bipartisan bill proposed in 2017 by Senator Amy Klobuchar, Democrat of Minnesota, and Senator Mark Warner, Democrat of Virginia, and initially cosponsored by Senator John McCain, Republican of Arizona, and Senator Lindsey Graham, Republican of South Carolina. It has languished in what congressional staffers call the "disinformation graveyard." The bill sought to close a glaring loophole in existing law: political advertisers must disclose their purchases of television, radio, and print ads, but they need not do so for online advertisements. As a result, foreign states such as Russia were able to quietly buy online ads in 2016 in a bid to influence U.S. voters. And yet the push to close this loophole after the 2016 election, an obvious and straightforward reform, failed to gain traction. The bill never made it out of committee in the Senate; the issue had become too politicized.
After 2016, Congress attempted in other ways to become more proactive about fighting disinformation, hauling tech executives before committees and grilling them about an array of online harms, including disinformation, often with ill-informed lines of questioning. These hearings revealed that tech companies were even more unprepared to handle foreign influence campaigns than previously understood. But political polarization stymied any bipartisan action. Democrats overplayed their hand by sensationalizing the extent to which Russian disinformation swung the 2016 election; Russia's meddling exacerbated existing societal fissures in the United States, but it did not on its own hand the election to Donald Trump. For their part, Republicans dismissed the notion that Russia had attempted to support Trump, despite reams of open-source evidence showing that Russian operatives had in fact sought to do just that. Republicans stonewalled any attempt to regulate social media, since doing so would have been seen as abetting a Democratic agenda, even as they privately sent their staffers to discuss the threat of online influence campaigns with disinformation researchers. By the 2020 election, Republican officials and lawmakers had abandoned even that private posture; they viewed action on disinformation as the province of their opponents and at best ignored the issue or, at worst, decried its very existence as a fiction concocted to legitimize the censoring of political opponents.
As the public sector has struggled to confront disinformation, so, too, has the private sector. Between the 2016 election and the start of the COVID-19 pandemic, social media platforms began to patch up some of the vulnerabilities they had disregarded for years. They expanded the teams working to detect and mitigate foreign interference. They spent a small percentage of their many billions of dollars of profit to fund research and support civil society organizations working to counteract the trends their own platforms enabled. They experimented with inserting "friction," such as pop-ups, warnings, and overlays, between users and potentially harmful content. They somewhat begrudgingly engaged in transparency reporting, announcing when they took down malign content posted by foreign state actors. Notably, before its takeover in 2022 by the tech mogul Elon Musk, who has since renamed it X, Twitter was the most transparent of all the platforms, posting relevant data sets and sharing the results of its tests of various mitigation measures for anyone to explore.
In cooperation with the U.S. intelligence community, social media companies managed to expose several foreign operations, including the Peace Data scandal in 2020, when Russia created a fake news website and paid real journalists to write articles critical of the U.S. government in a bid to turn left-wing U.S. voters against Joe Biden, the Democratic presidential nominee. But the platforms still frequently missed the mark, failing to address harms that originated closer to home. Members of marginalized groups had to endure hate speech, threats, and harassment encouraged both by the country's toxic offline discourse and by social media algorithms that amplified divisive, vitriolic content. Similarly, domestic disinformation, spread to advance particular political causes or to win attention and favor, exploded as people spent more time online during the pandemic and the 2020 election drew near. Although there has always been lying in politics, the reach of social media meant that these lies traveled faster and farther than ever before and were targeted at the individuals most vulnerable to them.
Supercharged disinformation threatened both public health during the pandemic and the health of American democracy before and after the 2020 presidential election. For instance, powerful politicians, government officials, and media personalities amplified anti-vaccine conspiracies while secretly getting vaccinated themselves. Trump, many of his advisers, and pro-Trump media personalities repeated that behavior when they amplified bogus conspiracy theories about the presidential election that they knew to be false, in the process producing the January 6 insurrection at the Capitol. In the last year of Trump's term, Republicans had a choice: they could return to rhetoric based in reality, or they could enshrine disinformation as part of U.S. politics. They chose the latter.
Starting in 2018, in the face of these challenges, some people inside the federal government, within social media platforms, and in civil society attempted to work together to protect elections, public health, and public safety. These officials, researchers, and employees of technology firms published reports, exchanged insights, and strove to ensure that the type of lapses that facilitated Russia’s coordinated influence campaign in 2016 would never occur again. Their efforts helped protect the 2020 election. Christopher Krebs, the Trump-appointed director of the Cybersecurity and Infrastructure Security Agency in the Department of Homeland Security and a prominent voice in this public-private partnership, noted soon after the 2020 vote that the election was the most secure in the country’s history, contradicting the conspiratorial narrative that it had been stolen from the Republican Party. Trump subsequently fired him via a tweet.
Disinformation nearly upended the peaceful transfer of power from Trump to Biden. And yet the Biden administration has not sufficiently reckoned with the problem. Rather than immediately setting out a whole-of-government strategy to address disinformation as the threat that it is and issuing, as I urged in Foreign Affairs in 2020, "a unifying policy directive to guide agencies in working together to combat disinformation," Biden and his advisers left the creation of such a policy to languish in the endless debates of the National Security Council. Predictably, in a sprawling government that lacks an overarching strategic vision for handling the disinformation threat, efforts to address disinformation have made little progress during Biden's years in office. Agencies have duplicated efforts, fought turf wars, and sorely lacked internal coordination.
In the spring of 2022, the Biden administration did launch a partial effort to improve internal coordination on disinformation at the Department of Homeland Security, with the creation of the Disinformation Governance Board. As director of this new body housed in the DHS’s policy shop, I was to help the department manage its existing work on disinformation. But it amounted to little more than an intra-agency working group with no law enforcement authority, no budget, and no full-time staff other than myself. In the board’s founding documents, the department made explicit the board’s lack of purview or ability to censor, suppress, or police speech. The board was meant only to facilitate coordination within the department, a smaller-scale version of the type of activity that U.S. allies, including the United Kingdom, had implemented to combat disinformation.
The Disinformation Governance Board failed before it even got off the ground. From my first week on the job, I lobbied for the board to make a public announcement about the work it planned to do in order to stave off the inevitable misrepresentation of its mission. My research in central and eastern Europe showed that it would be easy for provocateurs to twist the benign efforts of such a government body into something sinister. Critics in the Czech Republic, for instance, had maligned the country's Center Against Terrorism and Hybrid Threats, a counterterrorism and counterpropaganda body launched in 2017, as an attempt to police speech even though it had no such authority. But my suggestions for a full rollout for the board, complete with the requisite media and congressional briefings, were rejected.
DHS announced the board in an opaque statement eight weeks later. Within hours, the baseless ideas that the board was an Orwellian "Ministry of Truth" and that I was "President Biden's chief censor" were trending on social media, even though the board could not and would not be restricting or refereeing speech at all. The facts did not matter to those using the board and my appointment as a political football. Within days, my personal life became a matter of intense intrigue and speculation. I was, in this right-wing telling, the young, female, easy-to-hate face of the Biden administration's latest alleged "treason." My professional reputation and my long track record of bipartisan work were deliberately tarnished. After Fox News characterized me as "unhinged," "partisan," "unserious," and an "illiterate fascist," congressional Republicans joined in a months-long campaign of lies about the board and me. I was depicted in deepfake pornography. I received an onslaught of online abuse that even threatened my family, including my unborn child, weeks away from making his way into the world.
Set up to protect the country from disinformation, my office was instead undone by it. The U.S. government did not know how to address the campaign against the board or the harassment targeting me. Instead of mounting a response to the falsehoods and hate speech, or even employing a basic countermessaging campaign as I recommended, the department and the administration essentially rolled over, issuing a weak fact sheet and placing the secretary and the White House spokesperson on the defensive days after the deluge started. Within weeks, the department decided to pause the board’s activities. (The board would later be disbanded.) In May 2022, after just two months on the job, I made the decision to leave an organization and an administration that no longer seemed willing or able to stand up to industrial-strength lies.
After attacking me and attempting to push me out of public life, an ongoing effort, as evidenced by the death threats I still receive regularly, Republicans trained their sights on others working to counter disinformation. Through the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, formed in January 2023, they have demanded documents and private communications, issued subpoenas, and named leading disinformation researchers in lawsuits meant to occupy their time and discourage them and others from pursuing further work to understand the country's troubled information environment. The extreme right has paired these "investigations" with lawsuits directly targeting federal government entities and research institutions, chilling officials and scholars seeking to expose disinformation. Platforms themselves have rolled back many of the measures they implemented during the Trump administration to keep disinformation in check; YouTube is no longer removing conspiracy theories that claim the 2020 election was stolen, and Facebook is allowing paid political advertisements that contain such claims. Meanwhile, federal agencies have stopped cooperating with platforms, no longer electing to share the types of intelligence that brought down foreign influence operations such as Peace Data in 2020, likely for fear of litigation against government entities and employees.
Americans have every right to ask questions, and should, about how their government is protecting both the First Amendment and their national security, but those questions must be rooted in reality; the campaign against disinformation researchers is not. If Republicans truly fear that social media firms are censoring conservative speech, they should pass bills providing needed oversight of the platforms themselves. After all, if social media companies were more transparent, the American public would get a better and less politicized picture of the decision-making inside these firms.
Over two billion people will cast ballots in elections this year, including in the United States. Elections abroad are vulnerable to the kinds of disinformation rife in this country. In allowing politics to undermine efforts to establish social media transparency and oversight, the United States has failed to lead the world in protecting the truth. And as long as the United States continues to fail, disinformation will only grow more pervasive and harder to contain.