Generative artificial intelligence—AI that can create new text, images, and other media out of existing data—is one of the most disruptive technologies in centuries. With this technology now more available and powerful than ever, its malicious use is poised to test the security of the United States’ electoral process by giving nefarious actors intent on undermining American democracy—including China, Iran, and Russia—the ability to supercharge their tactics. Specifically, generative AI will amplify cybersecurity risks and make it easier, faster, and cheaper to flood the country with fake content.
Although the technology won’t introduce fundamentally new risks in the 2024 election—bad actors have used cyberthreats and disinformation for years to try to undermine the American electoral process—it will intensify existing risks. Generative AI in the hands of adversaries could threaten each part of the electoral process, including the registration of voters, the casting of votes, and the reporting of results. In large part, responsibility for meeting this threat will fall to the country’s state and local election officials. For nearly 250 years, these officials have protected the electoral process from foreign adversaries, wars, natural disasters, pandemics, and disruptive technologies.
But these officials need support, especially because of the intense pressure they have faced since the 2020 election and the baseless allegations of voter fraud that followed it. Federal agencies, manufacturers of voting equipment, generative AI companies, the media, and voters need to do their part by giving these officials the resources, capabilities, information, and trust they need to bolster the security of election infrastructure. Election officials also need to be allowed to safely perform their duties, from the opening of voting through final vote verification. Generative AI companies in particular can help by developing and making available tools for identifying AI-generated content, and by designing, developing, and deploying their capabilities with security as the top priority so that nefarious actors cannot misuse them. At stake is nothing less than the foundation of American democracy.
Generative AI software creates original text, images, and other types of media using statistical models that generalize the patterns and structures in existing data. Applications built on large language models, such as ChatGPT, take in text as a prompt and produce new text as output. This form of generative AI can craft emails, standup routines, recipes, or college term papers in seconds. Other applications can take text inputs and create synthetic media outputs (often called deepfakes), like the viral fake photo of Pope Francis wearing a puffer jacket. AI can also generate voice-cloned audio files from a mere snippet of recorded speech. In September, for example, a fake audio recording popped up on Facebook just two days before Slovakia's elections; it used voice cloning to fabricate an interview in which the leader of one of Slovakia's progressive parties appeared to discuss with a journalist how to rig the election.
These technologies for generating synthetic text, speech, images, and video have become increasingly accessible, lowering the barriers for those wishing to meddle in U.S. elections. In recent years, foreign adversaries have attempted to undermine the security and integrity of U.S. elections by launching cyber-intrusions, carrying out hack-and-leak operations, and leveraging troll farms and networks of social media bots to spread falsehoods. Notably, an increasing number of foreign actors are entering this space: in December, the Office of the Director of National Intelligence disclosed that the scale and scope of foreign activity targeting the 2022 U.S. midterm elections exceeded what the U.S. government detected during the 2018 election cycle. "The involvement of more foreign actors probably reflects shifting geopolitical risk calculus, perceptions that election influence activity has been normalized, [and] the low cost but potentially high reward of such activities," the intelligence report said. So although these threats are not new, today's generative AI capabilities will make such activities cheaper and more effective. Specifically, AI-enabled translation services, account creation tools, and data aggregation will allow bad actors to automate their processes and to target individuals and organizations more precisely and at scale.
As the U.S. presidential election approaches, federal and state officials are acutely aware of the disruptive potential of generative AI. Although we did not observe any malicious use of generative AI in the state, local, and municipal elections held in over 30 states on November 7, we have seen the technology used in American political campaigning, in the Slovak election noted above, and in the November Argentine presidential election. As adversaries familiarize themselves with these increasingly accessible and powerful tools, it should be assumed that they will use them more often.
More than two billion people—one quarter of the planet—are expected to vote in various elections in 2024. With people going to the polls across the globe, concerns over generative AI’s effect on elections are not limited to the United States. As the United Kingdom’s National Cyber Security Centre has noted, this year’s British general election will be the first to take place against the backdrop of significant advances in AI. The center warned that “large language models will almost certainly be used to generate fabricated content, AI-created hyper-realistic bots will make the spread of disinformation easier, and the manipulation of media for use in deepfake campaigns will likely become more advanced.”
AI allows for easier and more comprehensive data aggregation, which in turn empowers malicious actors to undertake tailored cyberattacks, including spearphishing attacks targeting specific individuals or organizations. When this is combined with high-quality AI-generated content, even the most vigilant Internet users may be vulnerable. Generative AI can also help create strains of malware that are better at evading detection. Moreover, by helping to optimize the coordination and timing of botnet attacks, AI could enable more effective distributed denial-of-service attacks, in which an attacker takes a server offline, including servers hosting election-related websites, by flooding it with massive amounts of Internet traffic. Similarly, AI-enhanced tools could be used to overwhelm communications at election offices, whether through robocalls, texts, or emails. Although such attacks wouldn't affect election-related data, they could delay or prevent election officials from responding to actual voter inquiries and undermine voter confidence in the elections process.
Generative AI could also make other forms of digital attacks, including online harassment, easier. U.S. election officials already face an unprecedented level of hostility. Concerns over personal safety, including the fear that their home addresses and other personal information will be made publicly available (a practice known as doxing), have been one of the primary drivers behind a wave of resignations by experienced election officials across the country. Generative AI tools can significantly intensify such harassment, including doxing, by enabling the rapid, large-scale creation of content featuring personal information, fake compromising images, or threats.
Well before ChatGPT emerged, foreign actors used disinformation to target elections in the United States and around the world. With generative AI, the United States can expect an increase in the scale and sophistication of these efforts across a wide variety of tactics: misleading voters about candidates through false messaging or altered images; targeted voter-suppression campaigns that use generative AI to impersonate election officials and spread incorrect information about voting center locations and hours of operation; and deepfake images or videos of election workers casting and counting fake ballots, to name just a few. The hypothetical scenarios are endless, but the intention behind them is always the same: to undermine the American public's trust in the outcome of the election.
Despite heightened concerns, the United States has the power to head off the threat that the malicious use of generative AI poses to its democracy. The American electoral process is resilient, thanks in large part to the dedication of state and local election officials who work every day to administer, manage, and secure it. Election officials serving across some 8,800 election jurisdictions work tirelessly to identify, detect, and mitigate risks. Even before the advent of generative AI, election officials effectively defended election systems from the full range of cyber, physical, and operational risks, as well as the threat from foreign malign influence operations and disinformation. As a result, there is no evidence that any voting system has lost votes or been compromised in any other way in any national election since 2017, when election infrastructure was designated as critical infrastructure and a dedicated effort was organized at the federal, state, and local levels to track the effect of security threats on the integrity of the voting process.
Indeed, election officials frequently note that the only constant in election administration is the unexpected. Natural-born crisis managers, they are practiced in the art of adapting to any situation and finding creative solutions. Just look to Lee County, Florida, during the 2022 midterm election. Weeks before Election Day, the county was ravaged by Hurricane Ian, the costliest hurricane in Florida's history and the third-costliest weather disaster in U.S. history. Despite scores of displaced voters, devastated supply chains, and significant infrastructure damage, election officials rallied and successfully administered the election even though only 12 of the usual 97 polling locations were functional. Although these threats are clearly different in kind from the malicious use of AI, they demonstrate how election officials continue to overcome myriad complex challenges to the electoral process.
Over the past seven years, since the designation of election infrastructure as critical, election officials have moved aggressively to establish strong digital and physical controls on election systems and networks. They have implemented security measures to detect malicious activity more rapidly, and they have worked to reduce supply chain risk throughout election infrastructure by mandating that vendors take certain security precautions. They have migrated election websites to the more secure ".gov" domain to prevent spoofing (directing users to fake websites) and to make it easier for users to realize when they have been redirected to an outside site. Election officials have also partnered with the Cybersecurity and Infrastructure Security Agency to take advantage of threat information sharing, cyber-scanning services that identify vulnerabilities, cyber and physical security assessments, and incident response assistance.
To counter AI-enhanced cyberthreats, there are steps, many of them the same security best practices experts have recommended for years, that will help mitigate these risks. Specifically, state and local officials can make it harder for adversaries by enabling multifactor authentication throughout their networks; deactivating or deleting user profiles no longer in use and ensuring that users have only the access necessary for their specific roles; and using what's known as "endpoint detection and response" software to continuously detect cyberthreats such as malware or unauthorized access and enable rapid reaction to them. To combat increasingly sophisticated phishing attempts, election officials can also use email authentication protocols, such as SPF, DKIM, and DMARC, that help verify the authenticity of the sender and decrease the danger of malicious emails. To protect against doxing and other forms of targeted harassment, election officials should remove any personally identifying information from public-facing profiles, make personal accounts private to reduce access to photo imagery, and regularly request that personal information be removed from public records websites.
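To illustrate the email authentication point concretely: SPF and DMARC work by publishing sender policies as DNS TXT records that receiving mail servers consult before trusting a message. The following is a minimal sketch, using the open-source dnspython library and a hypothetical domain name, of how an election office might audit whether its own domain publishes these records:

    # A minimal sketch: check whether a domain publishes SPF and DMARC
    # records. Requires the open-source dnspython package ("pip install
    # dnspython"); the domain below is a hypothetical placeholder.
    import dns.resolver

    def get_txt_records(name):
        """Return all DNS TXT records for a name, or an empty list."""
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    def audit_email_auth(domain):
        """Report whether the domain publishes SPF and DMARC policies."""
        spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
        dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
        print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
              f"DMARC {'present' if dmarc else 'MISSING'}")

    audit_email_auth("elections.example.gov")  # hypothetical domain

A domain that publishes a strict DMARC policy (for example, "p=reject") makes it far harder for an attacker to send convincing phishing email that appears to come from the office itself.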
To protect against AI-generated voice cloning, election officials should establish practices in which requests are confirmed through secondary identity-verification challenges before sensitive information is shared, even internally and in real-time communications. One common low-tech best practice is to use private pass phrases that are known only to election officials and that change at specified intervals. A caller must then provide the current phrase before any sensitive information is relayed, allowing the person on the other end of the line to confirm the caller's authenticity. Separately, implementing technical controls on websites where the public can submit questions, such as public records requests, can help limit the number of AI-generated, inauthentic requests while preserving pathways for authentic human ones. Human authentication tools such as CAPTCHA, which can be integrated relatively easily into standard website operations, can also help differentiate legitimate human inquiries from automated ones. Although these tools are not perfect and can in some instances be defeated, they can help thwart adversaries looking to exploit paths of least resistance.
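To make the rotating pass-phrase idea concrete, here is a minimal sketch of one way to implement it, modeled on the widely used time-based one-time password approach (RFC 6238) and using only the Python standard library; the secret, word list, and one-hour interval are illustrative assumptions rather than a prescribed standard:

    # A minimal sketch of a time-rotating pass phrase shared among officials.
    # The secret, word list, and one-hour interval are illustrative assumptions.
    import hashlib
    import hmac
    import struct
    import time

    SHARED_SECRET = b"distribute-this-secret-out-of-band"
    WORD_LIST = ["harbor", "lantern", "granite", "meadow", "falcon", "ember"]
    ROTATION_SECONDS = 3600  # the phrase changes every hour

    def current_passphrase(now=None):
        """Derive the phrase for the current time window from the shared secret."""
        window = int((now if now is not None else time.time()) // ROTATION_SECONDS)
        digest = hmac.new(SHARED_SECRET, struct.pack(">Q", window), hashlib.sha256).digest()
        # Two digest bytes deterministically select two words from the list.
        return f"{WORD_LIST[digest[0] % len(WORD_LIST)]}-{WORD_LIST[digest[1] % len(WORD_LIST)]}"

    def verify_passphrase(candidate):
        """Compare in constant time to avoid leaking information."""
        return hmac.compare_digest(candidate, current_passphrase())

Because each party derives the phrase independently from the shared secret and the clock, nothing needs to be transmitted before a call, and a cloned voice alone cannot supply the correct phrase.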
Perhaps the most important action that state and local election officials can take to reduce the effect of foreign influence and disinformation operations, including those enhanced by generative AI, is to communicate transparently and consistently with the public, solidifying their role as authoritative voices and strengthening their relationships with local media, community leaders, and constituents well in advance of Election Day. In this context, the National Association of Secretaries of State's #TrustedInfo2024 initiative is an important public education effort to promote election officials as the trusted sources of election information and to drive voters directly to election officials' websites and social media pages so that they get the most accurate information. In support of these efforts, the Cybersecurity and Infrastructure Security Agency continues to maintain the Rumor vs. Reality website, launched several years ago, to ensure that voters have the most accurate information about election infrastructure security.
The key to mitigating potential AI-enhanced threats, however, is situational awareness and operational preparedness. Election offices, other state and local offices, vendors of voting equipment, and critical enablers like Internet service providers must all work together to ensure that they understand the risks and their roles in mitigating them, including how to get operations back up and running after an incident. To succeed, all parties involved in elections must continuously share information and train together, frequently using tools like tabletop exercises to rehearse contingency operations from established playbooks.
Generative AI is complicating the jobs of election offices at a time when many of them remain underresourced and understaffed. The high turnover of experienced election administration professionals across the country has only exacerbated the problem. Although this year will be a very challenging one for election officials, time and again they have risen to the occasion. The federal government will continue to support them with resources, information, and security services. Today, all 50 states and over 3,700 local jurisdictions and private-sector organizations are members of the Elections Infrastructure Information Sharing and Analysis Center, an initiative that provides 24-hour threat monitoring, election infrastructure cyberthreat analysis, and assistance with incident response. The Election Assistance Commission, a U.S. government agency, also offers resources, such as voting system security measures and best practices, that local election officials can follow to secure voting systems.
The private sector, including Internet service providers, cloud service providers, and cybersecurity firms, as well as election vendors and companies that provide voting equipment, also has a role to play. In previous election cycles, such vendors and service providers stepped up to provide local and state election offices with enhanced security measures and support services. But any company providing critical services to election offices should ask what more it can do to reduce the cyber, physical, and operational risks to election infrastructure going into the election season. In particular, generative AI companies should consider how they can support election officials, both by ensuring the overall secure design of their products and by developing methods for identifying AI-generated content. Last year, a number of leading AI companies made voluntary commitments with the White House to help advance the development of safe, secure, and transparent AI, including by making available technical mechanisms that let users know when content is AI generated. These tools, and others used to establish digital authenticity, such as digital watermarking, could be extremely helpful in the year ahead as election officials seek to distinguish AI-generated content from human-generated content, protect against tampering by demonstrating when content was altered after digital credentials were created, and help the public verify official content. Although versions of these capabilities exist today, companies should commit to continually improving their quality and security, because researchers have demonstrated that such products, too, can be vulnerable to exploitation.
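The core idea behind such digital authenticity credentials is public-key signing: the publisher signs content once at creation, and anyone holding the public key can later prove the content has not been altered since. Here is a minimal sketch of that principle, using the open-source Python cryptography library; it illustrates the mechanism only, not any particular vendor's watermarking product:

    # A minimal sketch of tamper-evident content credentials via digital
    # signatures. Requires the open-source "cryptography" package; the
    # content shown is an illustrative placeholder.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # held by the publishing office
    public_key = private_key.public_key()       # distributed to verifiers

    content = b"Unofficial results summary, precinct 12: ..."
    signature = private_key.sign(content)  # created once, at publication time

    def is_authentic(data, sig):
        """True only if the data is byte-for-byte what was originally signed."""
        try:
            public_key.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    print(is_authentic(content, signature))                 # True: intact
    print(is_authentic(content + b" [edited]", signature))  # False: altered

Production content-credential schemes build on this primitive by binding signatures to the media and its edit history, while watermarking takes a complementary approach by embedding signals in the media itself; the robustness of both is what the research on exploitation tests.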
It is also important for the media to be aware of the threat posed by the malicious use of AI in this election cycle. Journalists should help ensure the information they relay comes from trusted, official sources; when incorrect information is circulating, they should make accurate information available. Sophisticated foreign influence operations could quickly overwhelm local election offices and exceed their ability to respond. This is where the media can be key, amplifying election officials as trusted sources of information and helping ensure that accurate information is being shared with the public.
Voters can do their part, too. There is always the opportunity to serve as a poll worker or an election observer. And everyone can support their state and local election officials by being careful not to amplify or exacerbate the actions of nefarious actors who want to undermine the security and integrity of American democracy. Election security should be a matter not of politics or partisanship but of preserving the integrity of the country's most sacred democratic process. Americans must work together so that the malicious use of generative AI becomes just another entry in the long list of challenges that the American electoral process has overcome and can continue to overcome.