Tech Accord to Lessen Deceptive AI Use in 2024 Elections

As the world faces a record-breaking election year, tech companies have agreed to cooperate in cracking down on malicious AI use.

In a move to safeguard the integrity of global elections, major technology companies have united to combat the deceptive use of artificial intelligence (AI). This accord was signed at the Munich Security Conference, with notable participants including Adobe, Google, Microsoft, OpenAI, Snap Inc., and Meta.

The Threat of Deceptive AI

AI has the potential to be a powerful tool for spreading false information and disrupting essential election systems. Malicious actors could, for example, use AI to generate and spread fabricated, disparaging content about election candidates.

The accord responds in part to recent incidents, such as AI-generated robocalls that mimicked President Joe Biden's voice to discourage voters in the New Hampshire primary. The FCC has since stated that AI-produced audio clips in robocalls are unlawful. However, audio deepfakes on social media and within campaign advertisements remain largely unregulated.

This accord seeks to manage the risks arising from deceptive AI election content created through publicly accessible, large-scale platforms or open foundation models.

The Accord’s Seven Principal Goals

The accord sets out seven principal goals:

  1. Prevention: Researching, investing in, and deploying precautions to limit risks of deliberately deceptive AI Election Content being generated.

  2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible.

  3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms.

  4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content.

  5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content.

  6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Content.

  7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content.

Tech Companies’ Response

At the Munich conference, Kent Walker, President of Global Affairs at Google, highlighted that democracy rests on safe and secure elections. "Google has been supporting election integrity for years, and today's accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust," Walker said. "We can't let digital abuse threaten AI's generational opportunity to improve our economies, create new jobs, and drive progress in health and science."

"We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society," the signatories said. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

The Future of Elections

The year 2024 will bring more elections to more people than any year in history, with more than 40 countries and over four billion people eligible to choose their leaders and representatives at the ballot box. This accord represents a crucial step toward advancing election integrity, increasing societal resilience, and establishing trustworthy technology practices.
