Tech companies sign an agreement to combat AI-generated election fraud

Major tech companies signed a pact on Friday to voluntarily take “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately mislead voters. Twelve other companies – including Elon Musk’s X – are also signing the agreement.

“Everyone recognizes that no technology company, no government, no civil society organization can single-handedly address the advent of this technology and its potentially nefarious uses,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

The agreement is largely symbolic, but focuses on increasingly realistic AI-generated images, audio and video “that deceptively mimic or alter the appearance, voice or actions of political candidates, election officials and other key stakeholders in democratic elections, or that provide false information to voters about when, where and how they can legally vote.”

The companies do not plan to ban or remove deepfakes. Instead, the agreement outlines methods they will use to try to detect and label deceptive AI content as it is created or distributed on their platforms. It notes that the companies will share best practices with each other and provide “prompt and proportionate responses” when that content begins to spread.

The vagueness of the commitments and lack of binding requirements likely helped win over a wide range of companies, but disappointed advocates who had sought stronger guarantees.

“The language is not as strong as you would expect,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due and recognize that the corporations have a vested interest in ensuring that their tools are not used to undermine free and fair elections. That said, it is voluntary and we will keep an eye on whether they follow through with this.”

Clegg said each company “rightly has its own content policy.”

“This is not an attempt to put a straitjacket on everyone,” he said. “And anyway, no one in the industry thinks you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole, finding everything you think could deceive someone.”

Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said that while such an agreement cannot be comprehensive, it “contains very impactful and positive elements.” She also urged fellow politicians to take responsibility not to use AI tools deceptively, warning that AI-fueled disinformation “could spell the end of democracy, not just in EU member states.”

The agreement, reached at the German city’s annual security meeting, comes as more than fifty countries are set to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.

Attempts at AI-generated election interference have already begun, such as when AI robocalls mimicking US President Joe Biden’s voice tried to discourage people from voting in the New Hampshire primary last month.

Just days before Slovakia’s elections last September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

The agreement calls on platforms to “pay attention to context and in particular to protect educational, documentary, artistic, satirical and political expression.”

It says the companies will focus on transparency for users about their policies and work to educate the public on how to avoid falling for AI fakes.

Most companies have previously said they are putting safeguards on their own generative AI tools that can manipulate images and audio, while also working to identify and label AI-generated content so that social media users know whether what they are seeing is real. But most of these proposed solutions have not yet rolled out, and the companies face pressure to do more.

That pressure is increasing in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies largely to govern themselves.

The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that does not cover audio deepfakes circulating on social media or in campaign ads.

Many social media companies have already introduced policies to discourage misleading messages about election processes – AI-generated or otherwise. Meta says it removes misinformation about “the dates, locations, times, and methods of voting, voter registration, or census participation,” as well as other false messages intended to disrupt someone’s civic participation.

Jeff Allen, co-founder of the Integrity Institute and former Facebook data scientist, said the agreement appears to be a “positive step” but that he would still like to see social media companies take other measures to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the agreement is “not enough” and that AI companies should withhold “technology such as hyper-realistic text-to-video generators” until there are “substantial and adequate precautions in place to help us avoid many potential problems.”

In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and Trend Micro; and Stability AI, known for creating the image generator Stable Diffusion.

Noticeably absent is another popular AI image generator, Midjourney. The San Francisco-based startup did not immediately respond to a request for comment Friday.

The inclusion of X was notable: Musk sharply curtailed content moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”

In a statement Friday, X CEO Linda Yaccarino said that “every citizen and business has a responsibility to ensure free and fair elections.”

“X is committed to playing its role, working with colleagues to combat AI threats while protecting freedom of expression and maximizing transparency,” she said.

__

The Associated Press receives support from several private foundations to improve its explanatory reporting on elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.