Despite fears that artificial intelligence (AI) might affect the outcome of elections around the globe, the US technology giant Meta said it detected little such influence across its platforms this year.
That was partly because of defensive measures designed to prevent coordinated networks of accounts, or bots, from grabbing attention on Facebook, Instagram and Threads, Meta's president of global affairs, Nick Clegg, told reporters on Tuesday.
"I don't think the use of generative AI was a particularly effective tool for them to evade our trip wires," Clegg said of the actors behind coordinated disinformation campaigns.
In 2024, Meta says it ran several election operations centres around the world to monitor content issues, including during elections in the US, Bangladesh, Brazil, France, India, Indonesia, Mexico, Pakistan, South Africa, the UK and the European Union.
Most of the covert influence operations it has disrupted in recent years were carried out by actors from Russia, Iran and China, Clegg said, adding that Meta took down about 20 "covert influence operations" on its platforms this year.
Russia was the primary source of those operations, with 39 networks disrupted in total since 2017, followed by Iran with 31 and China with 11.
Overall, the volume of AI-generated misinformation was low, and Meta was able to quickly label or remove the content, Clegg said.
That was despite 2024 being the biggest election year ever, with some 2 billion people estimated to have gone to the polls around the world, he noted.
"People were understandably concerned about the potential impact that generative AI would have on elections in the course of this year," Clegg told journalists.
In a statement, he said that "any such impact was modest and limited in scope".
AI content, such as deepfake videos and audio of political candidates, was quickly exposed and failed to sway public opinion, he added.
In the month leading up to Election Day in the US, Meta said it rejected 590,000 requests to generate images of President Joe Biden, then-Republican candidate Donald Trump and his running mate, JD Vance, Vice President Kamala Harris and Governor Tim Walz.
In an article in The Conversation, titled The apocalypse that wasn't, Harvard academics Bruce Schneier and Nathan Sanders wrote: "There was AI-created misinformation and propaganda, though it was not as catastrophic as feared."
However, Clegg and others have warned that disinformation has moved to social media and messaging websites not owned by Meta, especially TikTok, where some studies have found evidence of fake AI-generated videos featuring politically related misinformation.
Public concerns
In a Pew survey of Americans earlier this year, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good.
In October, Biden rolled out new plans to harness AI for national security as the global race to innovate the technology accelerates.
Biden outlined the strategy in a first-ever AI-focused national security memorandum (NSM) on Thursday, calling for the government to stay at the forefront of "safe, secure and trustworthy" AI development.
Meta has itself been the target of public complaints on various fronts, caught between accusations of censorship and of failing to prevent online abuses.
Earlier this year, Human Rights Watch accused Meta of silencing pro-Palestine voices amid increased social media censorship since October 7.
Meta says its platforms were mostly used for positive purposes in 2024, to steer people to legitimate websites with information about candidates and how to vote.
While it said it allows people on its platforms to ask questions or raise concerns about election processes, "we do not allow claims or speculation about election-related corruption, irregularities, or bias when combined with a signal that content is threatening violence".
Clegg said the company was still feeling the pushback from its efforts to police its platforms during the COVID-19 pandemic, which resulted in some content being mistakenly removed.
"We feel we probably overdid it a bit," he said. "While we've been really focusing on reducing prevalence of bad content, I think we also want to redouble our efforts to improve the precision and accuracy with which we act on our rules."
Republican concerns
Some Republican lawmakers in the US have questioned what they say is censorship of certain viewpoints on social media. President-elect Donald Trump has been especially critical, accusing Meta's platforms of censoring conservative viewpoints.
In an August letter to the US House of Representatives Judiciary Committee, Meta CEO Mark Zuckerberg said he regretted some content take-downs the company made in response to pressure from the Biden administration.
In his news briefing, Clegg said Zuckerberg hoped to help shape President-elect Donald Trump's administration's tech policy, including on AI.
Clegg said he was not privy to whether Zuckerberg and Trump discussed the platform's content moderation policies when Zuckerberg was invited to Trump's Florida resort last week.
"Mark is very keen to play an active role in the debates that any administration needs to have about maintaining America's leadership in the technological sphere … and particularly the pivotal role that AI will play in that scenario," he said.