How artificial intelligence is confusing voters before elections

The rapid development of artificial intelligence (AI) is making the problem of deepfakes – fake images, videos and audio created with AI on the basis of real content – increasingly acute. Attackers use them to win the trust of bank customers and steal their money, to fabricate compromising material, and more. The use of deepfakes in election campaigns is also a cause for concern, and such cases are being observed all over the world.

Fake Biden and Suharto’s appeal

In late January, days before the New Hampshire primary, thousands of local Democrats received calls urging them not to take part in the vote. “This Tuesday’s vote will only help Republicans in their quest to elect Donald Trump again. Your vote will matter in November (when the US presidential election takes place.— “Kommersant”), and not this Tuesday,” the familiar voice of US President Joseph Biden told them over the phone.

Many believed the call: their phones displayed the number of a well-known New Hampshire Democrat. The caller, of course, was not the President of the United States. Experts at the cybersecurity company Pindrop concluded that the imitation of the president’s voice had been created using AI. The state attorney general’s office called the calls an illegal attempt to interfere with voting and launched an investigation. Prosecutors later said they believed the recordings had been produced by Texas-based Life Corp. and demanded that the company stop such illegal activity.

The story of the fake call from President Biden, experts say, is just the beginning.

Many important elections are planned around the world this year. According to the International Foundation for Electoral Systems, national elections will be held in about 70 countries that are home to roughly half the world’s population, including the UK, the USA, India, Pakistan and Russia, not to mention the elections to the European Parliament.

And the use of deepfakes is not limited to the United States. A video campaign address by Suharto, who led Indonesia for three decades until 1998 and died in 2008, has circulated in the country. Distributed across various social networks, it received about 5 million views.

In the video, “Suharto” calls for a vote for the candidate backed by Golkar, one of Indonesia’s largest parties and the one to which the former president himself belonged. The party did not field its own candidate in this election but endorsed presidential contender Prabowo Subianto, a close associate and former son-in-law of Suharto. The video was first posted by Erwin Aksa, deputy head of Golkar, who said it was meant to “remind us how important our votes are in the upcoming elections.”

The dictator’s posthumous address drew much criticism, but it appears to have been effective: Subianto’s popularity grew. The election was held on February 14, and according to preliminary data Subianto received about 60% of the vote. The final result will be published in March.

In another Indonesian election video, AI-generated children appeared in order to circumvent a rule prohibiting the use of children in political videos.

Bots and AI as a threat

In January, researchers at George Washington University in the US published a study suggesting that the use of AI to manipulate public opinion will intensify amid this year’s many election campaigns. “We can predict an increase in the daily activity of attackers using AI by mid-2024, just before elections in the United States and many other countries,” the study’s authors note.

Generative AI in chatbots, which have already been used in political disinformation campaigns, could prove dangerous, according to Kathleen Carley, an expert on computational methods in social research at Carnegie Mellon University.

“Generative AI by itself is no more dangerous than bots. It is bots combined with generative AI that are dangerous,” the expert believes. Such technologies make bot comments read much more like what real people say.

Experts note that AI tools make disinformation cheaper and more accessible. “Social media lowered the cost of spreading both information and misinformation. AI is lowering the cost of creating it,” says Zeve Sanderson, managing director of the Center for Social Media and Politics at New York University. “Now, if you are a foreign malicious actor or a participant in a small domestic campaign, you can use these technologies to create multimedia content that will be somewhat attractive.”

Last week, the British think tank Institute for Strategic Dialogue (ISD) published its own investigation into a supposedly China-linked campaign known as Spamouflage, part of which involves spreading AI-generated political images on social networks ahead of the US elections. These are not deepfakes in the strict sense but rather images such as a White House riddled with cracks, or Joe Biden and Donald Trump crossing flaming spears, with captions in the spirit of “Civil War” or “Collapse of American Democracy.”

As ISD analysts note, the campaign is not aimed at supporting either side, although it tends to portray Joe Biden more critically. Rather, it plays on the polarization of American society and on issues such as rising homelessness and the proliferation of firearms. According to ISD senior analyst Elise Thomas, it “portrays American democracy as a source of discord and weakness” and the US as “a sclerotic superpower in turmoil, unable to resolve its internal problems and act as a leader on the international stage.”

Social networks against deepfakes

Last week, during the Munich Security Conference, several major technology corporations – among them Google, Amazon, Microsoft, Meta (recognized as extremist and banned in the Russian Federation), OpenAI and X – signed an agreement to jointly combat the abuse of AI in elections. In particular, they agreed to develop tools against such uses of AI, to detect such content on their platforms and take measures against its spread, and to collaborate with researchers and civil society.

Sam Altman, CEO of ChatGPT developer OpenAI, admitted that he is wary of possible uses of his company’s technology in election campaigns. He noted that OpenAI is working on technologies to counter this and intends to monitor the situation closely.

Meta has also announced its own measures against such content – in particular, it will label images and videos suspected of being created with AI. These rules add to the anti-disinformation and fake-news measures that social networks adopted after the US elections of 2016 and 2020.

Critics, however, call the measures taken by Meta insufficient and inconsistent. Moreover, according to some reports, Meta dismissed many employees involved in election moderation and security during its large-scale layoffs in 2022. X has drawn even more criticism on this score: after billionaire Elon Musk bought the platform, it relaxed moderation standards and also laid off many staff.

According to social media and politics expert Katie Harbath, platforms are tired of trying to solve problems related to political content. Moreover, they lack a clear understanding of what the rules should be and what the penalties for breaking them are. Many at technology companies, she says, believe they should interfere in this area as little as possible.

Yana Rozhdestvenskaya
