Artificial intelligence is on the warpath: social manipulation, deepfakes, voice spoofing

The other day I counted that about fifty “sensational” news items about artificial intelligence now appear in Russian alone. Here are some headlines from the latest batch: “Artificial intelligence has revealed the truth about owls,” “Artificial intelligence has learned to read human emotions,” “Artificial intelligence has learned to manipulate opinions,” “An AI-based prosthesis has been created.” Artificial intelligence is a hot topic: leading scientists, heads of state and corporations, and even the Pope regularly weigh in on it. While acknowledging the benefits of AI and its global impact on the economy, most opinion leaders worry that its uncontrolled development will lead to irreversible consequences for society. Is that really so?

All concerns about AI fall into two groups: practical and existential. Unfortunately, some of the practical concerns are already coming true. Artificial intelligence really is taking away jobs. Fields such as accounting and law are likely to be absorbed by AI almost entirely, and a significant number of positions will be cut in medicine and veterinary medicine. Leading business consultants predict that AI itself will create about 100 million jobs globally, but it will also absorb about 300 million.

Social manipulation by AI is no longer just a concern but a real and pressing problem. Deepfakes replace one person’s face with another’s in a video, and they are widely used for blackmail, disinformation, and political scams. Voice spoofing is gradually becoming a favorite and widespread tool of social engineering: who would refuse to help a “boss” who calls and asks for assistance with a security matter? Even a skeptic is unlikely to doubt what they are hearing. Voice falsification today is not only a widespread technology but also a very cheap one: a sample of the desired voice sells on the darknet for as little as 300 rubles. There are also simpler, but no less destructive, uses of AI for social manipulation. One of them is the AI-driven recommendation algorithms that fill social media feeds, which manipulators have learned to turn to their advantage. Western experts, for example, believe that Ferdinand Marcos Jr. won the youth vote in the Philippine presidential election precisely because the recommendation algorithms of a popular social network were gamed with the help of AI bots.

AI-powered social surveillance is already happening to some extent. Recently a video from a Chinese company went viral on Telegram channels: a camera continuously films the work area while AI identifies employees who get distracted from work, timing their breaks to the second. Comments on the video claimed that the minutes and seconds spent on breaks were deducted from workers’ salaries. We do not know for sure whether what the video shows is genuine; perhaps it is itself a deepfake and a piece of social manipulation, part of anti-Chinese propaganda. But such a system is technically feasible today. We are accustomed to classifying AI cameras with facial recognition as security systems, but where is the line between security and privacy?

Attacks on critical infrastructure using AI are a daily headache for cybersecurity professionals. Multi-layered targeted attacks that combine AI-based social engineering with distributed AI-driven network attacks are still rare and expensive, but security experts predict their use will multiply in the coming years. Fortunately, this problem is being addressed with AI itself, which is used to detect AI-driven attacks; the arms race in this area continues.

Next in line are AI-powered autonomous weapons. Alas, although the major military powers have repeatedly stated publicly that they will not develop such weapons, AI-enabled weapons already exist, and they are already used in armed conflicts.

Existential threats from AI have not yet materialized; they belong more to the realm of possibilities than of specific probabilities. Nevertheless, they receive special attention. Of greatest interest here is the ethics of artificial intelligence: how can AI make good decisions, especially complex ones with serious consequences, if it does not know what is “good” and what is “bad”? Humanity may become dependent on AI, losing its creativity, cognitive skills, and capacity for critical thinking. Indeed, even science fiction writers did not imagine that creativity would be the first thing to fall to AI; they considered it the last bastion of the human. Yet, as it turned out, the first things modern AI learned to do were to write music and poetry and to paint pictures in a given style: like Mozart, like Van Gogh, like Pushkin.

AI could also deepen economic inequality and fuel an AI arms race: corporations and countries with better AI will pull ever further away from those without it. Widespread adoption of AI could erode human connection, empathy, and social skills; already, 20% of the US population reportedly talks only to smart speakers and voice assistants during the day. And finally, the emergence of strong artificial intelligence, superior to human intelligence and unburdened by human ethics and morality, now worries not only science fiction writers. If AI development continues at the pace of the last five years, is the uprising of the machines already close at hand?..

As for the existential fears, however, it is too early to worry that they will come true. The very name “artificial intelligence” bears only an indirect relation to what is now called by that name: modern AI is neither truly artificial nor actually intelligent. Back in the 1980s, “artificial intelligence” meant systems based on logic programming. The idea behind “fifth generation” computers was that they would simply be told what to do and would decide for themselves how to do it, based on a system of rules programmed into them. Moreover, that generation of AI could explain its decisions, so the term “artificial intelligence” genuinely applied to it. Some progress was made in this direction; thanks to that research we now have systems for automatically proving mathematical theorems. But, alas, the goal of creating general rule-governed computing systems was not achieved, and the topic of AI was abandoned for a long time.
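The rule-based approach described above can be sketched in a few lines. This is a toy forward-chaining engine with invented facts and rules (real fifth-generation systems used logic programming languages such as Prolog), but it shows the property mentioned here: every derived conclusion comes with an explanation.

```python
# A toy forward-chaining rule engine in the spirit of 1980s expert systems.
# Facts and rules are invented for illustration. The system derives new
# conclusions from declared rules and can explain each derivation.

rules = [
    ({"has_feathers", "flies_at_night"}, "is_owl"),
    ({"is_owl"}, "hunts_mice"),
]

def infer(facts, rules):
    """Apply rules until no new facts appear; record why each fact was derived."""
    explanations = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = premises
                changed = True
    return facts, explanations

facts, why = infer({"has_feathers", "flies_at_night"}, rules)
for fact, premises in why.items():
    print(f"{fact} because {sorted(premises)}")
# prints: is_owl because ['flies_at_night', 'has_feathers']
#         hunts_mice because ['is_owl']
```

Unlike a deep learning model, a system like this can always answer the question “why?”: the chain of fired rules is the explanation.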

But modern AI, the heart of popular products like ChatGPT, DALL-E, YandexGPT, and Microsoft’s AI services, is something else entirely. In the words of Noam Chomsky, the founder of modern linguistics, it is “a cumbersome statistical pattern-matching machine, gobbling up hundreds of terabytes of data and extrapolating the most likely answer in a conversation or the most likely answer to a scientific question.” That is, it produces neither new meaning nor new knowledge. There is nothing truly “artificial” in it: it exists thanks to big data, the knowledge bases accumulated by humanity. You could even call modern AI a giant plagiarist. That is essentially what The New York Times thinks: it is suing Microsoft and OpenAI (the creators of ChatGPT), alleging that their chatbot shamelessly reproduced the newspaper’s texts on requested political topics, in whole paragraphs with minimal variations.
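Chomsky’s description, extrapolating the statistically most likely continuation, can be illustrated with a toy bigram model. The corpus below is invented, and real systems operate on subword tokens with neural networks rather than raw counts, but the underlying principle of pattern-matching over past data is the same:

```python
# A toy bigram "language model": pure statistical pattern-matching.
# It counts which word follows which in a training text, then always
# extrapolates the most likely continuation. No understanding is involved.
from collections import Counter, defaultdict

corpus = ("artificial intelligence reads emotions "
          "artificial intelligence manipulates opinions "
          "artificial intelligence reads emotions").split()

# follower_counts[w] tallies the words observed immediately after w.
follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def continue_text(word, steps):
    """Greedily append the statistically most likely next word."""
    out = [word]
    for _ in range(steps):
        followers = follower_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("artificial", 3))
# prints: artificial intelligence reads emotions
```

The model “answers” only by replaying the statistics of its training data; it can neither explain its choice nor say anything the corpus does not already contain.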

But the main problem with modern AI is that current deep learning models cannot explain how they arrive at their answers: they cannot say what exactly they are doing or why. This is why modern AI can hardly be called “intelligence,” and perhaps this is where the essence of the existential fears lies. First, how can you trust something unknowable? Second, we already know how dangerous the knowledge accumulated by humanity can be. Fortunately, the approach behind modern AI, which should long ago have been given a more precise name, is unlikely ever to lead to strong artificial intelligence, so there is definitely no need to fear a “rebellion of the machines” in the coming decades. Does that make things easier for those whose jobs AI has taken, or for those who have no one to talk to but their smart speaker? The fault hardly lies with AI itself. Jobs were taken away by computers, automation, robots, and machinery before it. Alienation and the breakdown of social ties in developed countries were growing already; blaming AI for this is as foolish as blaming computer games or social networks. Social manipulation used to be carried out by newspapers and television…

So is it worth following the Luddites and destroying the data centers? Hardly; this is simply progress. Modern AI is too dumb to be seriously feared.
