Putin likened artificial intelligence to nuclear weapons: will humanity have time to realize the threat?

So, it has happened: Russian President Vladimir Putin, in a speech to a global audience, has likened artificial intelligence to nuclear weapons, stating that when humanity feels a “threat to its existence” from AI, the time will come to negotiate (“negotiate” is the Russian president’s favorite word from his conversation with Tucker Carlson, but that is another story).

The president made a similar statement earlier, on December 14, 2023: he compared artificial intelligence to nuclear weapons and, on the one hand, called for developing this technology in every possible way, while on the other hand again called for negotiating so that the damage would not become unacceptable. I admit: at that moment I grew afraid of where the world was heading at full speed. All the more so because Putin himself said: “how this will end is unknown.”

Let’s look at what is already known.

First, did you know that artificial intelligence has already been taught to deceive the people who oversee it? Information about this also appeared last December: computer scientists from Apollo Research trained an AI to act as a trader for a fictitious financial institution and hinted that it would make more profit if it did not act entirely legally. After this, the AI not only chose to trade illegally in 75% of cases but, more significantly, began to deny doing so when questioned by its managers.

Second, artificial intelligence is already being used to vastly expand surveillance of people and to falsify identities (while failing to keep their data secure). More recently, AI has learned to copy human handwriting. This is supposed to be a useful feature that will help decipher doctors’ illegible handwriting. Do you believe that will be its main application?

Third, this January, International Monetary Fund Managing Director Kristalina Georgieva said that “almost 40% of global employment is affected by AI.” In plain terms, this means people will lose their jobs, and it may hit those engaged in mental work even harder. In the West, AI has been used to write sermons, and the result was better than that of a not especially educated priest. And Rie Kudan, winner of Japan’s most prestigious literary prize (the Akutagawa Prize), admitted that part of her book was written by artificial intelligence, adding that she plans to continue in the same spirit in order to “develop her creativity.”

Or, just the other day, Ben Eisenpress, chief operating officer of the Future of Life Institute, sketched out five scenarios for how AI could threaten humanity. First: people create an AI so powerful that they cannot control it. Second: AI accelerates the development of biological weapons, which fall into the wrong hands. Third: someone deliberately creates powerful cyberweapons that destroy the world’s information systems. Fourth: decisions on the use of nuclear weapons are delegated to AI. Fifth: we lose control over AI gradually, without noticing it. And Rosatom expert Ruslan Yunusov said on February 8 that AI may develop self-awareness within the next decade.

And you know what is most disturbing? All of the above, though clearly spelled out, is not perceived as a threat serious enough to force people to “negotiate” today, if not yesterday or the day before.

At a grassroots level, I see one explanation for this: people prefer to view AI as “just another technology that makes life easier,” something like a smart vacuum cleaner. Very convenient! But artificial intelligence has already stepped beyond the bounds of a convenience technology and is rapidly moving further.

Meanwhile, Russian officials look forward to using AI in drafting legal acts. Meanwhile, Sber is preparing a “program for the AI transformation of regions.” And the use of artificial intelligence technologies by Russian enterprises will become mandatory for receiving state subsidies (news from last November).

Most surprising of all are the creative humanities scholars who enthusiastically (to the point, in my opinion, of idiocy) insist that nothing threatens creativity or humanities education, because using artificial intelligence well still requires imagination and erudition. They should recall how the spread of calculators in schools simply killed the ability to do mental arithmetic.

Artificial intelligence is not a nuclear weapon. With nuclear weapons, people knew from the start that they were extremely dangerous and had to be handled with the utmost caution. With artificial intelligence, they do not even understand this.

The parallel with nuclear weapons is the closest and at least somewhat adequate one. But it is inaccurate, even very far off. If only because nuclear weapons were from the start understood as something terrible and therefore predominantly bad. People discussing nuclear weapons did not need to agree on basic concepts, and even so, they managed to agree only on “containment and non-proliferation.” It is fear of catastrophe, not agreement, that does the deterring.

Not so with artificial intelligence. Here everything is different: you say it is bad and dangerous, but I say it is excellent and means progress. You say we should ban it as best we can, but I say progress is unstoppable, long live progress. You say artificial intelligence could be the end of humanity, but I say it will make humanity immortal. You cling to the old Earth, while I, by the way, am preparing to colonize distant planets with the help of AI…

By the time nuclear weapons appeared, humanity already knew very well what weapons were.

At the moment of artificial intelligence’s arrival, humanity knows little about intelligence, and from the little it does know, it believes that “intelligence is good.”

Humanity is not used to suffering from intellect. Wars and crimes of all kinds are usually presented not as the fruit of intelligence but as the result of its absence. Yet that is merely one point of view, and it is no accident that war is considered one of the engines of progress (recall how many technologies were first invented for war and only later flowed into peaceful life).

I am afraid that before humanity even realizes the need to regulate artificial intelligence very strictly, it will have not merely to “feel threatened” but to live through quite specific large-scale disasters. And I see the main problem not even in that, but in the fact that humanity will still need to retain enough intelligence to grasp the connection between the disaster and its cause, and to retain it at the very time when humanity, “thanks to AI,” is rapidly growing stupid.
