Why humanity must be smarter than artificial intelligence

For several months now I have caught myself scrolling past photos on social networks without even looking at them (unless they were taken by people I know well). I hate the thought that these photos might include images generated by artificial intelligence. And either way, it is unpleasant. If I recognize the work of AI, it brings no joy: a machine drew it, so what? If I fail to recognize it, it is even worse: I still cannot admire the machine’s work, but I can certainly fall for the deception. And all those “exciting challenges” that social networks and some media outlets love to publish (“can you tell which photo was taken by a person and which was generated by artificial intelligence?”) in reality only collect the public’s reactions in order to make AI images even more convincing.

I rejected the potential “creativity” of AI instinctively, as soon as I began to sense the possible danger. But after the terrorist attack at Crocus City Hall, the danger became plainly visible. First a video appeared featuring the Secretary of the National Security and Defense Council of Ukraine, Alexey Danilov, in which he indirectly approved of the terrorist attack in Moscow and promised to “visit more often”; it was presumed to be a deepfake. Then signs of fakery were found in photos of the alleged terrorists, and this information “casts doubt on the circulated version of the involvement of a banned organization in the terrorist attack.”

I would like to draw special attention to this: fakes and deepfakes can “cast doubt,” but they can never serve as evidence of anything. Even if it is irrefutably proven that the Danilov video is a deepfake, this by no means proves Ukraine’s non-involvement in the attack. Fake images generated by artificial intelligence never prove anything. They are needed for something else entirely: precisely to “cast doubt.” I draw your attention once again, dear readers, to the fact that all the hype around AI’s powerful ability to generate content is in reality enthusiasm for filling the information space with an empty husk that carries no meaning and is, on the contrary, designed to destroy meaning.

Yes, people can and sometimes do stage photographs and videos that contain lies. The difference with artificial intelligence is this: in principle, it is incapable of producing anything but lies. This is its fundamental property, stemming from the fact that AI is not a person but something pretending to be one. Not only a staged image but any image “creatively” generated by AI is a lie (the more plausible, the more deceitful), and producing such lies has become incredibly easy, easier than ever before. In a sense, artificial intelligence is the father of lies. I will not develop this theme here, with its biblical overtones, but the scale of the phenomenon should be clear.

Why is it so important to remember this? Because everything will now be done to make it happen more often. The information space will be flooded with fakes and deepfakes, exposés will immediately follow, then debunking specialists will appear, and all of this will grow into an entire industry of refutation that nevertheless refutes nothing.

Natural questions arise; they cannot help but arise. Why was all this launched in the first place? Why did Microsoft and OpenAI open ChatGPT to the widest public? Why did they develop it at all? If we look into the essence of these questions, we will find nothing there except the desire to make money under the guise of an intention to make humanity happy. Just three years ago humanity went through the same thing, and the EU is still disposing of hundreds of millions of doses of unclaimed vaccine, purchased with taxpayers’ money.

But the topic of artificial intelligence is far more long-term and serious. Unlike the Covid vaccine, which either worked or did not, AI actually works. And we will be told many promising things about it.

We will be told that it will lead us “from an economy of scarcity to an economy of abundance.” We will be told that it will “become a constant companion to lonely elderly people,” help us learn foreign languages, generate ideas, write texts, explain complex concepts, improve our communication skills and assess our psychological state. In the United States, financial companies are already using artificial intelligence to assess whether a client (taking out a mortgage loan, for example) is providing true or false information about himself. Thus human well-being is made dependent on artificial intelligence.

But who is to judge whether artificial intelligence itself is lying? Who will assess whether what it offers (or rather, what it prescribes) to people is good? After all, it is people who should assess this, is it not?

These are not idle questions at all but a whole complex of ethical problems, whose existence is acknowledged by the developers of artificial intelligence themselves, for example OpenAI (and Google). They call it the “alignment problem.” And although they are proud that the best minds of our time are working on it, the truth is that it has not been solved. It is not even known whether it can be solved.

For some reason, in earlier, wilder times, when artificial intelligence and computers in general did not exist, it was considered normal and correct to build a huge margin of safety into every design. That is why Soviet bridges designed for six tons now carry trucks weighing sixty; this is not at all good for the bridges, but for now they hold.

But why now, amid the intensive promotion of artificial intelligence (and in the Russian Federation it is the state doing the promoting), is no one thinking about a safety margin? About what can happen not in the best case, which everyone hopes for, but in the worst case, or simply a bad one?

There is a notion that Russia is governed by the Russian avos, the reckless “maybe,” whereas in the West they are smart and prudent and have surely built in safety margins, so all we need to do is copy them.

No, they have not. Although the UN drafts declarations and the European Union even passes laws (which will not come into force before 2026, and which no one knows how to enforce), literally all of humanity is now hanging on that same “maybe.” “Perhaps we will taste all the advantages of artificial intelligence and be spared its disadvantages.” “Let us enjoy the progress and hope for the best.”

What if we do not rush to enjoy, and do not rush to hope? Why not first admit that there is not a single reason why we cannot live without artificial intelligence? There are no such reasons. Let our life without it become somewhat more boring, less convenient, poorer, in some ways even more dangerous; we can still live without it. Without a stream of lies we will become calmer, without intrusive helpers more responsible, without constant promises of abundance more sober. And then we will be able to create our own, human margin of safety.
