Neural networks are sent on vacation – Kommersant FM – Kommersant

IT industry leaders are calling for a halt to the development of advanced neural networks. More than a thousand professionals and entrepreneurs, including Elon Musk and Steve Wozniak, have signed an open letter published on the website of the Future of Life Institute, an NGO that studies potential existential threats to humanity. According to the letter's authors, modern artificial intelligence systems are on the verge of surpassing humans, and society needs at least six months to develop safety protocols for working with these tools. What are the dangers of neural networks? And are opinion leaders exaggerating? Liliya Galyavieva looked into it.

Neural networks are developing too rapidly, and that scares some people. Within a few months of its launch, Midjourney, for example, learned to create images that are indistinguishable from real photographs. Recent examples include the viral photo of the Pope in a down jacket and a series of pictures of Donald Trump being arrested. Some social media users believed the images, and Twitter had to label the posts as fake. Moreover, the initiator of such a hoax can be either the person who gave the neural network the task or the program itself, and that is a problem, explains artificial intelligence expert Roman Dushkin:

“These systems can lie, that is, they invent an answer so convincingly that a person, especially an average person without critical thinking, will be 100% convinced the neural network's answer is correct when talking to it. And it may be wrong. Most likely, the fears stem from the prospect of AI using tools of manipulation.”

But promoting ideas and misleading people are not all that neural networks are capable of. Not every model obeys the laws of robotics formulated by Isaac Asimov, the first of which states that a machine may not harm a human being. At least indirectly, this is already happening, notes Sergei Kuznetsov, editor-in-chief of itzine.ru and author of the ForGeeks podcast:

“For example, a robot can help its owner harm another person and explain how to do it properly. We all remember the joke with Siri, when she was asked: ‘Tell me where to hide a corpse.’ She answered, and this was later removed from the operating system, but ethical norms and laws, unfortunately, neural networks do not yet know.”

And the scenario alarmists fear most: modern neural networks will hasten the moment of technological singularity, when people lose control over their creations. For now, however, that is, to put it mildly, a long way off, says Igor Bogachev, managing partner of the Ifellow group of companies. So the expert sees no particular need to restrain the development of the technology:

“Markets where artificial intelligence poses some kind of danger to humans are already regulated, for example, transport, medicine or genomics. And in segments where a neural network makes it possible, say, to serve people better, where there are many routine operations, this should only be welcomed. The development of technology and applied research cannot be stopped, because this is the progress of mankind.”

It may be, however, that the proponents of suspending the technology's development are pursuing not humanistic but entirely self-interested goals. Sergey Kuznetsov of itzine.ru does not rule out this version:

“Steve Wozniak and the other signatories of this document are not founders of companies working on neural networks. At the same time, they want to pause the development of AI in order to have time to jump onto the departing train themselves. Development happens in small startups; cool things are created there that can displace well-known technology companies, because they move much faster, partly because it is much easier for them to make new decisions and test new methods than it is for SpaceX.”

Whether the letter from the IT industry's representatives will have any effect is still hard to say. The creators of neural networks, meanwhile, continue to improve the tool. Insiders report that OpenAI may release a new, fifth version of GPT by the end of 2023, even though the previous one, the fourth, came out only two weeks ago.