Scientists have revealed in which areas artificial intelligence is weaker than humans


The meeting was co-organized by Lyubov Strelnikova, editor-in-chief of a scientific journal, and Sergey Ivashko, head of the press service of the Faculty of Chemistry of Lomonosov Moscow State University. They invited leading experts in the field of artificial intelligence for a conversation: Stanislav Ashmanov, head of the neural network technologies laboratory at MIPT; Elena Bryzgalina, head of a department at the Faculty of Philosophy of Moscow State University; Natalya Efremova, assistant at the Department of Algorithmic Languages of the Faculty of Computational Mathematics and Cybernetics of Moscow State University; Natalya Lukashevich, professor of computational mathematics at Moscow State University; and Sergey Markov, managing director and head of the department of experimental machine learning systems in the general services division of one of the banks.

AI – what is it?

A kind of birthday for ChatGPT (short for Generative Pre-trained Transformer) became an occasion to talk about artificial intelligence in general. The concept is almost 70 years old, and it covers not only the well-known text generator but also smart vacuum cleaners, CNC machine tools, and much more.

Kismet AI robot at the MIT Museum.





Help from “MK”: The term “artificial intelligence” was introduced in 1956 by the American computer scientist John McCarthy, who later worked at Stanford.

– Can artificial intelligence be called a simple algorithm, a program, or is it something more complex? – With this question Lyubov Strelnikova opened the discussion.

According to Natalya Lukashevich, the first machine-translation systems were indeed built from explicit algorithms. But today simple translation is not enough: the programs we now deal with must not only translate but also independently derive patterns, analyze, and draw conclusions. So programmers create for them not just algorithms but so-called datasets: processed and structured arrays of data from which the program itself, guided by certain criteria, extracts what it needs.

ASIMO – intelligent robot





“That is, we give the program not just direct instructions for action but a more complex task: ‘if you see this, do it this way, and if you see that, do it differently,’” explains Lukashevich. – In other words, we nudge it toward the correct conclusion.

According to Sergey Markov, many scientific schools today agree that AI is a field of science and technology concerned with automating the solution of intellectual tasks.

“We are creating systems that can replace a person in such tasks,” says the AI developer. – The term “artificial intelligence” refers to an entire field of science and technology. ChatGPT, if we talk about it specifically, is just one program: a striking phenomenon thanks to which a great many people learned what generative language models are, transformers that are trained and then solve a wide range of intellectual tasks.

Sergey Markov





It turns out that neural network language modeling, which we perceive today as an innovation, emerged as a phenomenon 20 years ago. Its basis, according to Markov, is statistical linguistics, which grew out of the idea that the meaning of a word is tied to statistics and context.

“That is, the company a word keeps determines its meaning,” says Markov.
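The idea that the company a word keeps pins down its meaning can be illustrated with a toy co-occurrence count. The mini-corpus below is an assumption made up for illustration, not data from the discussion:

```python
from collections import Counter

# Toy corpus (assumed): "cat" and "dog" appear in similar company.
corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_counts(target, window=1):
    """Count the words that appear within `window` positions of `target`."""
    counts = Counter()
    for i, word in enumerate(corpus):
        if word != target:
            continue
        lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[corpus[j]] += 1
    return counts

# Both words are flanked by "the" and "sat", so their context profiles coincide:
print(context_counts("cat"))  # Counter({'the': 1, 'sat': 1})
print(context_counts("dog"))  # Counter({'the': 1, 'sat': 1})
```

Scaled up to huge corpora, such context profiles are what lets a statistical model treat "cat" and "dog" as related words without ever being told so.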

Will Natural Intelligence Lose to AI?

The journalists gathered in the Science Café were anxious to know whether our writing fraternity has a future in light of the rapid development of artificial intelligence.

Stanislav Ashmanov offered reassurance: a journalist has a thought before he finds the right word. Not so with ChatGPT…

– The founder of information theory, the American engineer Claude Shannon, experimented with statistical linguistics 70 years ago. For example, he took Shakespeare’s text and “walked” through it, recording how often different words appeared next to one another, explains Ashmanov. – <...> He obtained extensive statistics on recognizable words and structures, with which he tried to generate text. True, the result was complete nonsense: the breadth of context was insufficient.
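Shannon's procedure, as Ashmanov describes it, can be sketched in a few lines: record which words follow which, then walk the table at random. The corpus here is an assumed toy fragment, not Shannon's actual Shakespeare data:

```python
import random
from collections import defaultdict

# Assumed toy corpus standing in for the text Shannon "walked" through.
corpus = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
).split()

# Record, for each word, every word observed to follow it (bigram statistics).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Walk the bigram table, picking a random recorded successor each step."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = following.get(word)
        if not choices:  # dead end: this word was never followed by anything
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("to"))
```

Every adjacent pair in the output was seen in the corpus, yet the whole rarely makes sense: exactly the "complete nonsense" Ashmanov mentions, because a one-word window carries too little context.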

Stanislav Ashmanov





In the case of GPT, according to Ashmanov, the developers managed to expand this “window” of context: the number of words, or pieces of words, fed into the program became enormous. Now the program understands which word should come next and the texts come out more or less coherent, but the race to widen this “window” continues.

“Generating words is not, in general, what a journalist does,” the scientist sums up. – He doesn’t just juggle words; he has a thought process, an understanding of what he writes about. But any intellectual activity contains a certain amount of routine: patterned actions and operations. These are the candidates for replacement by artificial intelligence. The meaning of the work, though, is still supplied by the person.

Sergey Markov, in turn, suggested that artificial intelligence can still catch up with natural intelligence; it is only a matter of time.

“People and machines inhabit the same world,” says Markov. – We are part of a single physical universe. Our nervous system is likewise a structure made of atoms and molecules, and the neurons in our brain exchange electrochemical signals.

In general, the AI developer said, he is closest to the view of the philosopher Diderot, who observed back in the 18th century: “If I meet a parrot that will intelligently answer all my questions, then I will be obliged to recognize the presence of intelligence in it.” The English mathematician Alan Turing was guided by similar logic when, in 1950, he proposed his famous Turing test, only applied to a computer. It is based on an imitation game.

“There are two rooms,” says Markov. – In one a girl is hiding, in the other a guy. The presenter takes pieces of paper, writes a question on them and slips them under the doors. The guy’s goal is to pass himself off as the girl, and the girl’s goal is to pass herself off as the guy. Turing then argues: let’s take this procedure as a basis, replace the guy and the girl with a machine and a person, and see whether the machine can pass itself off as a person. If it can, we will be obliged to recognize the presence of intelligence in the system under test. 73 years have passed since then, and we see great progress. Systems based on GPT-4 can solve a variety of intellectual tasks, maintain a sane dialogue and handle applied problems, but on the whole the line has not been crossed… Last year some 200 scientific groups prepared more than 300 kinds of tests, collecting them into a huge battery of questionnaires, and they show that even the most advanced neural networks have not reached the level of indistinguishability from people. There are many problems that people still solve better. This is hardly surprising: our brain is an amazing “device” of 86 billion neurons and a quadrillion synapses… Simulating each synapse takes hundreds of thousands of binary elements, so overall the computing power of the human brain exceeds the capabilities of machines.

However, according to Markov, the computing power of computers is growing exponentially; they have already learned to beat humans at chess and at the logic game Go. Nature offers examples too: a bee’s nervous system, for instance, outperforms ours at finding an optimal route. What do such systems owe this to? A high degree of specialization: they are “tailored” to solving one specific problem. And the goal of creating AI, the scientist stresses, is only to expand the abilities of people!

Presenters and participant of the Science Cafe Natalya Efremova





Natalya Efremova summed up the discussion with the thought that it is too early for humanity to measure its strength against ChatGPT. It knows, she said, only what the Internet “knows,” and that is far less than what humanity as a whole knows and can do.

Scary future with AI

Yes, today artificial intelligence helps maintain patient records in clinics, acts as a therapist’s assistant by “offering” its “opinion” to the doctor, and analyzes large volumes of data for scientists. But people are still worried by the uncertainty. Elena Bryzgalina, an ethicist, outlined the main range of public concerns related to AI.

According to her, the question of endowing artificial intelligence with will has not yet been resolved from either a technical or a philosophical point of view. Until artificial intelligence has a will and its own goal-setting, it is a tool that must remain in our hands.

– There is an idea of endowing artificial intelligence with subjecthood, and this is the position of the countries currently leading the market in this area, says the philosopher. – But there is also the position of the Russian Federation, which holds that AI should never be considered a subject. Because if I am a subject, I myself set a goal, choose the way to achieve it and bear responsibility for it.

Elena Bryzgalina





Bryzgalina gave examples of when decisions can be entrusted to artificial intelligence and when they cannot.

– We all once bought a phone. How did we make a decision before choosing one model or another? We analyzed, compared prices, characteristics and ultimately chose the one that met all the requirements. This task, in my opinion, can be entrusted to AI,” says Bryzgalina. – What are we not ready to give to artificial intelligence? We have all recently been in the situation of making or not making a decision about vaccination. How many experts did you listen to? Was it easier for you because almost everyone had their own view of the problem? It turned out that when we were faced with a new disease, with a new vaccine, no one had sufficient information. And what did each of us do?

We did not make the decision to vaccinate as an algorithmic decision—we were guided by a category of value. What was valuable to me personally? I have a sick mother, and I was ready, roughly speaking, to get vaccinated at least every day, just so as not to harm her. Some, based on similar categories of value, were ready to contribute to herd immunity, others were simply afraid of a new vaccination. That is, everyone made decisions not at the level of certain knowledge, but mostly based on their personal opinion and ethical considerations.

Elena Bryzgalina also raised the issue of artificial intelligence and personality development.

– I once asked my students: what will you do when the Internet does everything for you? Do you know what they answered? “Sleep!” What we will do when AI frees up a great deal of free time is one of the most important questions.

The philosopher noted that society needs to pay more attention to the development of the legal component in the field of artificial intelligence. In the meantime, according to her, the law has not kept pace with the development of high technologies.

Overall, the level of uncertainty, of admiration and fear, surrounding the development of artificial intelligence technologies is very high. It can be compared to the public’s feelings in the 19th century about the development of railway transport. Here is what the Bavarian Royal Medical Council concluded about fast train travel in 1837: “The construction of railways would be detrimental to public health. It is quite obvious that fast movement (41 km per hour) must cause brain disease in passengers, a kind of violent insanity…” Let us hope that, with AI as with trains, humanity will find the safest path for itself.

