“AI carries dangers, but not at all the ones that everyone is talking about”


After the high-profile story of a student who wrote his thesis using ChatGPT, public debate continues about the dangers of artificial intelligence (AI) for education and life in general. Evgeny Sokolov, Head of the Department of Big Data and Information Retrieval at the Faculty of Computer Science, HSE University, explains how AI works and what we should really be wary of when working with it.

Today, many people talk about the dangers of creating strong artificial intelligence, about various risks and opportunities. In my opinion, the risk is not that AI will take over humanity, and the potential advantage is not that we are about to get an artificial intelligence capable of replacing the natural one.

Surely you have heard the story of how a student at RSUH wrote his thesis using ChatGPT and even successfully defended it. How much panic there was then: people said that was it, education might as well be canceled! Some piece of hardware can solve such complex problems, the horror! But let’s figure out exactly how AI works, using this story with the thesis as an example.

ChatGPT is based on language models. That is, you feed the algorithm some text, and it predicts which word would most logically come next. For example, a user asks the AI what two times two is, and the program answers in text.

You can think of language models as formulas into which we substitute text and get the next word. These formulas have parameters, special settings, roughly speaking. And if you choose them correctly, the model will work well.
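To make the idea concrete, here is a deliberately tiny sketch (nothing like ChatGPT’s actual architecture; the corpus and function name are invented for illustration): the only “parameters” are counts of which word most often followed the current one in the training text, and prediction is just a lookup.

```python
# Toy illustration of "predict the next word from the text so far".
# The "parameters" here are simply follow-up word counts per word;
# real language models learn billions of numeric parameters instead.
from collections import Counter, defaultdict

corpus = "two times two is four . two times three is six .".split()

counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def next_word(word):
    """Return the continuation most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(next_word("two"))  # "times": it followed "two" twice, "is" only once
```

Everything a model like this “knows” comes from counting patterns in training text; choosing better parameters (and vastly more of them) is what separates this toy from a real language model.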

Of course, this is just a foundation on top of which a great deal must be built for it to work. But, very roughly, this is how ChatGPT and other language models operate. They can hold a dialogue, give a brief explanation of a topic, solve mathematical problems. And note that these skills were not built in deliberately: the models were simply taught to continue text word by word.

Models like ChatGPT are extremely good at creating texts. They can generate a formal letter from our short informal description. Write an elegant letter of recommendation. Prepare a response to a complaint you need to process. “Comb” the text and pad it out, or, conversely, shorten it into a brief summary. We can use the same ChatGPT as an assistant.

But these AI models also have downsides. For example, they greatly simplify the generation of fake texts.

Using the algorithm, in a couple of seconds you can write a long encyclopedic text about a non-existent breed of dog and litter the entire Internet with such texts. And it is unclear how people would find out whether such dogs exist in nature or not. Maybe they live on another continent, in another country? Checking such facts is far from trivial; it is a lot of work. The Internet was already famous for being full of lies, and now lies are even easier to generate than before. That is why, in the era of artificial intelligence, it is vital to develop critical thinking, to learn ourselves and to teach students to pay attention to what they read: if we are not critical of the information we receive, we are very likely to simply get lost in this world.

Another important nuance: language models have no built-in understanding that the information they provide should correspond to reality and not contradict the fundamental laws of physics or mathematics. We, living people, can check it. We are able to establish the truth because we interact with the real world every second. Some statements are easy to verify, some require a lot of work, but at least it is feasible. Language models interact with the world only through the texts they saw during training, and those texts could well be incorrect and out of line with reality.

For this reason, we cannot yet trust AI to teach us, and we can hardly entrust it with responsible tasks at all.

Despite the shortcomings, the prospects for AI are considerable. For instance, a significant part of teachers’ work is checking students’ work. Even in the most rigorous areas of science, we prefer to give students creative tasks where they need to think, analyze, and propose an idea. And, of course, all of that currently has to be checked manually. Language models have every chance of helping us here: finding flaws in solutions and giving students detailed feedback. To train such models, we need to collect a large sample of student solutions along with grading results, but this is hardly an unsolvable task.
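As a rough sketch of the idea (invented data and names, stdlib only; a real system would train a language model on a large graded sample rather than use word overlap), one could imagine giving a new solution the feedback attached to the most similar previously graded one:

```python
# Hedged toy sketch: reuse teacher feedback from the most similar
# past solution. The graded sample below is invented for illustration.

def similarity(a, b):
    """Word-overlap (Jaccard) similarity between two solutions."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical training sample: (solution text, feedback given by a teacher).
graded_sample = [
    ("the limit is 0 because the numerator grows slower", "correct, well argued"),
    ("the limit is 0", "right answer, but justify why"),
    ("the limit does not exist", "recheck: the sequence is bounded and monotone"),
]

def feedback_for(solution):
    """Return the feedback attached to the closest past solution."""
    best = max(graded_sample, key=lambda item: similarity(solution, item[0]))
    return best[1]

print(feedback_for("the limit is 0 since terms shrink"))
```

The point of the sketch is only the data pipeline: collect (solution, feedback) pairs, then learn to map new solutions to useful feedback; the matching itself would be done by a trained model, not word overlap.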

Language models can also be useful for speeding up the analysis of information. New scientific articles appear in every field every day. A huge number of courses on the same topics are taught at different universities, and surely for each topic some explanation is more successful than the others. There is so much information that it is impossible to read and digest it all. If we teach language models to distinguish truth from fiction (and ensure they do not make factual mistakes), we will get an extremely powerful tool that would allow us, for example, to get within an hour a short summary of all the results in any field of science. I think this would let scientists reach a new speed of research.
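To make the idea of condensing a mass of text concrete, here is a toy extractive summarizer (invented sentences; real language models summarize abstractively and far better): it simply keeps the sentences whose words occur most frequently across the whole text.

```python
# Toy extractive summarization: score each sentence by the overall
# frequency of its words and keep the top-scoring ones.
from collections import Counter

text = (
    "Language models predict the next word. "
    "They are trained on large text collections. "
    "Cats are sometimes mentioned in papers. "
    "Trained models can summarize large text collections."
)
sentences = [s.strip() for s in text.split(".") if s.strip()]
freq = Counter(w.lower() for s in sentences for w in s.split())

def summarize(sentences, k=2):
    """Keep the k sentences whose words are most frequent overall."""
    scored = sorted(sentences,
                    key=lambda s: -sum(freq[w.lower()] for w in s.split()))
    return scored[:k]

print(summarize(sentences))  # drops the off-topic sentence about cats
```

Frequency counting is a crude stand-in; the hard part the text describes, telling truth from fiction while condensing, is exactly what this sketch cannot do and a trained model must.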

As for students and theses written with the help of AI, my opinion is this: let the authors use whatever they want, but let them bear full responsibility for everything written in the text. One overlooked paragraph, one careless request to the language model, and the thesis will assert that two times two equals five. Personally, I will lower the grade for that with full severity.

Moreover, what we demand from students is not the text itself; we demand a result: something new, a non-trivial idea that had never occurred to anyone before or that no one had managed to think through to the end. AI is not capable of this. In any case, no cases are yet known of a language model producing something new in science. And if a student manages to pull something new out of a language model, let him get the highest mark: he was the first to figure out how to do it, and we will only be glad. Just let him explain how he did it.

Let’s summarize. In my opinion, AI carries dangers, but not at all those that everyone is talking about now. It is unlikely that an artificial intelligence that takes over humanity will appear in the near future.

I personally see the main threat in AI misinforming people and presenting false information as truth. At the same time, AI opens up new possibilities for us.

Firstly, it can become our assistant in routine tasks. Secondly, AI generates a huge number of research questions, certainly enough to last us the rest of the century. We have managed to train language models successfully, but we do not understand at all why they work the way they do, what their limitations are, or how to deal with their shortcomings. And we have every chance of finding the answers to these questions.
