Experts warn of the danger of the destruction of mankind by artificial intelligence


Experts are calling for regulation to prevent artificial intelligence from destroying humanity. The OpenAI team behind the ChatGPT chatbot says the equivalent of a nuclear watchdog is needed to guard against the risks posed by “superintelligent” AI.

The leaders of OpenAI, the developer of ChatGPT, have called for the regulation of “superintelligent” artificial intelligence (AI), arguing that an International Atomic Energy Agency equivalent is needed to protect humanity from the risk of accidentally creating something that could destroy it.

According to The Guardian, in a short note published on the company’s website, co-founders Greg Brockman and Ilya Sutskever and chief executive Sam Altman call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” that such systems may pose.

“It is quite possible that within the next 10 years, artificial intelligence systems will exceed expert skill levels in most fields and carry out as much productive activity as one of today’s largest corporations,” they write.

“In terms of both potential advantages and disadvantages, superintelligence will be more powerful than other technologies that humanity has had to deal with in the past,” the authors say. “We can have a much better future, but we must manage risk to get there. Given the possibility of existential risk, we cannot simply react.”

In the short term, the authors call for “some degree of coordination” between companies working at the frontier of AI research, to ensure that ever more powerful models are integrated smoothly with society while prioritizing safety. Such coordination could come, for example, through a government-led project or through a collective agreement to limit the growth of artificial intelligence capabilities.

Researchers have been warning about the potential risks associated with superintelligence for decades, but as the development of artificial intelligence has gained momentum, these risks have become more specific.

The US-based Center for AI Safety (CAIS), which works to “reduce societal-scale risks from artificial intelligence,” describes eight categories of “catastrophic” and “existential” risk that AI development could pose.

While some worry that a powerful artificial intelligence will wipe out humanity completely, whether by accident or by design, CAIS describes other, more insidious harms.

A world in which more and more labor is voluntarily handed over to AI systems could leave humanity “losing its ability to self-govern and becoming completely dependent on machines,” a scenario described as “enfeeblement.” And a small group of people controlling powerful systems could “make AI a centralizing force,” leading to “value lock-in,” an eternal caste system between the ruled and the rulers.

OpenAI executives say these risks mean that “people around the world should democratically define boundaries and defaults for AI systems,” but admit that “we don’t know how to develop such a mechanism yet.” However, they say that continuing to develop powerful systems is worth the risk.

“We believe that this will lead to a much better world than we can imagine today (we are already seeing the first examples of this in areas such as education, creative work and personal productivity),” the experts write.

They warn that stopping development could also be dangerous. “Because the upsides are so enormous, the cost of building it falls every year, the number of actors building it is growing rapidly, and it is inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”

