“Godfathers” of artificial intelligence called for introducing responsibility for its harm


Powerful artificial intelligence systems threaten social stability – and artificial intelligence companies must be held accountable for the harm caused by their products, a group of senior experts, including two of the technology’s godfathers, has warned.

Tuesday’s remarks come as international politicians, tech companies, scientists and civil society figures prepare to gather at Bletchley Park next week for a summit on artificial intelligence safety, The Guardian writes.

A co-author of the policy proposals, put forward by 23 experts, argued that it is “completely foolhardy” to develop increasingly powerful artificial intelligence systems before understanding how to make them safe.

“It’s time to take advanced artificial intelligence systems seriously,” said Stuart Russell, a professor of computer science at the University of California, Berkeley. “These are not toys. Expanding their capabilities before we understand how to make them safe is completely reckless.”

The scientist added: “There are more rules for sandwich shops than for artificial intelligence companies.”

The document urged governments to adopt a range of policies, including:

- Governments should dedicate a third of their AI research and development funding, and companies a third of their AI R&D resources, to the safe and ethical use of systems.
- Independent auditors should be given access to artificial intelligence laboratories.
- A licensing system should be established for building advanced models.
- Artificial intelligence companies must adopt specific safety measures if dangerous capabilities are found in their models.
- Tech companies should be held accountable for foreseeable and preventable harm caused by their artificial intelligence systems.

Other co-authors of the paper include Geoffrey Hinton and Yoshua Bengio, two of the three “Godfathers of AI” who received the ACM Turing award—the equivalent of the Nobel Prize in computer science—in 2018 for their work on AI.

Both are among the 100 guests invited to the summit. Hinton resigned from Google this year to sound a warning about what he called the “existential risk” posed by digital intelligence, while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in signing a letter in March calling for a moratorium on giant experiments with artificial intelligence.

Other co-authors of the proposals include Yuval Noah Harari, the bestselling author of Sapiens; Daniel Kahneman, a Nobel Prize-winning economist; Sheila McIlraith, a professor of artificial intelligence at the University of Toronto; and Andrew Yao, an award-winning Chinese computer scientist.

The authors warn that poorly designed artificial intelligence systems threaten to “reinforce social injustice, undermine our professions, undermine social stability, enable large-scale criminal or terrorist activity, and weaken our shared understanding of reality that is fundamental to society.”

Scientists have warned that existing artificial intelligence systems are already showing signs of alarming capabilities that point the way to the emergence of autonomous systems capable of planning, pursuing goals and “acting in the world.” The GPT-4 artificial intelligence model, which powers the ChatGPT tool developed by US firm OpenAI, enables the design and execution of chemistry experiments, web browsing and the use of software tools, including other artificial intelligence models, experts say.

“If we create highly advanced autonomous artificial intelligence, we risk creating systems that autonomously pursue unwanted goals,” the authors write, adding that “we may not be able to keep them under control.”

Other policy recommendations contained in the document include: mandatory reporting of incidents where models exhibit disturbing behavior; taking measures to prevent the self-reproduction of dangerous models; and giving regulators the power to suspend the development of artificial intelligence models that exhibit dangerous behavior.

Next week’s security summit will focus on existential threats posed by artificial intelligence, such as facilitating the development of new biological weapons and escaping human control, The Guardian notes. The UK government is working with others on a statement that is expected to highlight the scale of the threat posed by “frontier AI,” a term for advanced systems. However, while the summit will outline the risks posed by artificial intelligence and measures to combat the threat, it is not expected to formally create a global regulator.

Some artificial intelligence experts say fears of an existential threat to humans are overblown. Yann LeCun, who shared the 2018 Turing Award with Bengio and Hinton, told the Financial Times that the idea that AI could destroy humans is “absurd.”

However, the authors of the policy paper argue that if advanced autonomous artificial intelligence systems were available now, the world would not know how to make them safe or conduct safety tests on them. “Even if we did, most countries lack the institutions to prevent abuse and enforce safe practices,” they added.
