OpenAI has created another team to ensure the safety of AI models

OpenAI has created a new team responsible for preventing the release of artificial intelligence models that pose a potential threat to humanity, whether through financial risks or risks to human health.

The Preparedness team will be headed by Aleksander Madry, a professor at the Massachusetts Institute of Technology. The team was formed back in October, but OpenAI officially announced it only today.

This is the third OpenAI team responsible for ensuring that the company’s AI models meet safety standards. The Safety Systems team focuses on preventing misuse of existing models and products such as ChatGPT, while the Superalignment team is developing safety standards for superintelligent models that, OpenAI says on its website, should appear “in the distant future.”

In an interview with Bloomberg, Mr. Madry said his team will send monthly reports analyzing the company’s AI models to a new internal safety advisory group. That group, in turn, will share findings from these reports with OpenAI management and directly with CEO Sam Altman.

Why Russia is trying to block OpenAI’s search service is covered in the article “And he leaves her for a bot.”

Kirill Sarkhanyants
