The national project on the data economy may include rules on the regulation of artificial intelligence

The national project on the data economy, which the government must approve by July 2024, may include provisions on the regulation of artificial intelligence (AI) and large language models. Related issues, in particular liability for mistakes made by AI, were discussed by an expert group at the Digital Economy ANO. Experts and market participants confirm that regulatory gaps are holding back the widespread adoption of AI, but an attempt at “regulation from above” could scare off developers.

Participants in an expert group at the Digital Economy ANO (ANO TsE) discussed including provisions on the implementation of AI in industries in the national project on the data economy through 2030, according to a presentation seen by Kommersant. Among the proposals is the creation of a “single AI regulator” in Russia; the powers of such a regulator have not been disclosed. The main part of the presentation is devoted to large language models (LLMs, which are trained on text corpora and used in neural network services). The authors proposed refining regulation in Russia, in particular clarifying who bears responsibility for errors resulting from the use of AI.

Vladimir Putin presented the idea of a national project on the data economy in July; an official order published in September requires the government to approve the national project by July 1, 2024. Russia already has a federal project on AI (part of the Digital Economy national program, which ends in 2024); most of its indicators concern the implementation of AI in industry and in the work of ministries and agencies, as well as scientific issues.

ANO TsE said it is in contact with business on the topic of the data economy, but that “it is premature to talk about which proposals will be taken into consideration.” Andrey Neznamov, head of the center for AI regulation at Sberbank, noted that “it is optimal to pilot any regulation under experimental legal regimes, taking into account AI ethics.” Yandex, which develops generative neural networks including LLMs, and the Ministry of Digital Development did not respond to Kommersant’s questions.

For now, when wrong decisions are made as a result of using AI, it is not always clear who is responsible: the customer who commissioned the model, its developer, or the operator (for example, the author of the prompt), confirmed Gleb Oblomsky, product director at Just AI. The lack of regulation, he said, creates a risk of misuse of AI and of violations of users’ rights and interests. But if a neural network is treated as a tool, then “the rights to, and responsibility for, the result it created lie with the person who wrote the program or the prompt,” says Larisa Malkova, managing director of the Data and Applied AI practice at Axenix.

At the same time, a Kommersant source at a large IT company believes that at the current stage of AI development in Russia, “any attempt at regulation can create legal barriers.” According to him, there is a risk that by default only developers will be held accountable, which could hinder innovation.

The most pressing regulatory problem for AI development in Russia is the virtual absence of a legal status for working with anonymized data, the Big Data Association (whose members include Yandex, VK, Rostelecom, and MegaFon) told Kommersant. The association clarified that the concept of anonymized data is not defined in law, and that the anonymization methods recommended by Roskomnadzor are not technically anonymization at all: “These are pseudonymization methods, and they are unsafe from a technological point of view.”

Participants in the discussion at ANO TsE, according to the presentation, also proposed lifting restrictions on organizations currently prohibited from using commercial cloud services, so that they can train their own AI models. But those restrictions (which cover federal ministries and agencies, law enforcement, and defense-industry enterprises) apply only to public clouds, clarified Oxygen CEO Pavel Kulakov: “Nothing prevents them from ordering a private cloud deployment, with the data kept inside a protected perimeter. But this often contradicts the requirements that such organizations set for themselves.”

Mr. Oblomsky added that the absence of regulation can sometimes be an advantage, since it allows developers to experiment: “Other countries are in no hurry to regulate either, because they are afraid of losing the AI race, although they are actively discussing these issues with the market.” In Russia, says a Kommersant source in big business, the market has no demand for a revision of regulation: “there are specific issues that are resolved in practice.”

Yuri Litvinenko, Nikita Korolev
