Experts analyzed the first experience of state regulation of AI


In the field of artificial intelligence (AI), 2023 was not only a year of surging interest in GPT-style models but also the year of the first attempts at legislative regulation of the technology. The Institute for Statistical Studies and Economics of Knowledge (ISSEK) at the National Research University Higher School of Economics compared legal acts adopted by the three largest players in this market: the USA, the EU, and China. The analysis suggests that Europe focuses on grading AI systems by risk, the USA on standards and data security, and China on strict statist rules governing AI-generated content.

ISSEK experts compared the first legislative acts on artificial intelligence that have already been adopted or are at an advanced stage of approval: in the European Union, the AI Act; in China, the "Temporary Measures for the Management of Generative Artificial Intelligence Systems"; and in the USA, the Executive Order on Safe, Secure, and Trustworthy AI.

The provisions of the European act were agreed in December 2023 by the European Parliament and the Council of the EU, and it now requires approval by EU member states. The law is expected to come into force no earlier than 2025. It is based on grading AI systems by risk level. Most of the regulatory measures it introduces target high-risk systems: their developers will be required to register their products in a pan-European database before placing them on the market or putting them into operation. Developers of limited-risk systems must ensure transparency of their processes, while systems with low or minimal risk face no restrictions. At the same time, the European act explicitly prohibits "malicious AI practices." Large fines are set for violations: €35 million, or 7% of a company's annual turnover, for violating the ban on prohibited uses of AI; €15 million, or 3% of turnover, for breaching obligations under the AI law; and €7.5 million, or 1.5%, for providing false information to regulators. Special rules are introduced for so-called foundation models, which can generate video, text, images, and program code. Generative AI systems (such as ChatGPT) fall into this group: their developers will be required to disclose what copyrighted material was used to train the model.

The American document, signed by US President Joe Biden in October 2023, resembles the European one in its transparency requirements for developers of AI systems (see Kommersant, October 31, 2023). Companies must provide the government with safety-testing results and other critical information before a product goes to market. To strengthen security, the National Institute of Standards and Technology will develop standards, and the US Department of Commerce will prepare guidelines for labeling AI-generated content, aimed at curbing the spread of AI-generated disinformation. The order also pays special attention to strengthening the protection of personal data.

The Chinese "Temporary Measures," adopted in the summer of 2023, make AI developers responsible for all generated content. They ban content that undermines socialist values or incites the overthrow of the state system. Developers are responsible for protecting users' personal data and intellectual property rights. The rules also require measures to prevent minors from becoming over-reliant on generative AI.

As for Russia, a regulatory framework for AI is only now being prepared. United Russia is drafting a bill intended to define the responsibility of developers and to prevent the use of AI by fraudsters (see Kommersant, April 14, 2023). In December 2023, the government introduced a bill to the State Duma that would oblige AI developers operating under an experimental legal regime to insure their liability for possible harm to life, health, or property caused by the technology (see Kommersant, December 15, 2023). Russia has yet to choose what its regulation of the technology should prioritize: stimulating its development or preventing risks. ISSEK notes that leadership will go to "those countries that can, including at the legislative level, find the optimal balance between supporting AI and limiting it."

Venera Petrova, Vadim Visloguzov
