The European Parliament adopted the first law regulating the use of artificial intelligence

The world’s first law regulating the use of artificial intelligence has been agreed upon by the European Parliament and representatives of the 27 EU countries. An MK correspondent found out how Russian AI developers received it.

Clear rules for the use of AI, which will “ensure security and respect for fundamental rights while stimulating innovation,” were agreed in negotiations with EU member states in December 2023. Last Friday, the AI law was approved by members of the European Parliament.

Here is the full list of banned AI applications:

1. Biometric categorization systems based on sensitive characteristics;

2. Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

3. Emotion recognition in the workplace and in schools;

4. Social scoring;

5. Predictive policing, that is, profiling a person as a potential criminal;

6. Artificial intelligence that manipulates human behavior or exploits people’s vulnerabilities.

However, some exceptions are made for law enforcement agencies:

For example, they will be allowed to carry out biometric identification “in narrowly defined situations.” Remote scanning may be deployed “in real time” only for a limited period and within a geographic scope determined by a court. Such uses could include, for example, a targeted search for a missing person or the prevention of a terrorist attack.

What counts as a high-risk use of AI:

High-risk uses include critical infrastructure (such as autonomous vehicles), education and training, employment, healthcare, banking, border management, justice, and the electoral process. Such systems must assess and mitigate risks, maintain usage logs, be transparent and accurate, and provide for human oversight. Citizens will have the right to file complaints against AI systems and to receive explanations of decisions made by high-risk AI systems that affect their rights.

The creators of generative AI chatbots such as ChatGPT will have to disclose who their “teachers” were, in other words, whose texts or images the models learned from. In addition, artificial or manipulated images, audio, or video content (“deepfakes”) must be clearly marked as such. This means special marks, “watermarks,” that help distinguish a generated picture or video from a real one.
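In practice, such a “watermark” can be as simple as a machine-readable provenance label attached to generated content. A minimal sketch of the idea for text (not any official EU scheme; the marker sequence and function names here are purely illustrative) uses invisible zero-width Unicode characters:

```python
# Illustrative provenance marker: zero-width characters are invisible
# when the text is rendered, but detectable programmatically.
ZW_MARK = "\u200b\u200c\u200b"  # arbitrary zero-width sequence used as a label

def label_generated(text: str) -> str:
    """Append the invisible provenance marker to AI-generated text."""
    return text + ZW_MARK

def is_labeled(text: str) -> bool:
    """Detect whether the provenance marker is present."""
    return text.endswith(ZW_MARK)

sample = label_generated("This paragraph was produced by a model.")
print(is_labeled(sample))                       # True: marker detected
print(is_labeled("Plain human-written text."))  # False
```

Real-world schemes for images and video embed the signal far more robustly (in pixel data or signed metadata), but the principle is the same: the label survives copying and can be checked automatically.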

“Thanks to Parliament, unacceptable artificial intelligence practices will be banned in Europe, and the rights of workers and citizens will be protected,” said Brando Benifei, an Italian member of the Internal Market Committee. According to him, an AI Office will soon be created to help companies begin complying with the rules before they come into force in early 2025.

Comment by Roman Dushkin, chief architect of artificial intelligence systems at NRNU MEPhI:

– In its usual style, the European bureaucracy wants to regulate everything and everyone. This law contains significant prohibitions on the use of the technology. Of course, every society seeks a balance between freedom and security, but if a person is dying on the street, artificial intelligence systems will not be able to recognize that he needs help here and now. In my opinion, these are quite controversial decisions. European officials have chosen the path of putting pressure on artificial intelligence developers.

In Russia we have two forms of such regulation. On the one hand, there is soft regulation in the form of a code of ethics in the field of artificial intelligence, which AI developers sign, voluntarily imposing certain restrictions on themselves. On the other hand, we have quite serious technical regulation: state and national standards that govern the use of artificial intelligence technologies.

It is no secret that many technical systems, including those controlled by artificial intelligence, pose increased danger. They need to be certified so that society can trust them, and this certification should guarantee the quality of the systems. The two forms of regulation that exist in Russia are aimed at ensuring the trust of society and the state in AI, not at constraining it, as is being done in Europe. I am sure the artificial intelligence industry in Europe will develop with great difficulty. I hope this does not happen to us.
