How different countries approach the regulation of AI chatbots

With the rapid proliferation of ChatGPT and other artificial intelligence (AI) chatbots, questions are multiplying about the potential for fraud, copyright infringement, other misconduct and even crime. Countries are approaching the regulation of this sphere in different ways. In the US, the EU, China and some other countries, rules for generative AI are still being drafted. Italy has banned ChatGPT outright, while India, by contrast, does not intend to regulate AI systems at all for now.

ChatGPT, the advanced artificial intelligence chatbot from OpenAI that has been dubbed the “Google killer,” was launched last November. Six months were enough for the world to start something akin to an “arms race” at the height of the Cold War. More and more companies are announcing that they are integrating chatbots into their products, and news of companies launching, or beginning to develop, their own chatbots based on generative AI (for example, Bard from Google itself, or versions from the Chinese companies Baidu and Alibaba) has become routine, even though ChatGPT’s rivals are so far inferior to it in many respects.

At the same time, the technology raises many questions concerning copyright compliance and the dangers of plagiarism and fraud. The race also has committed opponents.

Recently, the non-profit Future of Life Institute published an open letter calling for a pause of “at least six months” in the deployment of new advanced AI systems of this kind. According to the letter’s signatories, who include Apple co-founder Steve Wozniak and Tesla chief Elon Musk, before rushing to deploy such systems it is necessary to develop security protocols for them and mechanisms to control them.

These concerns are shared by the authorities of many countries, which are developing rules to govern the operation of chatbots.

The socialist chatbot

This week, the Cyberspace Administration of China presented its draft rules for chatbots. While noting China’s support for innovation and the popularization of technology, the rules insist that chatbot-generated content comply with “core socialist values” as well as with data and personal information protection laws. Such content must not contradict the norms of public morality or “contain calls for undermining state power, overthrowing the socialist system, inciting the division of the country or undermining national unity.”

Chinese chatbots will have to undergo security reviews by government agencies, and the content they generate must be labeled. Chatbots will also need to verify their users. In addition, their developers will bear full responsibility for the content their AI creates and for the handling of user data. Among other things, such content must be factual and must not discriminate against anyone on the basis of race, beliefs and so on.

Many experts believe that with the adoption of these rules, strict Chinese censorship will spread to chatbots. “There is no need to be under any illusions. The party will use Generative AI Principles for the same purposes of censorship, surveillance and information manipulation that it has tried to justify with other laws and regulations,” said Michael Caster, head of the Asia program for the human rights organization Article 19.

In his opinion, these rules will be used to prevent Chinese users from accessing any prohibited information, such as how to use a VPN.

According to 86Research analyst Charlie Chai, these rules are likely to encourage the spread of local chatbots and prevent their foreign counterparts from entering the Chinese market, as Chinese players are more likely to develop such systems in accordance with the regulations in force in the country.

The capitalist chatbot

The US government is also considering developing regulations for the latest artificial intelligence (AI) technologies such as ChatGPT, The Wall Street Journal reported this week. According to the newspaper, the US Department of Commerce has begun consulting experts about the potential risks of such tools, their certification and related matters. It wants to understand whether there are regulatory measures that can ensure that the AI systems in use are lawful, ethical, effective, safe and generally trustworthy.

After the consultations and the collection of comments, which will last 60 days, the department will submit its findings to the administration of US President Joseph Biden, which may then initiate the drafting of a relevant bill.

At the end of March, the UK Department for Science, Innovation and Technology published its proposals for regulating such AI systems. The department noted that “AI regulation is aimed at building public confidence in the latest technologies.” Over the coming year, UK regulators plan to produce practical guidance for developing legislation in this area.

In drawing up the rules, the British authorities intend to be guided by five principles: safety; transparency and accessibility; fairness; accountability and responsibility; and the ability to contest decisions and correct errors.

Among other things, this means that developers of AI systems should, where necessary, clearly explain how their systems work, what risks they may pose and how those risks are addressed, and, in the event of violations, be held accountable for the consequences.

The EU is also developing its own rules in this area; European legislators are known for very thorough and rather strict regulation of technology. The draft European AI Act provides for the regulation of all types of AI and all sectors in which it can be applied, except the military. Under the act, AI systems are to be classified by level of risk, from low to unacceptable, with different requirements applying to different risk levels, types of AI and applications. In addition, other European laws, such as the EU’s data protection rules, will apply in this area.

A law on AI regulation is also being developed in Russia.

Chatbots shackled and liberated

While most countries are only drafting laws in this area, Italy has taken a more radical step: in early April it temporarily banned ChatGPT. The Italian data protection authority justified the decision by saying the chatbot violates personal data law, above all because of a malfunction in ChatGPT that led to the leakage of users’ personal data and conversation histories. OpenAI must now tell the Italian authorities how it intends to address data privacy.

According to some experts, Italy’s decision could set an example for tighter regulation of ChatGPT in the EU as a whole. There is reason for this: chatbots are developing faster than the rules meant to govern them.

In India, the authorities have already said that, at least for now, they do not intend to regulate chatbots. In early April, India’s Ministry of Electronics and Information Technology announced that it was “not considering enacting a law or regulation to regulate the growth of AI in the country.”

According to the ministry, AI is a driver of the digital economy’s development, although there are certain ethical concerns about the bias and opacity of such systems. This does not mean the authorities will forgo oversight entirely, but the Indian norms are expected to be advisory rather than binding.

“The challenges posed by generative AI, both in terms of possible abuse and in terms of commercial use, are in many ways relatively recent, and it is not yet clear which policy will be best in relation to them,” says Alex Engler, a researcher at the Brookings Institution who focuses on governance issues related to technological innovation. In his view, rules in this area could include information-sharing obligations to reduce the risks of commercialization, as well as risk management requirements to mitigate the consequences of possible abuse. “None of these measures are a panacea, but they are reasonable requirements for such companies that can improve their social performance,” Mr. Engler said.

Yana Rozhdestvenskaya
