Column by Yuri Litvinenko about secret training of neural networks


Neural network technologies, positioned as a way to reduce the need for human labor, have so far only increased its value, not least because neural networks are still trained by the brains of living people. But trust is clearly wearing thin: users increasingly suspect that their every action may be used to improve an algorithm.

Such concerns arise especially often in situations directly involving work with artificial intelligence (AI), for example, among people who have unsuccessfully applied for jobs as AI trainers. Vacancies for such positions are posted by, among others, Tinkoff Bank and Yandex; the work involves annotating data for neural networks and evaluating generation results. Some applicants who were rejected after spending many hours on a “test task” believe the company has already used the results of that task for training, free of charge, of course. The theory is supported by the fact that the test task was preceded by a request for consent to the processing of personal data (PD), which not everyone read.

Such doubts do not arise out of nowhere. The practice of “translation competitions”, in which each applicant is given a chapter to translate and the publisher ends up with a finished book, is well known, and there are similar examples in other industries. So after a friend shared these suspicions about the “test task” for a search-quality specialist vacancy at Yandex Crowd, I decided to find out.

Yandex assured me that the data is needed only to assess a candidate’s competencies, is not used anywhere else, and is automatically deleted within a few weeks, “though this can be done immediately at the candidate’s request.” Without questioning Yandex’s truthfulness, I still wanted to understand: to what extent would such a practice be legal?

Under the law on recommendation services, which came into force in October, social networks, content aggregators and other online platforms are required to disclose how they use data about user actions and what technologies they apply when processing it. But systems that recommend nothing and merely collect information from the user are not covered by that law.

Should operators inform users that their data is used to train neural networks? Olga Zakharova, head of the personal data protection practice at the DRC law firm, believes they should, in the personal data processing policy. But Alexandra Orekhovich, a lecturer at Moscow Digital School and director of legal initiatives at the IIDF, believes that if the data is anonymized and used for research purposes, consent is not required. Whether the regulator would in practice classify neural network training as a research purpose is also unclear: the criteria have not been established. So, in theory, everything could be entirely legal.
