Microsoft employee warns of possible risks of misuse of neural networks

Yesterday, March 6, Microsoft employee Shane Jones sent a letter to the US Federal Trade Commission and to the company’s board of directors. In it, he warned of the broad potential for abuse of image-generating neural networks. Mr. Jones had spent his free time testing the company’s AI-powered image generator, Copilot Designer. He has worked at Microsoft for six years but is not directly involved in the image generator’s development.

As Mr. Jones said in an interview with CNBC, he concluded that Copilot Designer creates images that violate Microsoft’s policies: sexualized images and depictions of violence, including images of teenagers with guns. In his opinion, such a neural network also makes copyright infringement easy by placing well-known characters in arbitrary contexts, including alongside political slogans or scenes of violence. Mr. Jones was able to generate images of this kind featuring cartoon characters, for example Elsa from “Frozen” combined with pro-Palestinian or, conversely, pro-Israeli slogans.

According to Mr. Jones, he concluded that the neural network was dangerous and informed the company’s management, but they took no action. Microsoft acknowledged the problem but declined to suspend public access to Copilot Designer while changes were made.

Read about declining interest in AI in the socio-political sphere in the Kommersant article “Artificial intelligence did not come out”.

Yana Rozhdestvenskaya
