Dangerous toys: The Pentagon began to improve killer machines based on artificial intelligence

Pentagon officials have sounded the alarm about “unique classes of vulnerabilities to artificial intelligence or autonomous systems” that they hope the new research will address.

According to the Daily Mail, the program, called Guaranteeing AI Robustness against Deception (GARD), has since 2022 been tasked with determining how visual data or other electronic signals fed into an artificial intelligence system can be manipulated through the calculated introduction of noise.

Computer scientists at one of the defense contractors working on GARD have been experimenting with kaleidoscopic patches designed to trick artificial intelligence systems into making false identifications.

“Essentially, you can, by adding noise to an image or a sensor, disrupt the machine learning algorithm,” a senior Pentagon official who led the research explained recently.
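
The article does not spell out the underlying technique, but the "calculated introduction of noise" it describes closely resembles well-known adversarial-example attacks such as the fast gradient sign method (FGSM). Below is a minimal, illustrative sketch of that idea, assuming a generic pretrained PyTorch image classifier; the model, image, and label are stand-ins, not anything used in the GARD program:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical setup: a pretrained classifier and a single input image.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo
true_label = torch.tensor([207])                          # stand-in for the correct class

# Forward pass and loss with respect to the correct label.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: take a small step in the direction that increases the loss.
epsilon = 0.03                                            # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The noise is nearly invisible, but the prediction can change completely.
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Because the perturbation stays within a small budget, the altered image looks unchanged to a person, yet the classifier's output can flip entirely.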

The news, the Daily Mail notes, comes amid fears that the Pentagon is “creating killer robots in the basement,” which is said to have led to stricter artificial intelligence rules for the US military, requiring all systems to be approved before deployment.

“By knowing this algorithm, you can also sometimes create physically executable attacks,” added Matt Turek, deputy director of the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office.

It is technically possible to “trick” an AI algorithm into making critical errors, causing it to misread specially patterned patches or stickers as a real physical object that is not actually there.

For example, a bus full of civilians could be mistakenly identified by AI as a tank if it were tagged with the right “visual noise,” as one national security reporter for the website ClearanceJobs has suggested.

In short, such cheap and lightweight noise-injection tactics could cause vital military AI to mistake enemy fighters for allies during a critical mission, or vice versa.
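
The physical “patch” attacks described above are typically built the same way, except that the noise is confined to a printable sticker and optimized until the classifier reports whatever class the attacker chooses. A rough sketch of that idea, again assuming a hypothetical PyTorch classifier and made-up class indices rather than the actual GARD or contractor experiments:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
scene = torch.rand(1, 3, 224, 224)      # stand-in for a camera frame
target = torch.tensor([847])            # hypothetical class the attacker wants reported

# A small trainable "sticker" that will be optimized and pasted into the frame.
patch = torch.rand(1, 3, 50, 50, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

# Mask marking where the 50x50 patch sits (top-left corner of the 224x224 frame).
mask = torch.zeros_like(scene)
mask[:, :, :50, :50] = 1.0

for step in range(200):
    # Pad the patch up to frame size and overlay it onto the scene.
    padded = F.pad(patch.clamp(0, 1), (0, 174, 0, 174))
    attacked = scene * (1 - mask) + padded * mask

    # Push the classifier's prediction toward the attacker's chosen class.
    loss = F.cross_entropy(model(attacked), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("prediction with patch:", model(attacked).argmax(dim=1).item())
```

Once optimized, such a pattern can in principle be printed and physically placed in a scene, which is why it is described as a cheap, lightweight attack.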

Researchers in the modestly budgeted GARD program have spent $51,000 studying surveillance and signal jamming tactics since 2022, Pentagon audits show.

Their work was published in a 2019-2020 study illustrating how visual noise that may appear merely decorative or inconsequential to the human eye, such as a 1990s Magic Eye poster, can be interpreted by artificial intelligence as a solid object.

Computer scientists at defense contractor MITRE Corporation managed to create visual noise that artificial intelligence mistook for apples on a grocery store shelf, a bag left on the street, and even people.

“Whether it’s physical attacks or noise patterns that are added to artificial intelligence systems,” Turek said Wednesday, “the GARD program has created state-of-the-art defenses against them.”

“Some of those tools and capabilities have been provided to the CDAO [the Pentagon’s Chief Digital and Artificial Intelligence Office],” Turek said.

The Pentagon created the CDAO in 2022; it serves as a hub to facilitate faster adoption of artificial intelligence and related machine learning technologies in the military.

The Department of Defense recently updated its rules on artificial intelligence amid “much confusion” about how it plans to use machines that make autonomous decisions on the battlefield, according to Michael Horowitz, Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities.

Horowitz explained at an event in January this year that “the directive does not prohibit the development of any artificial intelligence systems” but will “clarify what is and is not permitted” and will support a “commitment to responsible behavior” in the development of lethal autonomous systems.

While the Pentagon believes the changes should reassure the public, some said they were “unconvinced” by the efforts, the Daily Mail noted.

Mark Brakel, director of advocacy group Future of Life Institute (FLI), told DailyMail.com in January this year: “These weapons carry a huge risk of inadvertent escalation.”

He explained that AI-powered weapons could misinterpret something as innocuous as a ray of sunlight as a threat and attack foreign powers without cause, even in the absence of deliberately hostile “visual noise.”

Brakel said the result could be devastating because “without real human control, AI weapons are like the Norwegian missile incident, a near nuclear armageddon, on steroids, and they could increase the risk of incidents in hotspots like the Taiwan Strait.”

The US Department of Defense is pushing hard to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack targets without human intervention.
