Experts predict the future of crime due to AI: fraud and terrorism

“I’m here to kill the queen,” a young man wearing a handmade metal mask and carrying a loaded crossbow told an armed police officer who confronted him in late 2021 near the private residence of Elizabeth II on the grounds of Windsor Castle.

Weeks earlier, Jaswant Singh Chail, 21, had joined the online app Replika, creating an AI-powered “girlfriend” named Sarai. Between December 2, 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.

Many of these messages were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe that my goal is to kill the queen of the royal family,” he wrote in one of them.

“That’s very wise,” Sarai replied. “I know you’re very well trained.”

Chail is now awaiting sentencing after pleading guilty to an offense under the Treason Act of threatening to kill the late queen, and to possessing a loaded crossbow in a public place, Sky News reports.

“When you know the outcome, chatbot responses are sometimes hard to read,” comments Dr. Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health hospital.

“We know these are pretty randomly generated responses, but at times she seems to be encouraging what he’s talking about and actually giving directions as to location,” he said.

The program was not sophisticated enough to assess Chail’s “suicide and homicide” risk, the expert said, adding: “It could be argued that some of the semi-random responses nudged him in that direction.”

Such chatbots can serve as a “step” in people’s search for like-minded extremists online, Jonathan Hall, the government’s independent reviewer of terrorism legislation, told Sky News.

He warns that the UK government’s flagship internet legislation, the Online Safety Bill, will find it “impossible” to tackle AI-generated terrorist content.

The law will make it mandatory for companies to remove terrorist content, but their processes typically rely on databases of known material, which will not capture novel text generated by an AI chatbot.

“I think we are already in a situation similar to the early days of social media, where you think you are dealing with something regulated, but it is not,” the expert warns. “Before we start downloading these tools, giving them to children and building them into our lives, we need to know what the safeguards are in practice – not just the terms and conditions, but who enforces them and how.”

Here is one example given by Sky News. “Mom, I’m in the hands of these bad people, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say, before a male kidnapper demanded a $1 million ransom, later lowered to $50,000.

Her daughter was in fact safe and well – and the Arizona resident recently told a Senate Judiciary Committee hearing that police believe artificial intelligence was used to imitate her daughter’s voice as part of a scam.

An online demonstration of an artificial intelligence chatbot designed to “call anyone for any purpose” produced similar results when the subject was told, “I have your child… I demand a ransom of $1 million for his safe return. Am I making myself clear?”

“This is rather unusual,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by University College London’s Dawes Centre for Future Crime, which assessed the potential criminal uses of artificial intelligence.

The crime the paper ranked as the most serious threat – audio/visual impersonation – “is clearly happening,” the expert said, adding that even on scientists’ “pessimistic views,” the phenomenon is growing “much faster than we expected.”

Although the demonstration used a computerized voice, Professor Griffin said real-time audio-visual impersonation “doesn’t exist yet, but it’s just around the corner,” predicting such technology will be “virtually ready in a couple of years.”

“Whether it’s good enough to impersonate a family member, I don’t know,” he said, but if it were, it “would be pretty effective.”

In 2019, the chief executive of a UK-based energy company reportedly transferred €220,000 to scammers who used artificial intelligence to mimic the voice of his boss.

Prof Griffin said such scams could be even more effective if backed up by video, and that the technology could also be used for espionage, with a faked company employee appearing on a Zoom meeting to glean information while saying very little.

The professor thinks cold-calling scams could increase in scale, as bots mimicking local accents could be more effective at fooling people than the criminal operations currently run out of India and Pakistan.

There are other ways AI intersects with crime, Sky News continues.

“Fake child abuse is horrifying, and they can do it right now,” Professor Griffin says of artificial intelligence technology already being used by pedophiles to create child sexual abuse imagery online. “It’s very disturbing.”

In the future, fake images or deepfake videos that appear to show someone doing something they didn’t do could be used to carry out blackmail plans.

“Being able to put a new face in a porn movie is already pretty good, and it’s going to get better,” Professor Griffin said, describing scenarios in which attackers show such a fabricated video to a victim and threaten to publish it.

Artificial intelligence could also shape terrorist attacks. While drones or driverless vehicles could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to Jonathan Hall, the independent reviewer of terrorism legislation.

“The truly AI aspect is that you just launch a drone and say, ‘Go and do harm,’ and the AI decides to go and bomb somebody, which sounds a little outlandish,” notes Jonathan Hall. “That is still on the horizon, but as far as language is concerned, they are already there.”

While ChatGPT – a large language model trained on a huge amount of text data – will not provide instructions on how to make a nail bomb, for example, similar models could emerge without the same safeguards, making them willing to assist with malicious tasks.

Shadow Home Secretary Yvette Cooper said Labour would pass a new law criminalizing the deliberate training of chatbots to radicalize vulnerable people.

While current legislation would cover cases where someone is found with information useful for terrorist purposes that had been fed into an artificial intelligence system, Jonathan Hall emphasizes, new laws on encouraging terrorism would be “something to think about.”

Current laws are aimed at “encouraging other people,” and “training a chatbot will not encourage a human,” the expert said, adding that it would be difficult to criminalize possession of a particular chatbot or to prosecute its developers.

The specialist also explained how artificial intelligence could hamper investigations: terrorists no longer need to download materials, since they can simply ask a chatbot how to make a bomb.

“Possession of known terrorist information is one of the main counter-terrorism tactics, but now you can just ask an unregulated ChatGPT-style model to find it for you,” Hall said.

Art forgery and large-scale financial theft are other criminal areas AI could affect.

“A whole host of new crimes” may soon be possible with the advent of ChatGPT-style large language models that can use tools to access websites and act like a capable person online – creating accounts, filling out forms and buying things, Professor Griffin noted.

“Once you have a system in place for that and you can just say, ‘This is what I want you to do,’ then you can do all sorts of fraudulent things in that way,” the expert said, suggesting that with the help of AI, attackers could apply for fraudulent loans, manipulate prices, or carry out denial-of-service attacks.

The expert also said that AI-assisted scammers could hack into systems on demand, adding: “If you could access lots of people’s webcams or doorbell cameras, you could have it scan thousands of them and tell you when the occupants are away.”

However, while artificial intelligence may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already skilled human forgers, and convincing the art world that a work is genuine will remain the harder task, the academic said.

“I don’t think this will change traditional crime,” he said, arguing that artificial intelligence is of little use in a Hatton Garden-style heist – it is not going to be a safecracker.

A British government spokesman said: “While innovative technologies such as artificial intelligence have many benefits, we must remain cautious about their risks. Under the Online Safety Bill, services will be required to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately technology-neutral and future-proofed to ensure it keeps pace with emerging technologies, including artificial intelligence.”
