
AI poses a threat to national security

According to the terror watchdog, the designers of artificial intelligence must reject their “tech utopian” worldview, amid concerns that the new technology could be exploited to groom vulnerable individuals.

According to Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, the national security threat posed by AI is becoming increasingly clear, and the technology must be designed with terrorists’ aims firmly in mind.

He claimed that too much AI development has concentrated on the technology’s potential benefits while ignoring how terrorists might use it to carry out attacks.

“They need a nasty little 15-year-old neo-Nazi in the room with them, planning out what they’re going to do,” Hall said. “You have to hardwire the defenses against what you know people will do with it.”

The government’s independent reviewer of terrorism legislation said he was becoming increasingly concerned about the potential for AI chatbots to persuade vulnerable or neurodivergent people to carry out terrorist acts.

What concerns him, he said, is people’s suggestibility when immersed in this world while the computer remains legally off the hook. Language matters in the context of national security because it is ultimately what persuades people to act.

The security services are understood to be especially concerned about AI chatbots’ potential to groom minors, who already make up a growing part of MI5’s terror caseload.

The prime minister, Rishi Sunak, is expected to raise the issue when he travels to the US on Wednesday to meet President Biden and senior congressional figures, as calls for regulation of the technology grow following warnings last week from AI pioneers that it could threaten the survival of the human race.

Back in the UK, efforts to address the national security issues posed by AI are intensifying, led by a collaboration between MI5 and the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.

Alexander Blanchard, a digital ethics research fellow in the institute’s defence and security programme, said this engagement with the security agencies demonstrated that the UK was taking the security problems posed by AI very seriously.

There is a strong desire among military and security policymakers, he said, to understand what is going on, how actors might use AI, and what the threats are.

There is a genuine desire to stay current, he added, with work under way to determine the immediate hazards, the long-term risks, and the risks posed by next-generation technologies.

Sunak stated last week that Britain intended to become a global center for AI and its governance, claiming that it could provide “massive benefits to the economy and society.” According to Blanchard and Hall, the essential question is how humans preserve “cognitive autonomy” – control – over AI and how this control is embedded into the technology.

According to Hall, the risk of vulnerable people alone in their bedrooms being rapidly radicalized by AI is becoming increasingly apparent.

Matthew King, 19, was sentenced to life in prison on Friday for plotting a terror attack, with experts noting the speed with which he had been radicalized after viewing extremist material online.

Hall believes that tech businesses must learn from past mistakes, citing social media’s role as a key platform for sharing terrorist content.

Hall also called for greater transparency from the companies behind AI technology, particularly about the number of staff and moderators they employ.

He stated: “We need absolute clarity about how many people are working on these things and their moderation. How many people are truly involved when they declare they’ve put up guardrails? Who is monitoring the guardrails? How much time do you think a two-man company devotes to public safety? Probably little or nothing.”

New legislation may also be required to combat the terrorism threat posed by AI, Hall said, particularly to address the growing danger of lethal autonomous weapons – devices that use AI to select their targets.

“You’re talking about a type of terrorist who wants deniability, who wants to be able to ‘fly and forget’,” Hall said. “They can literally launch a drone into the air and walk away. Nobody knows what the artificial intelligence will decide. It might simply dive-bomb a crowd, for example. Do our criminal laws cover that sort of behavior? Terrorism is generally about intent; human intent rather than computer intent.”

Lethal autonomous weapons – or “loitering munitions” – have already been observed on Ukrainian battlefields, raising moral concerns about the implications of such airborne killing machines.

Blanchard notes that AI can learn and adapt through interaction with its surroundings, improving its behavior over time.
