
Developers warn that human extinction due to AI is a possibility

“I am become Death, the destroyer of worlds,” declared Dr. J. Robert Oppenheimer after the first successful atomic bomb test, recalling a passage from Hindu scripture. He continued, “I suppose we all thought that, in one way or another.”

Now, top AI researchers and developers appear to be expressing a similar viewpoint.

On May 30, the Center for AI Safety published a one-sentence statement comparing the dangers of AI to nuclear conflict, signed by hundreds of AI professionals and engineers, including Sam Altman of OpenAI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

While some consider it worthwhile to draw attention to this issue, other experts argue that doing so merely gives AI developers cover to ignore the real, if far less dramatic, harms already posed by current AI models.

What Does This Threat of Extinction Look Like?

Nobody has a concrete idea of what these grave dangers might actually entail. They do not stem from modern AI models like ChatGPT, but from a hypothetical future system known as artificial general intelligence, or AGI: one thought to be smarter than humans.

Experts repeatedly stress that the risk is a future possibility. In its current state, ChatGPT presents a range of serious but potentially manageable risks, including online fraud, mass propaganda, and misinformation.

The central problem, if AGI were ever achieved, would be how to control something smarter than ourselves.

Computer scientist Paul Graham once tweeted that the ways someone far smarter than you might outsmart you are among the hardest things to predict. That, he said, is why he worries about AI: not just the problem itself, but the meta-problem that nobody can foresee how the problem will play out.

The idea is that an out-of-control AI model could make a lot of science fiction come true, in ways no one yet understands. It is this fear of losing control that underlies the apocalyptic worries about AI.

For many experts, it boils down to preparing for systems that do not yet exist but one day might.

According to Professor John Licato, an expert in artificial intelligence, this technology may prove more consequential than any weapon humanity has ever created, which is why we unquestionably need to prepare. That, he says, is one of the few things he can state without hesitation.

There’s Reality, and Then There’s Hype

While many experts agree that approaching AI with a keen awareness of its hazards is sensible, several have disputed the statement’s framing.

Dr. Noah Giansiracusa, a machine learning researcher, tweeted that the statement neatly gives cover to signatories who want to keep building, deploying, and profiting from AI in harmful or unsafe ways, since they can claim they are doing it to steer AI clear of the fiction of existential peril.

The phrasing of “let’s mitigate risks” positions AI as an unavoidable reality that we must learn to live with. It’s not, he added. If you truly believe it has the potential to wipe out humanity (I don’t, and I believe that worry is unfounded), then don’t f*cking create it!

According to some experts, such as Dr. Serafim Batzoglou, such assertions amount to “fear-mongering” that might provide existing AI leaders with a massive regulatory moat.

Fear-mongering about AGI is overhyped, poisonous, and likely to lead to regulatory capture by incumbents, slowing or impeding constructive applications of AI across society, including biological science and medicine, he tweeted.

Others, however, such as Demis Hassabis, co-founder and CEO of Google DeepMind, believe the technology is significant and should be approached with caution.

As with any disruptive technology, we should adopt the precautionary principle and design and deploy it with extreme caution, he tweeted.

To Regulate, Or Not To Regulate

The declaration is the latest development in the debate over AI regulation. Just a few weeks earlier, OpenAI CEO Sam Altman testified at a congressional hearing on AI and said that government regulation would be essential to reducing the dangers posed by more powerful models.

Microsoft later built on OpenAI’s proposal for industry regulation.

The EU, meanwhile, is working on proposed legislation of its own, the AI Act; Altman threatened to stop operating in Europe if it became law as written, then walked that back somewhat, a stance that sits awkwardly with his repeated calls for regulation.

However, Professor Gary Marcus, an AI expert, noted that while preventing AI hazards should be a key concern, literal extinction is just one risk, and one that is still poorly understood; many other risks threaten both our safety and our democracy.

A balanced approach is needed to deal with this wide range of dangers, both short-term and long-term.
