Are Americans Ready for AI Regulation?

Some of the most convincing evidence yet demonstrating the public’s desire to regulate AI comes from a recent poll of more than 1,200 registered voters.

In a recent study, 54% of US citizens who are registered to vote agreed that Congress should “take swift action to regulate AI” to advance privacy and safety and guarantee the technology brings “the maximum benefit to society.” Republican and Democratic support for limiting AI was almost identical, a rare show of cooperation that suggests a growing consensus over the quickly developing technology.

Only 20% of voters thought that tech companies should regulate themselves, while 41% preferred to see that regulation come from government action. Voters surveyed also didn’t appear to agree with tech leaders who claim new AI regulations could harm the US economy. Only 15% of those polled claimed that restricting AI would stifle innovation.

“A majority of Americans do not trust Big Tech to prioritize safety and regulate artificial intelligence, and by a two-to-one margin want Congress to act, which is deeply telling given how quickly the new technology is advancing and how aware the public is of it,” Kyle Morris, Deputy Executive Director of the Tech Oversight Project, told Gizmodo.

The survey arrives at what may prove to be a turning point in government AI policy. The Biden Administration met with the CEOs of four top AI companies a few hours before the poll’s publication to discuss AI threats. The administration also disclosed that seven new National AI Research Institutes would be established with funding totaling $140 million from the National Science Foundation.

Recent pushback against AI

Even without polls, there are clear signs that the national debate about AI has shifted from mild amusement and excitement over AI generators and chatbots to serious concern about their downsides. But depending on whom you ask, the precise nature of those harms varies greatly. More than 500 corporate executives and tech professionals signed an open letter last month urging AI labs to immediately halt work on any new large language models more powerful than OpenAI’s GPT-4, citing worries that they could pose “profound risks to society and humanity.” The signatories, who included Elon Musk, co-founder of OpenAI, and Steve Wozniak, co-founder of Apple, said they would accept a government-mandated freeze on the technology if businesses refused to cooperate.

Other top experts in the field, including Emily M. Bender, a professor of linguistics at the University of Washington, and Sarah Myers West, managing director of the AI Now Institute, agree that regulation of AI is needed but object to the growing tendency to attribute human-like traits to machines that are essentially engaged in highly advanced word association. AI systems aren’t sentient or human, the researchers previously told Gizmodo. They worry that the technology’s inclination to invent information and present it as fact will produce a flood of false information, making it increasingly hard to distinguish what is true from what is not.

According to them, the adverse effects could be considerably more severe for marginalized groups because of biases the technology inherits from discriminatory training data. Meanwhile, conservatives concerned about “woke” biases in chatbot results praised Musk for developing his own politically incorrect “BasedAI.”

In the absence of state involvement, West told Gizmodo, “we’re facing a world where the trajectory for AI will be unaccountable to the public and determined by the small number of companies that have the resources to develop these tools and experiment with them in the wild.”

A new wave of AI bills is on the way

Congress, a body not known for keeping up with new technology, is scrambling to keep pace with AI tech policy. A measure requesting the creation of an “AI Task Force” to explore potential civil liberties concerns raised by AI and provide recommendations was submitted last week by Colorado Sen. Michael Bennet. A few days before that, California Representative Ted Lieu and Massachusetts Senator Ed Markey filed a bill of their own to stop AI from controlling nuclear launch systems, which they warned could lead to a Hollywood-style nuclear apocalypse. In an effort to strengthen the accountability and transparency of the technology, Senate Majority Leader Chuck Schumer also released his own AI framework.

The Age of AI is here and here to stay, according to Schumer. “Now is the time to develop, harness, and advance it to benefit our nation for generations,” the senator said.

By meeting with executives from four top AI startups this week to address AI safety, the Biden administration demonstrated its own interest in the field.

According to senators quoted in a recent Politico piece, a significant portion of that abrupt change is the result of a strong public reaction to ChatGPT and other popular new chatbots. The apps’ widespread use, and the widespread confusion over their capacity to generate convincing, and occasionally disturbing, responses, had reportedly struck a chord in a way that few other technological challenges have.

“AI is one of those things that kind of moved along at ten miles per hour and suddenly is 100, approaching 500 miles per hour,” said Frank Lucas, chair of the House Science Committee. “It has everyone’s attention, and they are all attempting to concentrate.”
