
UK’s plans for AI safety draw criticism

Rishi Sunak, the prime minister of the United Kingdom, announced on Wednesday that his country will host the first major global summit on AI safety this autumn. The summit aims to bring together key countries, leading tech companies, and researchers to assess and monitor the risks posed by artificial intelligence.

Concerns about proper government oversight have grown over the past year amid the machine learning industry's rapid technological advancement. Those concerns intensified recently when some AI specialists compared the potential dangers of AI to those of pandemics or nuclear weapons. "AI" has also become a popular buzzword in business. Against that backdrop, the UK government wants to step in and take a leadership role in the field.

The UK government stated in a press release that advances in AI continue to improve our lives, from enabling paralysed people to walk to discovering antibiotics that kill superbugs. But AI is developing extremely quickly, and that pace of change demands agile leadership. The government says that is why the UK is acting now, to ensure the technology is developed and deployed safely and responsibly.

The press release did not specify the summit's date, format, or location.

In the run-up to the summit, Sunak has recently spoken with a range of business executives, including the CEOs of AI labs such as OpenAI, DeepMind, and Anthropic. According to the UK government, the event is expected to build on recent conversations about AI safety held at the G7, the OECD, and the Global Partnership on AI. In addition, a UN Security Council briefing on the effects of AI on global peace and security is scheduled for July.

In the announcement, Sunak said that no single nation can do this alone; a global effort will be required. He added that the UK will lead the way alongside its allies, citing the country's extensive experience and its commitment to an open, democratic international order.

Who will be invited?

The UK government did not formally disclose a list of invitees in its summit announcement, but the press release enthusiastically mentions machine learning organizations with UK operations such as OpenAI, DeepMind, Anthropic, Palantir, Microsoft, and Faculty. It also includes statements from executives at these firms, including Alexander Karp, co-founder and CEO of Palantir.

Karp said the company is pleased to continue its partnership with the UK, where roughly a quarter of its total staff is based. London, he added, is the obvious choice as the hub of Palantir's European efforts to build the most effective and ethical AI software, because the city attracts the world's top software engineering talent.

Some critics have already taken issue with this list of probable summit participants. Rachel Coldicutt, director of the London-based equity and social justice research consultancy Careful Industries, noted that the press release for the UK AI Safety Summit features DeepMind, Anthropic, Palantir, Microsoft, and Faculty, but does not include a single voice from civil society or academia, or anyone with firsthand knowledge of algorithmic harms.

Of the companies mentioned, Palantir has drawn the most attention because of its close ties to the military and defense industries, which have sparked concerns about potential misuse of AI. The use of the company's technology in law enforcement and surveillance has also raised questions about privacy and civil liberties.

Hugging Face's Dr. Sasha Luccioni raised similar concerns on Twitter, sarcastically remarking that the "first major global summit on AI safety," featuring corporations like Palantir (military/defense) and DeepMind (X-risk/AGI), looks very promising for tackling the real and present harms of AI, such as its use in warfare. Her sarcasm underscores the apparent gap between the summit's stated safety goals and the activities of some of the companies highlighted in the announcement.

"AI safety" has attracted considerable attention this year after prominent figures in the tech industry issued multiple letters warning about the dangers of AI. Whether machine learning systems pose an existential threat to humanity remains the subject of significant debate. At the same time, AI ethics advocates like Luccioni believe that dangerous AI applications already in use are not receiving enough attention.

While the UK government's press release leans on familiar talking points about AI risk, these criticisms from AI ethics advocates point to a growing demand for broader and more diverse perspectives at events intended to shape government regulation of AI. The international AI community will be watching the UK-hosted summit closely to see whether it can produce an inclusive and productive discussion of AI's dangers, one that goes beyond the usual photo opportunities with the new titans of industry.
