
Supreme Court’s ruling could impact AI chatbots

The Supreme Court is expected to rule in the coming months in a case involving the liability shield for internet and social media companies, a decision that could affect artificial intelligence (AI) chatbots such as ChatGPT.

The question in the case is whether tech platforms, such as Google-owned YouTube, forfeit the liability shield that federal law provides for user-generated, third-party content when they employ algorithms to recommend that content to users or send notifications about it.

Generative AI chatbots use similar techniques to answer user queries, either directing users to existing content or gathering information from many sources and presenting it as a concise summary. Depending on how the Supreme Court rules, this could expose the companies that build such chatbots to liability.
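Loosely, the distinction at issue can be sketched in code. The following is a minimal, hypothetical illustration only; the function names, data, and stub logic are invented for this example and do not reflect any real platform’s implementation. It contrasts a recommendation-style function, which merely points users at existing third-party content, with a chatbot-style function, which composes new text from many sources.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """A piece of third-party content hosted on a platform."""
    url: str
    text: str

def recommend(query: str, corpus: list[Document]) -> list[str]:
    # Recommendation-style behavior: filter existing third-party content
    # by a user-supplied criterion and return pointers (URLs) to it.
    return [doc.url for doc in corpus if query.lower() in doc.text.lower()]

def summarize(query: str, corpus: list[Document]) -> str:
    # Chatbot-style behavior: assemble new text drawn from many sources.
    # A real chatbot would invoke a language model here; this stub just
    # stitches snippets together to illustrate the distinction.
    snippets = [doc.text[:60] for doc in corpus if query.lower() in doc.text.lower()]
    return "Summary: " + " ... ".join(snippets)

corpus = [
    Document("https://example.com/a", "An overview of Section 230 and platform liability."),
    Document("https://example.com/b", "How recommendation algorithms surface content."),
]

print(recommend("section 230", corpus))  # pointers to third-party content
print(summarize("section 230", corpus))  # newly composed text
```

The first behavior resembles the algorithmic sorting that Section 230 has traditionally been read to cover; the second produces original output, which is where the liability question becomes acute.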

AI specialist Cameron Kerry, a visiting fellow at the Brookings Institution think tank, told Reuters that the core dispute is whether the way material is organized by recommendation engines on the internet shapes the content significantly enough to subject platforms to liability, and that chatbots raise the same kinds of issues.

Although the case at hand does not directly involve artificial intelligence, the Supreme Court’s ruling could revise or overturn Section 230 of the Communications Decency Act, a landmark 1996 statute that helped build today’s internet.

Under Section 230, tech companies such as Facebook, Google, and Twitter are generally shielded from liability for content that users post on their platforms, though they remain obligated to remove material that is illegal under federal law, such as content that violates copyright or sex trafficking laws.

During oral arguments, Justice Neil Gorsuch observed that AI programs are already capable of producing new content, generating “poetry” and “polemics” that go “beyond picking, choosing, analyzing, or digesting content,” and that such output would not be covered by Section 230.

Gorsuch was referring to the “neutral tools test,” devised by the Ninth Circuit to determine whether content is protected under Section 230; it gauges whether a tool, such as a search engine, merely filters information according to user-generated criteria.

In response to a question, one of the attorneys said the court could find that the Ninth Circuit’s neutral tools test was wrongly decided, because even nominally neutral tools, such as algorithms, can now produce content using artificial intelligence. That, he argued, is a possibility the Ninth Circuit did not anticipate.

AI chatbots often paraphrase already published information when responding to users, output that would likely be protected under Section 230. However, there have been instances in which chatbots produced responses that appear to have been created by the bot itself. In those cases, Section 230 protections would not apply because the responses are original content, and depending on what kind of content a chatbot produces, its developer could be held liable in court.

Hany Farid, a professor at the University of California, Berkeley, argued that AI developers should be held accountable for flaws in the models they create, test, and deploy. Companies produce safer products when they are held accountable in civil litigation for harms those products cause, Farid added, and less safe ones when they are not.

The case before the court, Gonzalez v. Google, was argued in February. It was brought against Google by the family of Nohemi Gonzalez, who was killed at the age of 23 in an ISIS terrorist attack in Paris more than seven years ago.

According to the lawsuit, YouTube, a Google subsidiary, used an algorithm that recommended violent videos based on viewers’ interests and alerted users to such videos, which the suit says helped ISIS recruit new members and incite violence. Gonzalez’s family argues that Section 230 of the Communications Decency Act should not shield Google from accountability for disseminating that content.

The Supreme Court is expected to decide the case before the end of its current term, which typically concludes in late June or early July.
