
U.S. Officials Concerned About AI’s Effects on Cybersecurity

Senior U.S. cybersecurity officials said that because there are still many unknowns around artificial intelligence, businesses that rush to deploy new models to keep up with trends risk exposing themselves to those unknowns.

Eric Goldstein, executive assistant director for cybersecurity at the U.S. Cybersecurity and Infrastructure Security Agency, said in an interview at the RSA Conference on Wednesday that AI carries a wide range of risks alongside its benefits.

Advances in machine learning and other AI disciplines allow analysts to evaluate enormous amounts of data quickly and let algorithms alert defenders to a breach faster than a human could. However, he added, there is also potential for harm.

“Consider how inherently unhuman AI is when it performs in games like Go and chess. How does an AI red team or adversarial AI appear when it attempts to carry out an intrusion in a manner the defenders have never considered before?” Mr. Goldstein said, referring to AI models that might be used to test a company’s cyber defenses constructively or to compromise systems maliciously.

The agency is studying both possibilities before recommending best practices for the use of AI, he added.

There has been a lot of discussion about how generative AI models like ChatGPT, which can write songs, summarize data, and generate code based on conversational prompts, might be used to enhance cybersecurity. Analysts are also concerned that hackers may use these platforms to produce sophisticated phishing emails or even viruses.

This week, a number of companies, including SentinelOne Inc., SecurityScorecard, and Alphabet Inc.’s Google Cloud division, discussed products that combine generative AI models known as large language models with their cybersecurity capabilities.

“You’re seeing a massive surge in the community deploying generative models,” Rob Joyce, director of the National Security Agency’s Cybersecurity Directorate, told reporters at the conference. The publicity blitz, he said, can raise risks.

Firms working to advance artificial intelligence should be concerned with protecting their intellectual property, he said.

Chinese hackers have a history of targeting intellectual property with both military and commercial significance, Mr. Joyce said. Beijing consistently denies stealing intellectual property.

Morgan Adamski, head of the NSA’s Cybersecurity Collaboration Center, which shares threat intelligence with companies in sectors such as the defense industrial base, said the agency is working to identify where key dependencies in the development of artificial intelligence lie.

These include which companies are at the forefront of AI and what makes up their supply chains, much as the agency evaluates other critical technologies such as semiconductor production.

Without a doubt, Ms. Adamski predicted, AI companies will face ongoing challenges similar to those seen with other technologies.
