
Weaponizing Machine Learning Models With Ransomware

Machine learning models are a new attack vector for software supply chain defenders to be concerned about.

Face recognition and chatbots, for example, are built on machine learning models. Developers and data scientists download and share these models much as they do open-source code, so a single compromised model could have a disastrous effect on numerous organizations at once.

An attack using a well-known ML model to spread ransomware was described in a blog post on Tuesday by researchers at the machine learning security firm HiddenLayer.

The researchers’ technique is comparable to the way hackers use steganography to conceal harmful payloads in images: the malicious code is hidden in the model’s own data.

The researchers say the steganography procedure is fairly general and can be used with most ML libraries. They added that the method could be used to exfiltrate data from an organization, in addition to inserting malicious code into a model.
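HiddenLayer has not published its exact tooling in this article, but the general least-significant-bit idea behind weight steganography is easy to illustrate. The sketch below is a minimal, assumption-laden example using a NumPy float32 tensor and a harmless marker string as the hidden data; it shows why the alteration is effectively invisible to accuracy checks or casual inspection.

```python
import numpy as np

def embed_bytes_lsb(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant mantissa bit of float32 weights.

    Illustrative sketch only: a benign marker string stands in for whatever
    data might be smuggled into a model file (watermarks use the same trick).
    """
    flat = weights.astype(np.float32).ravel().copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("payload too large for this tensor")
    as_int = flat.view(np.uint32)                      # reinterpret bits, not values
    as_int[: bits.size] = (as_int[: bits.size] & ~np.uint32(1)) | bits
    return as_int.view(np.float32).reshape(weights.shape)

def extract_bytes_lsb(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed_bytes_lsb."""
    bits = weights.astype(np.float32).ravel().view(np.uint32)[: n_bytes * 8] & np.uint32(1)
    return np.packbits(bits.astype(np.uint8)).tobytes()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256)).astype(np.float32)
    marker = b"benign-marker"
    stego = embed_bytes_lsb(w, marker)
    assert extract_bytes_lsb(stego, len(marker)) == marker
    # Flipping the lowest mantissa bit changes each weight by roughly one part
    # in 10 million, which is why the tampering does not show up in model metrics.
    print("max absolute change:", np.abs(stego - w).max())
```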

Attacks can also be carried out regardless of the operating system. According to the researchers, OS- and architecture-specific payloads could be embedded in the model and loaded dynamically at runtime, depending on the target platform.

Flying Under the Radar

Tom Bonner, senior director of adversarial threat research at the Austin, Texas-based HiddenLayer, noted that embedding malware in an ML model has some advantages for an adversary.

It allows them to fly under the radar, Bonner said, because current antivirus and EDR software does not detect the technique. It also opens up new targets by providing direct access to the systems data scientists use: an attacker can compromise a machine learning model stored in a public repository, and the data scientists who download and load it are compromised in turn.

The fact that these models can be downloaded onto machine-learning operations platforms with access to Amazon S3 buckets and training data is quite alarming. Bitcoin miners could also be very successful on those systems, he continued, because the majority of machines running machine learning models have big, fat GPUs in them.
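For the download-and-load scenario Bonner describes, one practical precaution is to refuse to deserialize arbitrary objects from untrusted checkpoints. The sketch below assumes a PyTorch workflow; it is not a mitigation the researchers prescribe, and it does not inspect the weight values themselves.

```python
# Minimal defensive sketch, assuming a PyTorch workflow: avoid executing
# arbitrary code when loading checkpoints pulled from public repositories.
import torch

def load_untrusted_checkpoint(path: str):
    """Load only tensors and primitive containers; refuse pickled arbitrary objects.

    torch.load with weights_only=True restricts unpickling to a safe allowlist,
    so a checkpoint that tries to smuggle in executable objects raises an error
    instead of running them.
    """
    return torch.load(path, map_location="cpu", weights_only=True)

# Preferring the safetensors format avoids pickle entirely:
# from safetensors.torch import load_file
# state_dict = load_file("model.safetensors", device="cpu")
```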

First-Mover Advantage

Threat actors frequently prefer to exploit unexpected vulnerabilities in new technologies, according to Chris Clements, vice president of solutions architecture at a cybersecurity firm based in Scottsdale, Ariz.

Attackers looking for a first-mover advantage in these frontiers can benefit because organizations adopting new technologies often have less readiness and fewer proactive protections in place, he said.

This attack on machine learning models may be the next move in the cat-and-mouse game between attackers and defenders, he suggested.

Threat actors will use any available vectors to carry out their attacks, according to Mike Parkin, senior technical engineer at Tel Aviv, Israel-based Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation.

If used carefully, he said, this unusual vector could slip past a number of widely used tools.

According to Morey Haber, chief security officer at BeyondTrust, a maker of privileged account management and vulnerability management solutions in Carlsbad, California, traditional anti-malware and endpoint detection and response (EDR) solutions are designed to detect ransomware through pattern-based behaviors. That includes virus signatures and monitoring key API, file, and registry requests on Windows for potentially malicious activity.

If machine learning is applied to the distribution of malware like ransomware, traditional attack vectors, and even detection methods, can be altered to appear non-malicious, Haber said.
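As a hedged illustration of what a model-aware check could look like (this is not HiddenLayer's or BeyondTrust's tooling), the sketch below lists the pickle opcodes inside a serialized checkpoint and flags those capable of importing or calling arbitrary Python objects. Legitimate checkpoints also use some of these opcodes for tensor reconstruction, so real scanners pair this kind of check with an allowlist of known-safe globals.

```python
# Hedged illustration: flag pickle opcodes that can import or call arbitrary
# Python objects inside a serialized model file. Handles raw .pkl files and
# zip-based PyTorch checkpoints (which store their pickle as a .pkl member).
import pickletools
import zipfile

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def suspicious_opcodes(raw: bytes) -> list[str]:
    hits = []
    try:
        for opcode, arg, pos in pickletools.genops(raw):
            if opcode.name in SUSPICIOUS:
                hits.append(f"{opcode.name} at byte {pos}: {arg!r}")
    except Exception as exc:  # a truncated or malformed stream is itself a red flag
        hits.append(f"could not fully parse pickle stream: {exc}")
    return hits

def scan_model_file(path: str) -> list[str]:
    if zipfile.is_zipfile(path):
        hits = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    hits += [f"{name}: {h}" for h in suspicious_opcodes(zf.read(name))]
        return hits
    with open(path, "rb") as fh:
        return suspicious_opcodes(fh.read())

if __name__ == "__main__":
    for hit in scan_model_file("downloaded_model.pt"):  # hypothetical file name
        print(hit)
```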

Possibility of Significant Damage

Karen Crowley, director of product solutions at Deep Instinct, a deep-learning cybersecurity firm in New York City, noted that attacks on machine learning models are on the rise.

Although such activity isn’t yet significant, Crowley warned, there is potential for widespread damage.

If data in the supply chain is poisoned, the models trained on it are tainted as well, and those models may end up making decisions that weaken security rather than strengthen it, she said.

With Log4j and SolarWinds, she noted, the impact was felt not only by the organization that owned the software but by all of its users down the chain. Once ML is introduced, that damage could multiply quickly.

Attacks on ML models may be a subset of a larger trend of attacks on software supply chains, according to Casey Ellis, CTO and founder of Bugcrowd, which manages a crowdsourced bug bounty platform.

Just as adversaries may try to compromise the supply chain of software applications to insert malicious code or vulnerabilities, Ellis said, they may target the supply chain of machine learning models to insert false or biased data or algorithms.

The reliability and integrity of AI systems may be significantly impacted by this, and it may also be used to erode confidence in the technology, he warned.

Script Kiddie Pablum

Threat actors may take a greater interest in machine learning models now that they have been shown to be more vulnerable than previously thought.

People have known for some time that this is possible, Bonner said, but they did not realize how simple it is: an attack can be put together with a few straightforward scripts.

Now that people are aware of how simple it is, pulling it off is within the reach of script kiddies, he continued.

Clements concurred that the research demonstrates malicious commands can be inserted into training data and then triggered by ML models at runtime, without requiring deep ML/AI data science expertise.

But it does require more sophistication than standard ransomware attacks, which typically rely on easy credential stuffing or phishing to launch, he continued.

At this time, he believes the popularity of this specific attack vector is likely to remain low for the foreseeable future.

To exploit this, an attacker must compromise an upstream ML model project used by downstream developers, trick the victim into downloading a pre-trained model with embedded malicious commands from an unofficial source, or compromise the private dataset ML developers use in order to insert the exploits, he explained.

In each of these scenarios, he continued, it seems there would be much simpler and more direct ways to compromise the target than adding obfuscated exploits to training data.
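For the unofficial-source scenario Clements mentions, a basic supply-chain hygiene step is to verify a downloaded model artifact against a checksum published by the upstream project before ever deserializing it. The sketch below uses a hypothetical file name and a placeholder digest.

```python
# Hedged sketch: refuse to load a downloaded model artifact unless its SHA-256
# digest matches one published by the upstream project. The expected value and
# file name below are placeholders, not real project data.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED_DIGEST = "0" * 64  # placeholder for the publisher's published checksum
MODEL_PATH = "pretrained_model.pt"  # hypothetical file name

if sha256_of(MODEL_PATH) != EXPECTED_DIGEST:
    raise SystemExit(f"Refusing to load {MODEL_PATH}: digest mismatch")
```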
