New Research Reveals How Adversarial Attacks Can Subvert Machine Learning Systems
A research paper published in the journal Science warns that advanced techniques can be used to mislead machine learning (ML) systems. The research details how adversarial attacks, inputs deliberately crafted to subvert ML systems, can be created and deployed against the healthcare industry, where the use of ML and artificial intelligence (AI) technologies has been increasing. In one demonstration, altering a small number of pixels in an image of a benign skin lesion tricked a diagnostic AI system into identifying it as malignant.
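To illustrate the general idea behind such pixel-level attacks, below is a minimal sketch of a gradient-based (FGSM-style) perturbation against a toy linear "diagnostic" model. The model, weights, and image are stand-ins invented for illustration; this is not the code or model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "diagnostic" model: logistic regression over flattened pixel values.
# The weights are random stand-ins; a real attack would target a trained model.
n_pixels = 32 * 32
weights = rng.normal(size=n_pixels)

# A made-up "benign lesion" image with pixel values in [0, 1].
image = rng.uniform(0.0, 1.0, size=(32, 32))

# Pick the bias so the clean image is confidently scored as benign.
bias = -(image.flatten() @ weights) - 3.0

def malignant_score(img: np.ndarray) -> float:
    z = img.flatten() @ weights + bias
    return float(1.0 / (1.0 + np.exp(-z)))

# FGSM-style step: nudge every pixel slightly in the direction that raises
# the malignant score. For a linear model, that direction is sign(weights).
epsilon = 0.05  # small, visually negligible change per pixel
adversarial = np.clip(image + epsilon * np.sign(weights).reshape(32, 32), 0.0, 1.0)

print("malignant score, original:   ", malignant_score(image))
print("malignant score, adversarial:", malignant_score(adversarial))
```

Even though each pixel changes by at most 0.05, the many small changes accumulate and flip the model's verdict, which is the core weakness adversarial attacks exploit.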
The risk of ML and AI systems being compromised by adversarial attacks is not limited to the healthcare industry. Systems that are supposed to help protect enterprise resources and data, specifically ML systems used for security, can also be vulnerable to such attacks.
[Read: Clustering Malicious Network Flows With Machine Learning]
Adversarial Attacks on ML Systems Used for Security
A recently published article on Dark Reading laid out attack methods that cybercriminals can use to defeat enterprises' security defenses. For example, Deep Exploit, an automated penetration testing tool that uses ML, can be abused by attackers to pen test organizations and find security holes in their defenses in as little as 20 to 30 seconds. This speed is achieved through ML models that quickly ingest and analyze data from each step and use the results to optimize the next attack stage.
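The sketch below illustrates that general idea: a model scores candidate follow-up actions from the data gathered so far and picks the most promising one. The feature names, candidate actions, and hand-tuned scores are hypothetical and are not Deep Exploit's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical features collected during an automated scan of one host.
@dataclass
class HostObservation:
    open_ports: set[int]
    banner_shows_old_ssh: bool
    runs_unpatched_web_app: bool

# Hand-tuned scores standing in for a trained model that maps observations
# to the expected payoff of each possible next attack stage.
def score_next_actions(obs: HostObservation) -> dict[str, float]:
    return {
        "try_ssh_credential_stuffing": 0.7 * obs.banner_shows_old_ssh
        + 0.1 * (22 in obs.open_ports),
        "exploit_web_app": 0.9 * obs.runs_unpatched_web_app
        + 0.1 * (80 in obs.open_ports or 443 in obs.open_ports),
        "scan_more_ports": 0.2,
    }

obs = HostObservation(open_ports={22, 80}, banner_shows_old_ssh=False,
                      runs_unpatched_web_app=True)
scores = score_next_actions(obs)
best = max(scores, key=scores.get)
print("next attack stage:", best, scores[best])
```

The point is the workflow, not the specific numbers: by automating the ingest-score-act loop, an attacker removes the slow, manual decision-making from penetration testing.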
Cybercriminals can also carry out adversarial attacks by injecting corrupted data into the algorithms and statistical models that enterprises rely on, confusing their ML models.
In the same vein, cybercriminals can poison ML training sets by injecting malware samples that closely resemble benign files. A model trained on such a set becomes prone to false positives. Case in point: PTCH_NOPLE, malware from a patch family that modifies dnsapi.dll, a Windows file. Some ML systems experienced higher false positive rates because benign dnsapi.dll files infected with PTCH_NOPLE closely resemble the clean file, so training on them as malware can lead clean copies to be flagged.
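As a rough illustration of how near-benign malicious samples in a training set can raise false positives, the sketch below trains a toy classifier on synthetic two-dimensional "file features." The features, clusters, and classifier are invented for illustration and do not represent any real detection model.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Synthetic 2-D "file features": benign files cluster around (0, 0),
# typical malware clusters around (4, 4).
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
malware = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))

# Poisoned samples: malicious files deliberately crafted to sit almost on
# top of the benign cluster (e.g., infected copies of a benign DLL).
poisoned = rng.normal(loc=[0.3, 0.3], scale=0.3, size=(50, 2))

def false_positive_rate(extra_malicious: np.ndarray) -> float:
    X = np.vstack([benign, malware, extra_malicious])
    y = np.array([0] * len(benign) + [1] * (len(malware) + len(extra_malicious)))
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    clean_benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(1000, 2))
    return float(clf.predict(clean_benign).mean())

print("FP rate with a clean training set:   ", false_positive_rate(np.empty((0, 2))))
print("FP rate with a poisoned training set:", false_positive_rate(poisoned))
```

Because the poisoned samples sit inside the benign region, the trained model starts flagging clean files that happen to look like them, which is exactly the effect seen with infected dnsapi.dll samples.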
ML models in cybersecurity products can also be evaded by infecting a benign Portable Executable (PE) file or by compiling benign source code together with malicious code. By doing this, malware developers can make a malware sample appear benign to an ML system and evade detection, since most of its structure is still that of the original benign file.
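The sketch below shows why this works against naive feature-based detection: a file that is mostly benign content with a small payload appended still looks almost identical to the original in simple features such as a byte-frequency histogram. The files, payload, and similarity threshold are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalized 256-bin byte-frequency histogram, a common simple file feature."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / counts.sum()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in for a known-benign PE file (1 MB of placeholder content).
benign_file = rng.integers(0, 256, size=1_000_000, dtype=np.uint8).tobytes()

# "Infected" variant: the same benign file with a small payload appended.
# The payload here is just placeholder bytes, not real malicious code.
payload = bytes([0x90] * 4_096)
infected_file = benign_file + payload

similarity = cosine_similarity(byte_histogram(benign_file),
                               byte_histogram(infected_file))
print(f"feature similarity to the original benign file: {similarity:.4f}")
# A model that leans heavily on such global file features would still score
# the infected file as benign, because the vast majority of its bytes are unchanged.
```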
[Read: Using Machine Learning to Detect Malware Outbreaks With Limited Samples]
Countermeasures to Adopt to Protect ML Security Systems
ML systems used in security solutions should be protected from adversarial attacks and evasion methods so that they can maintain wide threat coverage and low false positive rates. Some mitigation techniques:
- To reduce the attack surface of the ML system, a defense should be set up at the infrastructure level. A cybercriminal can use a free trial product with a local ML model to repeatedly modify samples and probe the system. To make the ML system less susceptible to probing, cloud-based solutions, such as products with Trend Micro™ XGen™ security, can be used to detect and block malicious probing. If a probe attempt takes place, the solution can show fake results to the attacker or terminate the product or service associated with the account the attacker is using (see the probing-defense sketch after this list).
- An ML system should also be made more robust so that it can withstand adversarial attacks. To do this, potential security holes should be identified early in the design phase and every parameter should be validated for accuracy. The ML model should then be retrained on adversarial samples generated against it, and this retraining should continue throughout the ML system's lifecycle (see the adversarial retraining sketch after this list).
- To reduce false positives, security solutions that employ ML for both detection and whitelisting should be used. Trend Micro XGen security uses the Trend Micro Locality Sensitive Hash (TLSH), a method that generates a hash value that can be compared with others to measure file similarity (see the TLSH example after this list).
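As a rough sketch of the probing defense in the first item, the snippet below counts sample submissions per account and, once a threshold suggesting automated probing is exceeded, returns randomized fake verdicts or disables the account. The thresholds, fake-verdict behavior, and data structures are assumptions for illustration, not how XGen security is actually implemented.

```python
import random
from collections import defaultdict

PROBE_THRESHOLD = 20   # submissions per account before verdicts are faked
LOCK_THRESHOLD = 100   # submissions per account before the account is disabled

submission_counts = defaultdict(int)   # account_id -> submission count
locked_accounts = set()

def classify(sample: bytes) -> str:
    """Stand-in for the real cloud-side ML verdict."""
    return "malicious" if b"bad" in sample else "benign"

def handle_submission(account_id: str, sample: bytes) -> str:
    if account_id in locked_accounts:
        return "account disabled"
    submission_counts[account_id] += 1
    count = submission_counts[account_id]
    if count > LOCK_THRESHOLD:
        locked_accounts.add(account_id)
        return "account disabled"
    if count > PROBE_THRESHOLD:
        # Suspected probing: return a fake verdict so the attacker cannot
        # use the responses to tune evasive samples.
        return random.choice(["benign", "malicious"])
    return classify(sample)
```

Keeping the real model in the cloud means the attacker never gets unlimited, trustworthy feedback from a local copy, which is what makes sample-by-sample probing practical.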
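The second item describes an adversarial retraining loop. Below is a minimal sketch of that loop using a toy linear classifier and an FGSM-style evasion attack; the data, features, and attack are illustrative assumptions, not a production pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "benign vs. malware" data: 20 numeric file features per sample.
X_benign = rng.normal(0.0, 1.0, size=(500, 20))
X_malware = rng.normal(1.5, 1.0, size=(500, 20))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)

def make_adversarial(model, X_mal, epsilon=1.0):
    """FGSM-style evasion: shift malicious samples against the linear model's
    weights so they score as benign."""
    return X_mal - epsilon * np.sign(model.coef_[0])

X_adv = make_adversarial(model, X_malware)
print("evasion rate before retraining:", (model.predict(X_adv) == 0).mean())

# Retrain with the generated adversarial samples labeled as malicious.
# In practice this generate-and-retrain loop is repeated throughout the
# model's lifecycle as new evasion techniques appear.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, np.ones(len(X_adv), dtype=int)])
model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print("evasion rate after retraining: ", (model.predict(X_adv) == 0).mean())
```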
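For the third item, TLSH is published by Trend Micro as an open-source library. The snippet below assumes the py-tlsh Python package (`pip install py-tlsh`), which exposes `tlsh.hash()` and `tlsh.diff()`; the sample data is random placeholder content. Lower diff scores mean more similar files.

```python
import os
import tlsh  # pip install py-tlsh

# Two files that share most of their content: a base file and a copy with a
# small block of extra bytes appended (e.g., a slightly modified library).
base = os.urandom(8192)
modified = base + os.urandom(256)
unrelated = os.urandom(8192)

h_base = tlsh.hash(base)
h_modified = tlsh.hash(modified)
h_unrelated = tlsh.hash(unrelated)

# tlsh.diff() returns a distance score: lower means more similar.
print("base vs. modified: ", tlsh.diff(h_base, h_modified))
print("base vs. unrelated:", tlsh.diff(h_base, h_unrelated))
```

In a whitelisting workflow, a file whose TLSH digest is very close to that of a known-good file can be treated as a likely benign variant rather than flagged outright, which helps keep false positives down.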
Adversarial attacks can be warded off when an ML system is hardened in these ways. But while a robust ML system delivers improved detection and blocking rates, no single technology solves every security problem. A variety of security technologies should interoperate with ML to create a multilayered defense, which remains the most effective approach to deflecting the wide range of threats.