- New Research Reveals How Adversarial Attacks Can Subvert Machine Learning Systems
A research paper published in the journal Science warns that advanced techniques can be used to throw machine learning (ML) systems off. The research details how adversarial attacks, techniques designed to subvert ML systems, can be crafted and deployed in the healthcare industry, where ML and artificial intelligence (AI) technologies are increasingly used. In one demonstration, altering a small number of pixels in an image of a benign skin lesion tricked a diagnostic AI system into identifying it as malignant.
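To see how little it can take to flip a classifier's verdict, the sketch below uses the fast gradient sign method (FGSM), one common way of crafting such pixel-level perturbations. The lesion classifier, labels, and perturbation budget are hypothetical; this illustrates the general technique, not the exact method used in the study.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Fast gradient sign method: nudge every pixel by at most +/- epsilon
    in the direction that increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # The per-pixel change is capped at epsilon, so the perturbed image is
    # visually indistinguishable from the original.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained lesion classifier (0 = benign, 1 = malignant):
# x_adv = fgsm_perturb(lesion_model, lesion_image, torch.tensor([0]))
# lesion_model(x_adv).argmax(dim=1) may now return 1 (malignant) even though
# x_adv looks identical to the original benign image.
```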
The risk of ML and AI systems being compromised by adversarial attacks isn't limited to the healthcare industry. Systems that are supposed to help protect enterprise resources and data, specifically ML systems used for security, can also fall victim to such attacks.
[Read: Clustering Malicious Network Flows With Machine Learning]
A recently published article on Dark Reading laid out the attack methods that cybercriminals can use to topple enterprises' security defenses. For example, Deep Exploit, an automated penetration testing tool that uses ML, can be used by attackers to pen test organizations and find security holes in their defenses in just 20 to 30 seconds. That speed is achieved through ML models that quickly ingest and analyze data and produce results optimized for the next attack stage.
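Deep Exploit's internals aside, the sketch below illustrates why model-driven attack selection is so fast: scan results are scored against success estimates learned from earlier runs and turned into a ranked action list in a single pass. The service fingerprints, exploit names, and probabilities are all invented for illustration and have no relation to the actual tool.

```python
import numpy as np

# Hypothetical per-exploit success estimates that a model has learned from
# earlier penetration tests; fingerprints, names, and numbers are made up.
EXPLOITS = ["exploit_a", "exploit_b", "exploit_c"]
SUCCESS_ESTIMATES = {
    "http/apache-2.4": np.array([0.72, 0.10, 0.35]),
    "smb/windows-2016": np.array([0.05, 0.81, 0.20]),
}

def rank_next_actions(scan_results):
    """Turn raw scan output into a ranked (host, exploit, score) list in one
    pass, replacing the manual triage that normally slows an attacker down."""
    ranked = []
    for host, fingerprint in scan_results:
        scores = SUCCESS_ESTIMATES.get(fingerprint)
        if scores is None:
            continue  # no learned estimate for this service
        best = int(np.argmax(scores))
        ranked.append((host, EXPLOITS[best], float(scores[best])))
    return sorted(ranked, key=lambda r: r[2], reverse=True)

print(rank_next_actions([("10.0.0.5", "smb/windows-2016"),
                         ("10.0.0.9", "http/apache-2.4")]))
```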
Cybercriminals can also carry out adversarial attacks by injecting corrupt data into an enterprise's computational algorithms and statistical models in order to confuse its ML models.
In the same vein, cybercriminals can poison ML training sets by injecting malware samples that resemble benign files. As a result, the model trained on that set becomes prone to false positives. Case in point: PTCH_NOPLE, a patch family malware that modifies dnsapi.dll, a Windows system file. Some ML systems experienced higher false positive rates because of benign dnsapi.dll files that had been infected with PTCH_NOPLE.
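As a rough illustration of why poisoning drives up false positives, the toy example below trains the same classifier on a clean set and on a set laced with malware samples crafted to look like benign files. All features, labels, and counts are synthetic; only the mechanism is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic feature vectors: benign files cluster around 0, real malware around 3.
benign = rng.normal(0.0, 1.0, size=(500, 5))
malware = rng.normal(3.0, 1.0, size=(500, 5))
X_clean = np.vstack([benign, malware])
y_clean = np.array([0] * 500 + [1] * 500)

# Poisoning: malware samples deliberately crafted to be statistically
# indistinguishable from benign files, injected into the training set.
poison = rng.normal(0.0, 1.0, size=(200, 5))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.ones(200, dtype=int)])

# Held-out benign files to measure the false positive rate.
benign_test = rng.normal(0.0, 1.0, size=(2000, 5))

for name, X, y in [("clean", X_clean, y_clean), ("poisoned", X_poisoned, y_poisoned)]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    fp_rate = model.predict(benign_test).mean()  # fraction of benign files flagged as malware
    print(f"{name} training set -> benign false positive rate: {fp_rate:.1%}")
```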
ML models in cybersecurity products can also be evaded through the use of an infected benign Portable Executable (PE) file or benign source code compiled together with malicious code. By doing this, malware developers can make a malware sample appear benign to an ML system and evade detection, since the sample's structure still consists mostly of that of the original benign file.
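A toy sketch of why this dilution works, assuming a made-up byte-histogram feature and a hypothetical linear detector: the small malicious payload barely shifts features that are dominated by the large benign host file.

```python
import numpy as np

rng = np.random.default_rng(1)

def byte_histogram(data: bytes) -> np.ndarray:
    """Toy static feature: the normalized 256-bin byte histogram of a file."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Hypothetical linear detector: random weights plus one strongly "suspicious"
# byte value (0x90) the model has supposedly learned to associate with malware.
weights = rng.normal(0.0, 0.5, size=256)
weights[0x90] = 40.0

def malicious_score(data: bytes) -> float:
    return float(1.0 / (1.0 + np.exp(-(weights @ byte_histogram(data)))))

payload = bytes([0x90]) * 4_000                                             # small malicious payload
benign_host = rng.integers(0, 256, 2_000_000, dtype=np.uint8).tobytes()     # large benign PE stand-in

print("payload alone: ", round(malicious_score(payload), 3))                # close to 1.0, flagged
print("benign host:   ", round(malicious_score(benign_host), 3))            # low baseline score
print("host + payload:", round(malicious_score(benign_host + payload), 3))  # barely above the baseline
# The 4 KB payload barely shifts the histogram of a 2 MB host file, so the
# combined sample scores almost the same as the clean host and slips under a
# threshold that would catch the payload on its own.
```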
[Read: Using Machine Learning to Detect Malware Outbreaks With Limited Samples]
ML systems used in security solutions should be protected against such adversarial attacks and evasion methods in order to maintain wide threat coverage and keep false positive rates low. Several mitigation techniques can help.
Hardening an ML system helps ward off adversarial attacks, and a more robust model delivers better detection and block rates. But ML isn't the only technology needed, and it doesn't solve every security problem on its own. A variety of security technologies should interoperate with it to create a multilayered defense, which is still the most effective way to deflect varying kinds of threats.
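One way to harden an ML system in this sense is adversarial training, in which the model is retrained on perturbed copies of its own inputs. The sketch below shows a single training step of that approach; the model, optimizer, and perturbation budget are placeholders, and this is one possible hardening technique rather than a description of any specific product.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One hardening step: augment the batch with FGSM-perturbed copies so the
    model learns to classify both clean and perturbed inputs correctly."""
    # Craft perturbed copies of the batch (same FGSM step as in the attack sketch).
    images = images.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the clean and adversarial versions together.
    optimizer.zero_grad()
    loss = 0.5 * (nn.functional.cross_entropy(model(images.detach()), labels)
                  + nn.functional.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The function would be called once per batch inside whatever training loop the defender already uses.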