Trend Micro, a global leader in cybersecurity and enterprise security solutions, in collaboration with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol’s European Cybercrime Centre (EC3), has released “Malicious Uses and Abuses of Artificial Intelligence”, a report that aims to provide an exhaustive look at the present and possible future malicious uses and abuses of AI and related technologies.
The report explores the difference between malicious uses and abuses of AI. It is framed around two main components: present malicious uses or abuses of AI, and research. The first component thoroughly explores documented and researched cases, while the second discusses malicious uses or abuses for which no evidence or literature yet exists.
To gain more insight into what the malicious AI threat landscape might look like in the near future, existing trends found in underground forums were also examined. The study also identified possible countermeasures.
One highlight of the report discusses how autonomous cars are at risk of malicious use and abuse, especially in the era of 5G.
Autonomous cars use a wide range of sensors, such as cameras, to perform ML-guided image recognition of signs and other elements of their surroundings. The ML models then determine the appropriate behavior for the vehicle to take. Autonomous cars are also programmed to obey the laws of the road. According to the report, however, these predictable behaviors can be exploited to attack the vehicles’ AI models.
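To make the risk concrete, one well-known class of attacks on image classifiers is the adversarial perturbation, where a small, carefully chosen change to the input flips the model’s prediction. The report does not provide code; the following is a minimal sketch of the idea using a toy linear “sign classifier” in NumPy, with all weights, inputs, and the “stop sign” label invented purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a sign-recognition model.
# The weights and the 8-pixel "image" below are illustrative only.
w = np.array([0.8, -0.5, 0.3, 0.6, -0.4, 0.7, -0.2, 0.9])  # "learned" weights
x = np.array([0.4, -0.1, 0.2, 0.3, -0.2, 0.1, 0.0, 0.5])   # correctly classified input

def predict(image):
    """Probability the toy model assigns to the 'stop sign' class."""
    return sigmoid(w @ image)

# Fast-gradient-sign-style attack: for a linear model, the gradient of the
# score with respect to the input is just w, so subtracting
# epsilon * sign(w) moves the input against the classifier's decision.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # above 0.5: classified as a stop sign
print(predict(x_adv))  # below 0.5: the small perturbation flips the label
```

The point of the sketch is that the perturbation is bounded per pixel (here by `epsilon`), so the altered image can remain visually similar to the original while the model’s output changes, which is what makes such attacks on sign recognition plausible in practice.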
The study illustrated how an autonomous vehicle can be exploited: traps can be placed in crime-convenient spots covered by a jammer, giving criminals the opportunity to take advantage of the vehicle. According to the study, preventing such attacks is quite complex, while the attack itself requires only a small number of autonomous vehicles to be fully effective.
“AI-facilitated attacks targeting connected vehicle ecosystems, including telematics, infotainment (such as podcasts), and Vehicle-to-Everything (V2X) communication systems, might result in vehicle immobilization, road accidents, financial losses, and disclosure of sensitive or personal data,” the study explained.
In addition, the study covered the current state of malicious uses and abuses of AI, including AI malware, AI-supported password guessing, and AI-aided encryption and social engineering attacks. It also discussed future scenarios involving automated content generation and parsing, AI-aided reconnaissance, and other smart and connected technologies.
To read the full report, click here.
Author: Ericka Pingol