As far back as 2022, global security leaders have been telling us that their digital attack surface is spiralling out of control. If anything, these concerns are even more pressing today, as more organisations turn to AI apps and large language models (LLMs) to transform business operations. While AI can be transformative for these organisations, it also presents an attractive target for threat actors.
That’s why Trend Micro is leading from the front in the global push to secure AI, with deep-dive research to share with the community and an AI-powered cybersecurity platform of our own that enhances Cyber Risk Exposure Management (CREM). Our latest effort is a new case study submission to the world-renowned MITRE ATLAS framework, which will hopefully help countless organisations improve their cyber resilience.
Trend Micro’s case study, AML.CS0028, marks a major milestone. It is the first ATLAS study to document a cloud and container-based attack path against AI infrastructure, offering defenders a critical new playbook to improve their incident preparedness. Only 31 case studies have been accepted into MITRE ATLAS since 2020, making this contribution both rare and highly impactful.
Based on a real-world scenario, the case study chronicles how a supply chain compromise could poison an AI model’s development pipeline—mapping each stage of the attack to ATLAS techniques. The research also uncovered systemic risks in cloud environments that support AI models, giving defenders concrete insights to better protect emerging AI systems.
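As a purely illustrative aside (not part of the case study itself), here is a minimal sketch of one simple control a defender might put in front of such a pipeline: refusing to load a model artifact unless its digest matches a pinned, known-good value recorded at build time. The file name and pinned digest below are placeholders.

```python
# Minimal, hypothetical sketch: refuse to load a model artifact unless its
# SHA-256 digest matches a pinned, known-good value recorded when the model
# was originally built. The path and pinned digest are placeholders.
import hashlib
import sys

# Replace with the digest captured at build/review time.
PINNED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Stream the file so large model artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str) -> None:
    actual = sha256_of(path)
    if actual != PINNED_DIGEST:
        # A mismatch means this is not the artifact that was built and
        # reviewed -- treat it as a possible supply chain compromise.
        sys.exit(f"Refusing to load {path}: digest {actual} does not match pinned value")

if __name__ == "__main__":
    verify_model("model.onnx")  # hypothetical artifact name
```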
AI threats on the rise
According to one estimate, generative AI (GenAI) could add the equivalent of $2.6-4.4 trillion annually to the global economy. But as more organisations build out AI infrastructure and embed the technology into more business-critical processes, they could also be exposed to the risk of sensitive data compromise, extortion, and sabotage in new ways.
We’ve highlighted this in the past, noting countless vulnerabilities and misconfigurations in AI components such as vector stores, LLM-hosting platforms, and other open-source software. Among other things, organisations fear that threat actors could steal training data for profit, poison it to compromise an LLM’s output and integrity, or steal the models themselves.
In developing AML.CS0028, we uncovered disturbing trends:
- Over 8,000 exposed container registries were found online—double the number observed in 2023.
- 70% of these registries allowed push (write) permissions, meaning attackers could inject malicious AI models.
- Within these registries, 1,453 AI models were identified, many in Open Neural Network Exchange (ONNX) format, with vulnerabilities that could be exploited.
This sharp growth reflects a broader trend: attackers are increasingly targeting the underlying infrastructure supporting AI, not just the AI models themselves.
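To illustrate the kind of exposure behind those numbers, here is a minimal sketch of how a team might audit a registry it owns (or is authorised to test) for anonymous read and write access over the standard Docker Registry HTTP API v2. The registry host and probe repository name are placeholders, and this is an illustrative check rather than the methodology used in our research.

```python
# Minimal sketch (illustrative only): audit a container registry that you
# own or are authorised to test for anonymous read and write access, using
# the standard Docker Registry HTTP API v2. The host below is a placeholder.
from urllib.parse import urljoin

import requests

REGISTRY = "https://registry.example.internal:5000"  # hypothetical host
TIMEOUT = 10

def anonymous_read_allowed() -> bool:
    """True if the repository catalog can be listed without credentials."""
    resp = requests.get(f"{REGISTRY}/v2/_catalog", timeout=TIMEOUT)
    return resp.status_code == 200

def anonymous_push_allowed(repo: str = "exposure-check/probe") -> bool:
    """True if the registry accepts an unauthenticated blob upload session.

    A 202 response here means an anonymous client could start pushing
    layers -- the write exposure described in the findings above.
    """
    resp = requests.post(f"{REGISTRY}/v2/{repo}/blobs/uploads/", timeout=TIMEOUT)
    if resp.status_code == 202 and "Location" in resp.headers:
        # Tidy up by cancelling the upload session we just opened.
        requests.delete(urljoin(REGISTRY, resp.headers["Location"]), timeout=TIMEOUT)
        return True
    return False

if __name__ == "__main__":
    print("anonymous pull possible:", anonymous_read_allowed())
    print("anonymous push possible:", anonymous_push_allowed())
```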
Turning research into action
Fortunately, Trend Micro’s global team of forward-looking threat researchers is always on the hunt for new threat actor tactics, techniques, and procedures (TTPs). The more we know, the more we can help network defenders enhance cyber resilience and improve their detection, protection, and response efforts.
We’ve submitted our latest discovery to MITRE ATLAS. The case study (AML.CS0028) is based on a real-world data poisoning attack against a container-hosted AI model in the cloud. As part of our research, we discovered over 8,000 exposed container registries, 70% of which allowed write access, and 1,453 AI models that could also have been exploited.
This is the first ATLAS case study to involve both cloud and container infrastructure in a sophisticated supply chain compromise. Only 31 studies have been accepted by the non-profit since 2020, so we’re thrilled to be making a positive contribution to the security community with this submission.
Fighting the good fight together
As one would expect from Trend’s star team of expert researchers, this case study stands out from the crowd in both scope and technical depth. We’re confident that its publication in MITRE ATLAS will help make the digital world safer, for several reasons:
- The study is encoded in ATLAS YAML, allowing easy integration into tools already aligned with MITRE ATT&CK (a minimal parsing sketch follows this list).
- It provides a reproducible scenario that defenders can simulate to improve threat detection and incident response planning.
- It contributes to MITRE’s Secure AI initiative, encouraging others to share anonymised incidents and help grow a collective understanding of AI threats.
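As an illustration of the first point above, here is a minimal sketch of how a defender might load an ATLAS case-study YAML file and pull out the tactic and technique IDs for use in ATT&CK-aligned tooling. The file name and field names (such as `procedure`, `tactic`, and `technique`) reflect our assumptions about the ATLAS data layout rather than a guaranteed schema.

```python
# Minimal sketch (illustrative only): load an ATLAS case-study YAML file and
# pull out the tactic/technique IDs from its procedure steps so they can be
# fed into ATT&CK-aligned tooling. Field names are assumptions, not a schema.
import yaml  # pip install pyyaml

def extract_techniques(path: str) -> list[tuple[str, str]]:
    """Return (tactic ID, technique ID) pairs from the case study's procedure."""
    with open(path, "r", encoding="utf-8") as fh:
        case_study = yaml.safe_load(fh)
    steps = case_study.get("procedure", [])
    return [(step.get("tactic", ""), step.get("technique", "")) for step in steps]

if __name__ == "__main__":
    for tactic, technique in extract_techniques("AML.CS0028.yaml"):  # hypothetical file name
        print(f"{tactic} -> {technique}")
```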
At Trend Micro, we never forget that cybersecurity is a team sport. That’s why our threat research and product development efforts go toward protecting not just our customers but all technology users. It’s the same philosophy that spurred us to create a specialised Pwn2Own AI competition later this year, which will help surface new vulnerabilities in some of the world’s most popular AI components.
With MITRE ATLAS, we have another way to make a positive impact on the global cybersecurity landscape.