Research
- Over the years, Trend Micro researchers have published articles and research papers detailing criminal underground communities around the world. Read about their motives, ecosystems, business models, and techniques to anticipate and proactively counter threats before they strike.
- Our latest research provides a framework for understanding agentic AI systems, outlines their core characteristics, and examines the security implications surrounding their use.
- Trend Vision One™ tackles 7 of OWASP's Top 10 LLM vulnerabilities, offering comprehensive protection against prompt injection, data leakage, AI supply chain risks, and other critical flaws.
- Our research examines how AI coding assistants can hallucinate plausible but non-existent package names, thereby enabling slopsquatting attacks, and provides practical defense strategies that organizations can implement to secure their development pipelines.
- To conclude our series on agentic AI, this article examines emerging vulnerabilities that threaten AI agents, offering proactive security recommendations for areas such as code execution, data exfiltration, and database access.
- How can attackers exploit weaknesses in database-enabled AI agents? This research explores how SQL generation vulnerabilities, stored prompt injection, and vector store poisoning can be weaponized for fraudulent activities.
- In the third part of our series, we demonstrate how risk intensifies in multimodal AI agents, where hidden instructions embedded in innocuous-looking images or documents can trigger sensitive data exfiltration without any user interaction.
- Our research examines vulnerabilities affecting Large Language Model (LLM)-powered agents with code execution, document upload, and internet access capabilities. This is the second part of a series diving into the critical vulnerabilities in AI agents.
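
One defense against slopsquatting mentioned above is verifying AI-suggested dependencies before installation. The sketch below is a minimal, hypothetical illustration (the package names and `KNOWN_PACKAGES` allowlist are invented for the example, not from the research): it gates installs on a vetted list, such as an internal mirror or a lockfile, so a hallucinated package name is rejected rather than fetched from a public index where an attacker may have registered it.

```python
# Hypothetical sketch: block slopsquatting by checking every package an AI
# coding assistant suggests against a vetted allowlist before installing it.
# In practice the allowlist would come from a lockfile or internal mirror;
# here it is a hard-coded stand-in.

KNOWN_PACKAGES = {"requests", "numpy", "pandas"}  # stand-in for a real vetted index

def is_vetted(package_name: str) -> bool:
    """Return True only if the suggested package is on the approved list."""
    return package_name.lower() in KNOWN_PACKAGES

# "reqeusts-toolbelt-pro" is an invented, plausible-looking hallucination.
suggestions = ["requests", "reqeusts-toolbelt-pro"]
for name in suggestions:
    if is_vetted(name):
        print(f"OK to install: {name}")
    else:
        print(f"Blocked (not vetted, possible slopsquat): {name}")
```

Pinning dependencies and resolving them only through a curated internal index achieves the same effect at the tooling level, without per-suggestion checks in code.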