AI
August 04, 2025

As large language models (LLMs) become more capable and more widely adopted, the risk of unintended or adversarial outputs grows, especially in security-sensitive contexts. To identify and mitigate such risks, Trend Micro researchers ran LLM security scans that simulate adversarial attacks.