LLM as a Judge: Evaluating Accuracy in LLM Security Scans

August 04, 2025

As large language models (LLMs) become more capable and widely adopted, the risk of unintended or adversarial outputs grows, especially in security-sensitive contexts. To identify and mitigate such risks, Trend Micro researchers ran LLM security scans that simulate adversarial attacks.