AI

LLM as a Judge: Evaluating Accuracy in LLM Security Scans

04 de agosto de 2025

As large language models (LLMs) become more capable and widely adopted, the risk of unintended or adversarial outputs grows, especially within a security-sensitive context. To identify and mitigate such risks, Trend Micro researchers ran LLM security scans that simulate adversarial attacks.

Leer más