There’s an old saying that technologies are neither good nor bad; it’s how they’re used that matters. If there were ever a tool to prove the point, AI is it. On the one hand, AI enables malicious actors to whip up fresh threats and launch attacks of unprecedented scale. On the other, it gives security teams whole new capabilities to strengthen their cyber defences.
Our 2025 Trend Micro Defenders Survey Report digs into these dual aspects of AI. Analysing more than 3,000 responses from 88 countries, it highlights the AI risks keeping cyber defenders up at night—and where cybersecurity teams see opportunities to enhance their security posture with AI tools.
Here are some of the highlights:
Fakes and fraud seen as top AI risks
More than a quarter (26%) of survey respondents told us that defending against fraud and AI-driven impersonation was their biggest priority when it comes to AI risk. Other big areas of focus are preventing AI application attacks, avoiding data and intellectual property leaks via AI tools, and gaining greater insight into employee use of AI solutions, whether sanctioned apps or ‘shadow AI.’
It’s fair to say all these priorities reflect a general anxiety about organisations’ understanding of—and ability to respond to—AI-based attacks.
The good news is that something can be done. Fifteen per cent (15%) of respondents said they’re already pursuing training and education to raise AI risk awareness. More than 10% have made it a priority to block unsanctioned app use. And 7% are focused on preventing overprivileged access to information.
Many are looking at other tactics and actions to manage AI responsibly and minimise potential risks.
Defenders are striking back
Zero-trust architectures, data security posture management (DSPM), and encryption were all reported as ways organisations are defending themselves against AI-related threats. Proactive testing is unfortunately less common, with just 6% of respondents saying they conduct regular AI audits or engage red teams to ensure their cyber protections are as effective as possible.
That said, it’s a positive sign that growing numbers of organisations seem to be involving their cybersecurity teams earlier in the AI adoption process, with 23% engaging security at the discovery stage and 25% at the pilot stage.
This ‘shift left’ is encouraging, even if there’s more work to be done—which there clearly is, given that 17% involve security only at implementation, when it may be too late, 10% don’t know when security gets involved, and 6% say security isn’t involved at all.
The other obvious area where security teams can make gains against AI threats is by adopting AI tools. While that’s starting to happen, a few obstacles need to be cleared away.
Confidence in AI for cybersecurity
Just under 20% of survey respondents said their organisation hasn’t started using AI-based cybersecurity tools yet. About the same proportion said they have some nagging concerns about AI’s accuracy and reliability, and slightly fewer cited privacy risks as a reason they might be holding back.
Certainly, any team hoping to deploy AI defences and then go for an extended coffee break should have misgivings. AI tools need to be monitored and trained. But the gold is in the training, and the sooner they’re put to use and start learning, the more effective they’ll become.
Another key to success is ensuring that the use of AI cybersecurity tools and strategies aligns with the needs of the business. Nineteen per cent (19%) of survey respondents said their biggest challenge is identifying relevant and valuable use cases. Doing so requires at least two things: one, having technical and cybersecurity leaders engage in business-focused conversations with executives to uncover where security and business goals overlap; and two, developing a strategic, organisation-wide practice of cyber risk management that can inform where and how AI tools are needed.
AI risk is central to cyber risk management
In case you missed it, our previous blog on the 2025 Trend Micro Defenders Survey Report puts these issues of AI risk into the broader context of overall cyber risk and how organisations are dealing with it. We invite you to check that out and, of course, to download the full report.
This year’s results make it clear that AI is, and will continue to be, a key part of cyber risk management. The question is not, “Is AI our friend or foe?” It’s: “Where are the greatest AI risks we face, how can we respond to them—and how can we use AI to turn the tables on bad actors?”
The answer, as indicated in the quick summary of survey findings here, includes greater awareness and more training, more mature company policies, involving security teams as early as possible in AI adoption, and taking advantage of the new, advanced cybersecurity capabilities AI has to offer.
In our next blog, we’ll shift perspective and take a look at the ways organisations are maturing their approach to cloud risk management. Stay tuned.
Next steps
Learn more about ways to manage cloud risk from these additional resources: