AI is fundamentally transforming the modern world. It offers previously out-of-reach opportunities for business leaders to anticipate market trends and make better decisions. For organisations to intelligently automate mundane processes and free talent to work on higher value work. And for companies to reach customers in highly personalised ways, with innovative new products and services.
It is also helping network defenders to get on the front foot against their adversaries in new ways—by seeing more and acting faster to neutralise threats and fill security gaps before the can be exploited.
But with opportunity comes risk.
AI has become a significant target in its own right. And the technology is also being used in a growing number of use cases to empower threat actors to launch more sophisticated attacks at scale.
To find out more, we commissioned Sapio Research to interview 2,250 global IT and/or cybersecurity decision makers, in organisations of various sizes and across multiple verticals. We found that although most are already using AI tools for cyber, and many more plan to do so, a majority are also concerned about the impact the technology will have on their attack surface. Many more worry about AI-powered cyber-attacks.
AI as a business enabler
When done right, cybersecurity isn’t a siloed cost centre, or a block on innovation and growth, as it’s often seen by business leaders. On the contrary, it can be a powerful business enabler. A mature cybersecurity posture could help an organisation to:
- Build customer trust and drive competitive differentiation
- Provide the foundations on which successful digital transformation initiatives can be built
- Enable flexible working, which in turn can empower staff to be more productive, while improving work-life balance for many
- Support expansion into new markets, if local laws and regulations require enhanced levels of cybersecurity
By the same rationale, AI-powered cybersecurity could supercharge these benefits. That’s certainly the impression our respondents gave. In fact, 81% are already using AI-driven tools as part of their cybersecurity strategy, with a further 16% exploring options. Additionally, over two-fifths (42%) say implementing automation or AI-driven tools is a top priority for improving cybersecurity in the next 12 months.
Over half (52%) say they’re happy to use AI for essential day-to-day security-related processes like automated asset discovery, risk prioritisation and anomaly detection. That’s just the tip of the iceberg. AI offers a wealth of capabilities that can help to improve:
Data protection: AI can be used to discover, classify and encrypt sensitive information, as well as monitor access to data stores and flag immediately if they have been breached.
Endpoint security: AI can be a key ingredient in endpoint detection and response (EDR)—analysing behavioural data and context to detect and block suspicious activity, malware and other threats.
Cloud security: AI algorithms can do the same for cloud environments, monitoring for unusual activity which deviates from a “learnt” baseline and alerting security teams.
Advanced threat hunting: By trawling through vast quantities of network data, AI tools can spot threat actors before they have time to cause lasting damage.
Identity and access management (IAM): AI can make IAM more intelligent, creating unique behavioural profiles for individuals based on various aspects such as keystrokes and mouse movements. It supports continuous authentication for enhanced security and zero trust operations.
The impact on the attack surface
However, as optimistic as IT and security leaders are about the potential for AI to transform cybersecurity, they are also concerned that the technology may open them up to new risks. Nearly all (94%) respondents told us they think AI will have a negative impact on attack surface management (ASM) in the next 3-5 years.
The size of the corporate cyber-attack surface has long been a concern for IT security leaders, who have seen digital investments outpace their ability to mitigate escalating risk. Now they are worried that a new fleet of AI tools may make this job even harder. Their concerns include:
- Sensitive data exposure
- A lack of transparency around data processing/storage
- Exploitation of proprietary data by untrusted AI models
- Compliance challenges
- More endpoints and APIs to monitor
- Shadow AI or Unsanctioned AI
This is not an exhaustive list. In fact, OWASP has a whole Top 10 devoted to Large Language Model (LLM) risks. The National Cyber Security Centre (NCSC) recently warned that such models could be especially vulnerable to attack if developers rush them to market without adding adequate security provisions. Among the most commonly cited threats are prompt injection, supply chain attacks and data poisoning. They could lead to sensitive data theft, and manipulation of models to produce unintended outputs—potentially sabotaging operations, or enabling wider system access.
AI looms large over the threat landscape
AI represents a multi-sided threat to global organisations. It’s not just about the risks posed to their attack surface from AI systems themselves, but also potential AI-powered attacks. Over half (53%) of respondents believe that the complexity and scale of these attacks will drastically increase in the future, requiring a new approach to cyber risk management.
It’s a threat flagged by the NCSC, which has warned that the coming two years could see:
- An increase in the “frequency and intensity” of cyber threats, including reconnaissance, vulnerability research and exploit development (VRED), social engineering, basic malware generation, and data exfiltration
- More threat actors using AI-as-a-service offerings
- More automation in various parts of the cyber-attack chain
- AI used to develop zero-day exploits
Assurances and next steps
Some 44% of respondents say they need to understand more about the technology before they consider using AI-powered security tools. That’s understandable given their concerns about AI expanding the attack surface. Nearly half (46%) currently manage their attack surface risks by regularly assessing and monitoring third party vendors for vulnerabilities; conducting thorough security assessments. They will surely want to expand these cheques to AI security vendors before adopting the technology.
Other steps to consider in order to manage risk across the AI attack surface could include:
Developing a comprehensive AI security strategy incorporating advanced threat modelling, threat hunting, AI-based risk assessments, AI security controls, and detailed incident response plans.
Ensuring the quality, integrity, and reliability of AI training data to ensure AI models are as accurate and effective as possible, and to address concerns of bias.
Implementing industry-standard AI security frameworks and best practises like those from NIST, MITRE, OWASP, Google and ISO.
Integrating AI security with existing security and cybersecurity processes, for seamless end-to-end protection across all environments.
Conducting regular employee training and awareness programmes to create an AI security-aware culture.
Continuously monitoring, assessing, and updating AI models to check for and remediate vulnerabilities, and improve accuracy, performance and reliability.
More generally, organisations should consider updating security strategy to account for the elevated threat from AI-powered attacks. AI security tools can help by:
- Analysing large volumes of data to detect anomalies in realtime
- Scanning for vulnerabilities and misconfigurations and other security gaps
- Identifying/mitigating cyber-attacks in real time
- Automating threat detection and response tools to free up stretched security teams
- Leveraging the latest threat intelligence to stay one step ahead
- Closing security skills gaps by assisting security analysts
The opportunity from AI security, as for AI in general, is too great to ignore. But only by assessing and then taking steps to continually manage associated risks can organisations truly hope to harness its full potential.