What Can Generative AI do for Hybrid Cloud Security?
As enterprise security operations centers absorb cloud security functions, they face new challenges and require new skills. Generative AI can help by laying a secure cloud foundation and empowering SOC teams to respond effectively when threats arise.
Global organisations will spend about $600 billion on public cloud services in 2023—if anyone needs further proof of cloud’s prime position in enterprise IT. The sweep of cloud adoption means a whole lot of modern enterprise IT infrastructure is defined in code to maximise scalability and other key benefits. Since infrastructure is the traditional domain of security operations centres (SOCs), ensuring a secure cloud is fast becoming a SOC responsibility. Trend Micro predicts most SOCs will absorb cloud security by 2026.
This comes with some challenges. Few corporate cloud deployments are consolidated and straightforward: most are a mix of hybrid and multi-cloud. And securing cloud resources is very different from securing physical infrastructure. While SOC teams will surely onboard cloud security specialists, there are limits to how far headcounts can scale. All of which suggests it’s inevitable that new tools will be needed, and one of the most powerful may prove to be generative AI.
Of course, today many generative AI use cases for hybrid cloud security are still hypothetical since the technology is in its nascent stages, albeit evolving rapidly. But as generative AI matures, using it for cloud security is bound to become standard practice.
Setting a secure cloud foundation
Just as coding companions like Amazon CodeWhisperer and GitHub Copilot are helping developers write software, generative AI can be used to produce template-based infrastructure-as-code as the foundation for consistent, best-practice, secure cloud environments.
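To make the idea concrete, here is a minimal sketch of template-based infrastructure-as-code generation. The Terraform template, resource names, and `render_bucket` helper are all hypothetical illustrations, not part of any real product; the point is that secure defaults (here, AWS S3 public access blocks) are baked into the template rather than left to chance.

```python
from string import Template

# Hypothetical template an AI companion (or a platform team) might use as
# the basis for consistent, secure-by-default S3 bucket definitions.
# Public access is blocked in the template itself.
S3_TEMPLATE = Template("""\
resource "aws_s3_bucket" "$name" {
  bucket = "$bucket"
}

resource "aws_s3_bucket_public_access_block" "$name" {
  bucket                  = aws_s3_bucket.$name.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
""")

def render_bucket(name: str, bucket: str) -> str:
    """Render infrastructure-as-code for one bucket from the template."""
    return S3_TEMPLATE.substitute(name=name, bucket=bucket)

print(render_bucket("logs", "example-corp-logs"))
```

Because every bucket definition flows through the same template, the secure settings are applied uniformly instead of being re-typed (and occasionally forgotten) for each deployment.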
Someday AI companions may be reliable enough to do this on their own, but for now ‘hybrid AI’ workarounds are needed to close the trust gap. After a cloud AI companion generates infrastructure code, a non-AI tool can scan it for threats, for conformance with corporate policies and industry best practices, and for alignment with cloud provider frameworks such as the AWS Well-Architected Framework.
Running this kind of quality control check before any AI-generated infrastructure is deployed in the build pipeline ensures the code is valid and free of errors or misconfigurations that could cause security issues. Scanning for misconfigurations is especially important, as they account for up to 70% of all cloud security challenges.
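The non-AI quality-control step might look something like the sketch below: a deterministic scanner that checks proposed resource settings against a policy rule set before anything enters the build pipeline. The rule set and the `scan_config` helper are illustrative assumptions, not a real CSPM product; a production scanner would cover far more checks.

```python
# Illustrative policy rules: setting name -> values that violate policy.
# A real rule set would be much larger and provider-specific.
RISKY_SETTINGS = {
    "acl": {"public-read", "public-read-write"},
    "block_public_policy": {"false"},
    "versioning": {"disabled"},
}

def scan_config(config: dict) -> list[str]:
    """Return findings for any settings that violate the policy rules."""
    findings = []
    for key, bad_values in RISKY_SETTINGS.items():
        value = str(config.get(key, "")).lower()
        if value in bad_values:
            findings.append(f"{key}={value} violates corporate policy")
    return findings

# A proposed (AI-generated) bucket definition, reduced to key settings.
proposed = {"bucket": "example-corp-data", "acl": "public-read",
            "block_public_policy": "true"}
print(scan_config(proposed))  # flags the public-read ACL
```

A gate like this in the CI pipeline fails the build on any finding, so misconfigured AI-generated code never reaches a live environment.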
There aren’t yet integrated solutions that can do all these steps—though they are definitely on the way—but all the necessary functions are available to SOC teams with today’s generative AI capabilities and cybersecurity solutions like cloud security posture management (CSPM).
Empowering SOC teams to respond to cloud threats
Separate from producing more inherently secure cloud infrastructure out of the gate, cloud AI companions will also help SOC teams deliver results even without deep or extensive cloud expertise.
For example, a SOC analyst could ask a generative AI companion plain-language questions about the enterprise’s hybrid cloud infrastructure and its potential vulnerabilities. The AI would then translate those into cloud-specific queries about which resources can access which data, possible instances of misconfiguration, and more—and report back the findings in a tidy, nearly instantaneous summary.
Sample query
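A sketch of what such a query translation might look like is below. The mapping from the analyst's question to AWS CLI commands is hard-coded here purely for illustration; in the scenario described above, the AI companion would generate the cloud-specific queries itself.

```python
# Illustrative only: the plain-language question an analyst might ask,
# and the cloud-specific queries an AI companion could translate it into.
ANALYST_QUESTION = "Which storage buckets can be read by anyone on the internet?"

TRANSLATED_QUERIES = [
    # AWS CLI equivalents the companion could run on the analyst's behalf
    "aws s3api list-buckets --query 'Buckets[].Name'",
    "aws s3api get-public-access-block --bucket <bucket-name>",
    "aws s3api get-bucket-policy-status --bucket <bucket-name>",
]

def summarise(question: str, queries: list[str]) -> str:
    """Format the question and its translated queries as a short report."""
    lines = [f"Question: {question}", "Queries to run:"]
    lines += [f"  - {q}" for q in queries]
    return "\n".join(lines)

print(summarise(ANALYST_QUESTION, TRANSLATED_QUERIES))
```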
Gathering this kind of knowledge would help analysts sniff out potential risks and also improve their understanding of the cloud infrastructure and the developer intentions behind it.
When concerns arise about a specific threat, generative AI could be used to very quickly establish where exactly the enterprise environment is most at risk. Again, SOC analysts could ask natural-language questions and the AI could translate those into code-based queries to return actionable findings.
Cloud AI security in action
As a practical example, imagine the SOC team sees content is being scraped from a public-facing AWS S3 bucket. It could ask the generative AI companion to evaluate the activity. Does it fit with how that bucket was intended to be used? What level of risk does the activity represent?
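The companion's assessment might boil down to logic like the sketch below, which assumes the bucket's intended use is recorded in resource tags and that anonymous request counts come from access logs. The thresholds, tag values, and `assess_scraping` helper are all invented for illustration.

```python
def assess_scraping(intended_use: str, anonymous_requests: int,
                    baseline: int) -> str:
    """Compare observed anonymous reads against the bucket's stated intent.

    intended_use: hypothetical tag value recorded on the bucket.
    anonymous_requests: anonymous GETs observed in the review window.
    baseline: typical anonymous GETs for the same window.
    """
    if intended_use != "public-website":
        return "HIGH: bucket not intended for public reads"
    if anonymous_requests > 10 * baseline:
        return "MEDIUM: public bucket, but traffic far exceeds baseline"
    return "LOW: activity consistent with intended use"

# A bucket tagged for internal use seeing heavy anonymous reads
print(assess_scraping("internal-archive", anonymous_requests=5000, baseline=50))
```

The value for the analyst is that intent and observation are compared automatically: the same scraping traffic is low-risk on a bucket meant to serve a public website and high-risk on one that was never meant to be public.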
If it turns out there is an active threat and the underlying code needs fixing, the AI companion could generate corrective code and upload it to the dev team’s repository as a pull request for review and implementation. Here’s how that might look:
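One possible shape for that last step is sketched below: the corrective Terraform snippet and the pull-request payload are assembled locally. The repository, branch names, and fix are hypothetical; the payload fields follow the GitHub REST API's "create a pull request" endpoint, and the actual HTTP call is omitted from the sketch.

```python
import json

# Hypothetical corrective code: block public access on the scraped bucket.
CORRECTIVE_CODE = """\
resource "aws_s3_bucket_public_access_block" "content" {
  bucket                  = aws_s3_bucket.content.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
"""

def build_pull_request(repo: str, fix_branch: str, base: str = "main") -> dict:
    """Assemble the request for GitHub's POST /repos/{repo}/pulls endpoint."""
    return {
        "url": f"https://api.github.com/repos/{repo}/pulls",
        "title": "Restrict public access on content bucket",
        "head": fix_branch,
        "base": base,
        "body": ("Automated fix proposed after scraping activity was "
                 "detected on the public-facing bucket:\n\n```hcl\n"
                 + CORRECTIVE_CODE + "```"),
    }

payload = build_pull_request("example-corp/infrastructure",
                             "soc/fix-s3-public-access")
print(json.dumps(payload, indent=2))
```

Crucially, the fix arrives as an ordinary pull request, so the dev team reviews and merges it through their normal workflow rather than receiving an out-of-band security ticket.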
Combining natural language querying with this kind of playbook execution would make less experienced analysts more productive, accelerate the SOC team’s ability to ramp up with new tools, and close the loop between incident response and infrastructure development.
Communication counts when it comes to cloud security
When Gartner rolled out its CNAPP concept in 2021 (CNAPP is short for ‘cloud-native application protection platform’), its aim was to better integrate development and runtime security—with good reason. As a recent Trend Micro blog post put it, better alignment between development teams and security teams produces more secure apps with fewer development headaches. “An important element of establishing a strong DevOps culture,” the blog says, “is using security tools that help security teams see all the bad stuff as quickly as possible, enabling devs to build resilient apps faster.”
Having cloud AI companions generate code that SOC teams can offer to infrastructure developers would take the CNAPP concept to a whole new level, integrating SOC actions seamlessly into the tools developers already use.
The cloud AI potential is real
While today’s generative AI companions can respond to natural-language queries and vet and propose code, using those queries to help SOC analysts better understand hybrid cloud infrastructure and developers’ intentions is still a ways off. So is having cloud AI propose code to address identified security issues.
But given the speed at which AI seems to move, “a ways off” may not in fact be so remote, and the impacts of realising these use cases could be profound. Cloud AI companions stand to speed up infrastructure creation and threat assessments, enable non-cloud specialists to solve security problems, and reduce friction between dev and security teams—all of which contributes to stronger cybersecurity.
Next steps
For more Trend Micro insights into cloud AI security, check out these resources: