With the rise of CI/CD (continuous integration/continuous deployment), as well as other software engineering techniques like no-patch environments and blue/green deployments, teams are under immense pressure to quickly deliver working software with no downtime for customers.  Whether it's pushing application updates multiple times a day in a streamlined fashion or redeploying new Azure VMs with the code updates, an application control tool needs to be as flexible as the deployment it is protecting.

 

Deep Security's Application Control module lets you implement software changes dynamically, so your development team can create and deploy their software without a security tool becoming a roadblock.

The first way Deep Security achieves this is with its implementation of Application Control.  When you first enable it, the agent takes an inventory of the virtual machine and automatically adds all installed software to its approved list.  This is perfect for no-patch and blue/green deployments: when your new virtual machines are built with the new code, they are automatically approved by Deep Security.  Gone are the days of adding every new build to an approved list before the code is pushed.

But what if you are deploying new code and reusing existing Azure VMs?  The Maintenance Mode feature, with its API tie-in, is the solution for this environment.  Maintenance mode allows a virtual machine to be patched or updated while automatically adding any changes to its approved application list.  Because Deep Security has an open API architecture, you can add this maintenance step to your code deployment tool, such as Jenkins.

Using the following API call, you can turn on maintenance mode for a given number of minutes just prior to doing the code deploy:

dsm.set_trusted_update_mode(hostID,minutes)
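For example, in a Jenkins pipeline you might wrap the deploy step in a maintenance window along these lines. This is a minimal sketch in the same Python style as the call above; the run_deployment() helper and the idea of passing 0 minutes to end the window early are illustrative assumptions, not documented behavior.

DEPLOY_WINDOW_MINUTES = 30  # assumed length of a typical code deploy


def deploy_with_maintenance_mode(dsm, host_id, run_deployment):
    """Wrap a code deploy in an Application Control maintenance window."""
    # Turn on maintenance mode just before the deploy so that any
    # software changes are added to the host's approved inventory.
    dsm.set_trusted_update_mode(host_id, DEPLOY_WINDOW_MINUTES)
    try:
        run_deployment()  # e.g., the Jenkins job's deploy step
    finally:
        # Hypothetical: end the window early once the deploy finishes,
        # rather than waiting for the timer to expire.
        dsm.set_trusted_update_mode(host_id, 0)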

In the GUI, you can see that maintenance mode has been turned on via the API call.

Also, within Deep Security under the “Actions” section, you get a comprehensive list of applications running in the environment that have yet to be approved, so you can quickly see what has changed.  This gives you the ability to approve newly deployed programs or block files that are deemed suspicious or malicious.

With Deep Security, you have the power of Application Control along with our other security controls, all of which can be accessed programmatically to help your security be as dynamic and agile as your development teams.

For more information, please contact us at azure@trendmicro.com

 

The Deep Security team released support today for identity provider integration using SAML 2.0. When you integrate Deep Security with your identity provider, you no longer need to manage users, passwords, and MFA tokens in Deep Security. By offloading user management to your identity provider, you can also use features like password strength / change enforcement, one-time password (OTP), and two-factor / multi-factor authentication (2FA/MFA) with your existing policies. We have tested Deep Security SAML integration with Microsoft Windows Active Directory Federation Services (ADFS), Okta, PingOne, and Shibboleth.

This article will help you integrate your ADFS server with Deep Security. I’ve tested the instructions with ADFS 4.0 (Windows Server 2016), but you can set up the same configuration with older versions of ADFS back to ADFS 2.0.

You can follow the instructions in this article today if you have an account on Deep Security as a Service, Trend Micro’s hosted Deep Security solution. SAML support is coming soon to the AWS Marketplace and software releases starting with Deep Security 10.1.

Create a SAML Identity Provider and roles in Deep Security

The Deep Security Help Center has a great SAML single sign-on configuration article that will walk you through the steps to set up Deep Security to trust your ADFS server. You’ll need the federation metadata file from your ADFS server, which you can get from https://<your ADFS server name>/FederationMetadata/2007-06/FederationMetadata.xml.
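If you prefer to script that download, a minimal sketch (assuming Python with the requests package, and your real ADFS hostname substituted for the placeholder) might look like this:

import requests

ADFS_HOST = "adfs.example.com"  # placeholder: your ADFS server name
METADATA_URL = (
    f"https://{ADFS_HOST}/FederationMetadata/2007-06/FederationMetadata.xml"
)

# Download the federation metadata document that Deep Security asks for
# when you create the SAML identity provider.
response = requests.get(METADATA_URL)
response.raise_for_status()

with open("FederationMetadata.xml", "wb") as metadata_file:
    metadata_file.write(response.content)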

Adding Deep Security as a relying party in ADFS

This is a quick-start blog post, so I won’t get into a lot of detail; the Deep Security Help Center has the full reference documentation if you want it.

In this example we’ll use the user’s email address as the Deep Security user name (RoleSessionName in Deep Security SAML-speak).

We’re also going to use a handy trick here that lets us use Active Directory groups to manage the role claims that we issue. This trick uses two custom rules, one to extract the Active Directory group information and the second to transform the group information into claims.

To make this work, you’ll need to have a naming convention where your Active Directory group names can be transformed into Deep Security roles. In this example, we’ll use Active Directory group names in the format TMDS-<tenant ID>-<role name>, so TMDS-0123456789-READONLY would be transformed to the URN for the Deep Security role urn:tmds:identity:us-east-ds-1:0123456789:role/ADFS-READONLY.
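To make that transformation concrete, here is a small Python sketch of the same mapping the ADFS claim rule performs later in this post. The regular expression is adapted from that rule (assuming numeric tenant IDs); the function itself is purely illustrative:

import re

# Mirrors the ADFS "Roles" claim rule: TMDS-<tenant ID>-<role name>
# becomes an identity-provider URN plus a role URN.
GROUP_PATTERN = re.compile(r"^TMDS-(\d+)-(.+)$", re.IGNORECASE)


def group_to_role_claim(group_name):
    """Map an AD group name to the Deep Security SAML Role claim value."""
    match = GROUP_PATTERN.match(group_name)
    if match is None:
        return None  # not a Deep Security group; issue no claim
    tenant_id, role_name = match.groups()
    return (
        f"urn:tmds:identity:us-east-ds-1:{tenant_id}:saml-provider/ADFS,"
        f"urn:tmds:identity:us-east-ds-1:{tenant_id}:role/ADFS-{role_name}"
    )


print(group_to_role_claim("TMDS-0123456789-READONLY"))
# urn:tmds:identity:us-east-ds-1:0123456789:saml-provider/ADFS,
# urn:tmds:identity:us-east-ds-1:0123456789:role/ADFS-READONLY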

To create these AD groups, you’ll need the identity provider and role URNs from the Create a SAML Identity Provider and roles in Deep Security procedure described earlier.

We’ll also create a rule that includes a PreferredLanguage claim that takes its value from the preferredLanguage LDAP attribute. This is optional and won’t do any harm if you don’t have this attribute set, but it can be handy if you have a diverse user base.

Finally, we’ll create a rule that includes a SessionDuration claim limiting the user’s sessions to a maximum of eight hours (28800 seconds). This claim attribute is also optional, and the Deep Security administrator can further limit session duration if they want.

Microsoft provides an ADFS PowerShell cmdlet that lets you configure everything we need in a single command. Run this as an administrator on your ADFS server to set up Deep Security as a Service as a relying party for your ADFS. If you’re trying to integrate your own Deep Security installation, replace the Name and MetadataURL parameter values, and check that the URNs in the Roles rule match what you have.

Warning: Always read PowerShell scripts carefully and make sure you understand what they do before running them. Also, be wary of copy-and-paste attacks! Always paste into a text editor and review what you’ve copied before running the script.

Add-AdfsRelyingPartyTrust -Name 'Trend Micro Deep Security as a Service' -MetadataURL 'https://app.deepsecurity.trendmicro.com/saml' -IssuanceAuthorizationRules '@RuleTemplate="AllowAllAuthzRule" => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value="true");' -IssuanceTransformRules '@RuleTemplate = "LdapClaims"
@RuleName = "RoleSessionName"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("https://deepsecurity.trendmicro.com/SAML/Attributes/RoleSessionName"), query = ";mail;{0}", param = c.Value);

@RuleName = "Get AD Groups"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => add(store = "Active Directory", types = ("http://temp/variable"), query = ";tokenGroups;{0}", param = c.Value);

@RuleName = "Roles"
c:[Type == "http://temp/variable", Value =~ "(?i)^TMDS-([^d]+)"]
 => issue(Type = "https://deepsecurity.trendmicro.com/SAML/Attributes/Role", Value = RegExReplace(c.Value, "TMDS-([^d]+)-", "urn:tmds:identity:us-east-ds-1:$1:saml-provider/ADFS,urn:tmds:identity:us-east-ds-1:$1:role/ADFS-"));

@RuleName = "Session Duration"
 => issue(Type = "https://deepsecurity.trendmicro.com/SAML/Attributes/SessionDuration", Value = "28800");

@RuleTemplate = "LdapClaims"
@RuleName = "Preferred Language"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("https://deepsecurity.trendmicro.com/SAML/Attributes/PreferredLanguage"), query = ";preferredLanguage;{0}", param = c.Value);'

That’s it!

Well, close to it, anyway. You’ll still need to set up groups in Active Directory that match the pattern you defined (in the examples above, TMDS-<tenant ID>-<role name>) and assign users to those groups, but then you’re done!


As more and more organizations are starting to realize, hybrid cloud is already happening and will continue to evolve as we strive to find better, faster and more efficient ways to store and share data. Not unlike the great cities of our world, we often see old and new side by side – the ancient architectures of yesterday nestled next to the futuristic glass skyscrapers of tomorrow.

When it comes to securing your on-premises and virtual environments, it may seem like you’ve got it all figured out, but what happens as we move along the server evolution and bring environments like the cloud and containers into the mix? In an effort to be agile and cost-efficient, many organizations are using these new environments but may not have the protection to match.

Bridging the hybrid cloud

We are very excited to announce the release of Deep Security 10 powered by XGen™ security. Deep Security 10 continues to embrace the challenge of hybrid cloud, delivering enhancements designed to give you even more visibility across all of your environments—physical, virtual, cloud, and now containers. You’re working to leverage these environments to support your business – and that business needs to be protected.

The first step is visibility. With the new smart folders feature, applications that span different infrastructures can be treated as one using a smart, attribute-based grouping system. Now you can manage applications across vastly different infrastructure platforms, be they physical, virtual, or cloud, as if they were one.

Next, let’s talk about layered security.  Deep Security 10 is powered by XGen™ security, a blend of cross-generational threat defense techniques. Deep Security leverages server-centric threat defense, from tried-and-true technologies like intrusion prevention, anti-malware, and application control right up to leading-edge techniques like sandbox analysis, machine learning, and behavioral analysis, to guard against the most sophisticated threats.

New in Deep Security 10 are behavioral monitoring capabilities, which can identify changes in installed software and system files. These enhanced protection capabilities for Windows environments, including new anti-ransomware features, protection against unauthorized encryption, and new real-time memory scanning, combine to deliver more advanced layered protection across Windows environments and your entire hybrid cloud.

This new release adds many integration and management enhancements, including faster connection and time to protection for Azure workloads, along with support for the latest Azure deployment model, Azure Resource Manager v2 (ARM). It also expands beyond server workloads to protect Docker containers, leveraging proven techniques like anti-malware, IPS, and application control to protect dynamic container deployments.

Security that fits your environment and your team

At its core, Deep Security 10 supports flexible deployment, hybrid policy management, auto-scaling, and blue/green deployments. We understand how to secure everything from long-standing physical servers right up to ephemeral servers living for mere minutes or even seconds in the cloud. This includes consumption-based licensing options for truly dynamic workloads, available in the Azure Marketplace and through our Deep Security as a Service product. No matter how you manage security, Deep Security is designed to support the traditional IT security model or the latest DevSecOps approach, or both!

Stay tuned for the general availability of Deep Security 10 this March, and be sure to check back here often for new updates and releases about your favorite hybrid cloud security tool for Azure!

 

Breaking down gateway and host-based security approaches in the cloud.

For most organizations, moving to the cloud has become a requirement, not an option. In the cloud, many of the same security controls are required, but how they are delivered needs to change in order to fit the agile cloud environment and DevOps methodologies. In the data center, security controls are delivered either at the perimeter, using hardware security appliances for firewall or intrusion detection and prevention (IDS/IPS), or through software (agents) on the servers themselves, such as anti-malware or file integrity monitoring. In the cloud, security is a shared responsibility, which means that both the cloud provider and the cloud user share responsibility for providing protection. Cloud providers like Azure deliver robust physical data center security and processes, which is great for cloud users as it takes a lot of work off their plate. But it also means that cloud users can’t bring hardware firewall or IPS devices to the cloud, because they don’t have access to the physical servers. That leaves two options for controls like IPS:

  1. Gateway or virtual appliance
  2. Host-based with security software (agent) running on each workload

To get a better idea of the different approaches, let’s dive into an example of IDS/IPS architecture in the cloud, as it is one of the security controls that most organizations have, and one that is often required for compliance.

 

Intrusion Detection and Prevention (IDS/IPS) Overview

Intrusion Detection Systems (IDS) were the first generation of network security controls. A reactive control, an IDS would alert you when a breach or attack occurred so you could investigate. Intrusion Prevention Systems (IPS) overtook IDS in popularity because of their ability to proactively block attacks, not just react to them. IDS/IPS systems for data centers were network-based and consisted of dedicated hardware appliances, with performance and throughput determined by the size of the network interface, CPU, and memory.

Virtual Appliance (Gateway) Approach

With the security virtual appliance deployment model, there are two methods in which IDS/IPS can be used. Method 1 requires software to be deployed to each instance in order to send a copy of the network traffic to the appliance for inspection (a tap configuration). Method 2 requires routing configuration changes so that traffic flows through the security virtual appliance for inspection (an inline configuration). Figure 1 illustrates both deployment scenarios.

  Figure 1: Security Virtual appliance

Host-based Security Approach

The other option is to deploy software (also known as an agent) onto each workload. This allows the security policies to be tailored to the specific software executing on that server, removing the need to run generic or extraneous rules that take up resources. For instance, with Trend Micro Deep Security you can run a Recommendation Scan that quickly determines the specific rules needed to protect the instance, depending on the OS or patches applied. Additionally, for environments with auto-scaling requirements, the deployment of security software and policies can be automated with configuration management tools such as Chef, Puppet, or OpsWorks (a sketch of such a hook appears after Figure 2). This approach is illustrated in Figure 2. A host-based approach fits seamlessly with your existing deployment workflow.

   

Figure 2: Host-based IPS from Deep Security
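As a sketch of what such an automation hook could look like, the Python below runs a Recommendation Scan as a post-provisioning step. Both method names on the dsm client are hypothetical stand-ins for whichever Deep Security API you script against, not documented calls:

def tailor_protection(dsm, host_id):
    """Post-provisioning hook: tailor IPS rules to this workload."""
    # Hypothetical call: ask Deep Security to inspect the OS, installed
    # software, and patch level, and recommend the rules this host needs.
    dsm.run_recommendation_scan(host_id)

    # Hypothetical call: assign only the recommended rules instead of a
    # generic rule set, avoiding extraneous rules that consume resources.
    dsm.apply_recommended_rules(host_id)

A configuration management tool such as Chef or Puppet would invoke a hook like this after the agent is activated on a newly provisioned instance.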

 Comparing Approaches

One of the biggest architectural problems with network-based IDS/IPS is the use of encryption to protect network traffic. This security practice protects the contents of network traffic, but it makes it difficult or impossible to analyze that traffic and detect exploits. With host-based IDS/IPS, network encryption is less of an issue, as the host decrypts traffic before it is analyzed. The following is a summary comparison of the different methods that can be used to deploy IDS/IPS protection for cloud instances.

            | Virtual Appliance (Method 1: Tap) | Virtual Appliance (Method 2: Inline) | Host-based Security
Scaling     | Parallel to the workload          | In proportion to the workload        | With the workload
Protection  | Detect only                       | Generic protection                   | Customized protection
Summary

Although both security virtual appliances and host-based software can be used to deliver IDS/IPS in the cloud, there is a strong argument that a host-based approach is easier and more cost-effective.

  • Host-based security can be deployed with automation tools like Chef or Puppet.
  • Host-based controls seamlessly auto-scale to thousands of instances without requiring additional virtual appliances or managers to be added.
  • A host-based approach reduces resource requirements as there is no duplication of traffic and no specialized instance is required for a virtual appliance.
  • Host-based security eliminates the need to configure SSL/TLS decryption in order to analyze network traffic.

Host-based security enables controls and policies to be customized for each workload.

In the past, IPS and IDS have often been framed as one versus the other. While each offers its own unique attributes, the key to success may lie in a blend of the two. So how can you decipher what each intrusion defense tool offers? In this article, we break down the main differences between IPS and IDS and how you can leverage their capabilities to protect your workloads.
IPS vs IDS
An Intrusion Prevention System (IPS) is a security control tool; it inspects traffic for vulnerabilities and exploits within the network stream and can remove packets before they reach your applications. It can also act as a protocol-enforcement tool by ensuring each packet you are accepting is correct for that application. For example, you can allow any HTTPS packet that comes in on port 443 but block any non-HTTPS packets, like SSH, on the same port. This allows you to do additional enforcement of the traffic for multiple protocols on the same network port.

An Intrusion Detection System (IDS) is a visibility tool; it can alert an administrator to patterns within traffic while allowing the traffic to pass through to the application. This allows you to create IDS rules that gather additional information about the traffic being accepted into your environment. For example, you might have an IDS rule to inspect the SSL ciphers of servers communicating with you, to ensure they are following compliance mandates and security policies. It is a powerful tool for giving in-depth information without impacting your applications.
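To make the prevent-versus-detect distinction concrete, here is a small, product-agnostic Python sketch of how a rule engine treats the two modes; it is a conceptual illustration, not Deep Security code:

from dataclasses import dataclass
from typing import List


@dataclass
class Rule:
    name: str
    pattern: str  # substring the rule looks for in the traffic
    mode: str     # "prevent" (IPS) or "detect" (IDS)


def inspect(packet: str, rules: List[Rule]) -> bool:
    """Return True if the packet may pass through to the application."""
    for rule in rules:
        if rule.pattern in packet:
            if rule.mode == "prevent":
                print(f"[IPS] {rule.name}: packet dropped")
                return False  # removed before reaching the application
            print(f"[IDS] {rule.name}: alert raised, traffic still flows")
    return True  # allowed by default


rules = [
    # Protocol enforcement: only HTTPS should arrive on port 443.
    Rule("Non-HTTPS traffic on 443", "SSH-2.0", "prevent"),
    # Visibility: flag weak SSL ciphers without blocking traffic.
    Rule("Weak SSL cipher offered", "TLS_RSA_EXPORT", "detect"),
]

inspect("SSH-2.0-OpenSSH_7.4 handshake arriving on port 443", rules)
inspect("ClientHello offering TLS_RSA_EXPORT_WITH_RC4_40_MD5", rules)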
Layering Your Security
Ideally, you want to use both technologies within your environment. The IPS functions protect your workloads from vulnerabilities and exploits, giving you an additional layer of security, while an IDS is helpful for monitoring and investigation without downtime to users or applications. Administrators can then build additional IPS policies based on the information surfaced by the IDS to keep the environment protected.

Having a tool like Deep Security, which can be configured as an IPS, an IDS, or both, is extremely useful for implementing security controls and removing vulnerabilities, as well as giving you real-time information about traffic patterns and policies. Each rule within Deep Security can be configured in either Prevent (IPS) or Detect (IDS) mode, giving you granular control of your security posture while still allowing your applications to run without impact. Combine this with our recommendation scan technology, and your network security becomes context-aware, matching the correct IPS and IDS rules to the operating system and applications running on your workload. Deep Security applies only the rules you need within your environment, keeping your performance costs low.

Intrusion detection and prevention are valuable security tools, especially for cloud workloads, where you can easily spin up a workload that’s visible to the world. By using Deep Security, you can add another layer of network security controls and visibility to your security arsenal.

Information Technology Management, or simply IT management, is a broad term with many disciplines tied to it, e.g., configuration management, service management, and security management, to name a few.

The one thing that is constant in life is CHANGE! The Information Technology world is no different: digital transformation started decades ago, but the speed at which these changes are happening shows no sign of slowing down; it is rapidly accelerating. And this is just the start of a world where everything is connected to everything else.


There was a reason why, in Skyfall, James Bond stopped by a garage, told M that they would have to switch cars, and opened the door to reveal the Aston Martin DB5 (1963). They drove together to Scotland, M complaining the car was uncomfortable and Bond jokingly threatening to use the ejector seat. Confused about why I’m bringing this up? The MI6 cars all have trackers!

This digital transformation is revolutionizing our businesses, because when you migrate from the analog to the digital world, you get your hands on information that you didn’t have before. The growing volume of information collected as part of this transformation, what we call “big data,” opens up a set of opportunities for businesses that didn’t exist before.  Look at digital thermostats these days (think Nest and the like). Businesses can now learn your temperature-adjustment patterns, and with this data at hand they can create new business opportunities and models; for example, this information is used by energy companies to plan and adjust their power plant capacity, peak-rate billing, and so forth.

These innovations and advancements in the technology that creates our connected world present new challenges for management solutions. A true “single pane of glass” is a must in your IT strategy, but is there a single-pane-of-glass solution that can equip today’s information technology professionals with the tools they need to succeed?

Since this blog post is written for our Azure site, you guessed it: I’m talking about Microsoft Operations Management Suite (aka OMS), Microsoft’s cloud-based IT management solution. There are four solution areas offered under OMS: Insight & Analytics, Automation & Control, Security & Compliance, and Protection & Recovery.


We will focus our discussion in this blog on the Insight & Analytics offering. Log Analytics helps you collect, correlate, search, and act on log and performance data generated by operating systems, network devices, and applications. Simply put, you can collect and analyze machine data from virtually any source.

If this is indeed true, then let’s see how we can leverage the OMS Log Analytics service and bring Trend Micro Deep Security event data into OMS to help identify and resolve security threats. The good news is that Trend Micro Deep Security offers seamless integration with the OMS data analytics service, thanks to the OMS agent.

Architecture Components of OMS – Log Analytics

Before we look into how to integrate Trend Micro Deep Security with the OMS Log Analytics service, I’d like to share with you the architecture components of OMS.

OMS Repository is the key component of OMS; it is hosted in the Azure cloud. Data is collected into the repository from connected sources by configuring data sources and adding solutions to your subscription.

OMS Agent is a customized version of the Microsoft Monitoring Agent (MMA). You need to install and connect agents on all of the computers that you want to onboard to OMS in order for them to send data to OMS.

Connected Sources are the locations where data can be retrieved for the OMS repository.

Data sources run on Connected Sources and define the specific data that’s collected.

Integration of Deep Security with OMS – Log Analytics

Now that we have a basic understanding of the components of OMS Log Analytics, let’s see how the integration of Deep Security with OMS Log Analytics works.

The integration of Deep Security with OMS is very simple: Deep Security can write its event data in CEF/syslog format to one of the OMS connected sources, where it is collected by the syslog data source on the OMS Linux agent.  The one thing you need to know about the OMS agent and syslog support is that either rsyslog or syslog-ng is required to collect syslog messages.

There are three main steps to this integration: installing the OMS agent, configuring a syslog data source, and configuring Deep Security to forward events.


I’m going to skip the installation and configuration of a syslog data source here, assuming you already know that part. When it comes to event forwarding, however, there are two integration options available for configuring Deep Security to forward security events to the OMS connected source.

Relay via Deep Security Manager: This option sends the syslog messages from the Deep Security Manager after events are collected on heartbeats.

Direct Forward: This option sends the security events in real time, directly from the agents.

This decision depends on your deployment model, network design and topology, bandwidth, and policy design. The simplest and most commonly used choice is Relay via Manager.


Once the event data is collected and available in OMS, you can leverage log searches, where you construct queries to analyze the collected data.  The raw syslog/CEF data that Deep Security sends to OMS can be extracted using OMS’s Custom Fields feature. This feature utilizes FlashExtract, a program-synthesis technology developed by Microsoft Research, to extract the fields you teach it to recognize.
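To give a sense of what the raw data looks like before extraction, here is a short Python sketch that splits a CEF record into its header fields and key=value extensions. The sample event line is illustrative, not captured from a live system:

import re

# Illustrative Deep Security-style CEF record (not from a real system).
SAMPLE = (
    "CEF:0|Trend Micro|Deep Security Manager|10.0|600|User Signed In|3|"
    "src=10.0.0.1 suser=admin msg=User signed in from 10.0.0.1"
)


def parse_cef(record):
    """Split a CEF record into its seven header fields plus extensions."""
    parts = record.split("|", 7)  # the header has 7 pipe-delimited fields
    header = ["cef_version", "vendor", "product", "version",
              "signature_id", "name", "severity"]
    event = dict(zip(header, parts[:7]))
    # Extensions are key=value pairs; values may contain spaces, so
    # split on the boundary just before the next "key=" token.
    extension = parts[7] if len(parts) > 7 else ""
    for match in re.finditer(r"(\w+)=(.*?)(?=\s+\w+=|$)", extension):
        event[match.group(1)] = match.group(2)
    return event


print(parse_cef(SAMPLE)["signature_id"])  # -> 600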


It’s a little work up front to extract the fields of interest, but once it’s done, all of your custom fields become searchable fields that you can use to aggregate and group data.


The last thing I want to touch on is the View Designer feature. With View Designer, you can create visualizations and dashboards from the event data available in OMS as views; for example, you can use your custom fields to build a view of what matters most to you.


So there you have it: you can now use Deep Security to protect workloads running across complex, hybrid infrastructures and use OMS to gain the control and visibility to identify and resolve security threats rapidly. There’s no need to worry about juggling multiple tools and interfaces, and most importantly, no need to spend valuable time on software setup and complex integration, thanks to OMS, the all-in-one cloud IT management solution.

Now that public cloud is a generally accepted, and expected, path for enterprises dealing with large amounts of applications and data, it’s up to cloud solution providers and partners to make it that much easier to provide a quick and painless transition to the cloud. Thanks to the introduction of the Azure Quickstart Templates at Microsoft Ignite 2016, customers are given the tools they need to truly embrace the possibilities of the cloud.

The cloud security manager’s balancing act

Many enterprises face common challenges in cloud security: a lack of both time and expertise. The speed of the cloud means that needs are constantly changing, and maintaining the time and expertise on your team can be an overwhelming challenge. Time is your most valuable resource, and without the right expertise, the time you spend trying to improve your IT infrastructure can be fruitless.

Partnering to help customers embrace cloud

We often run into situations where our customers want to launch multiple products from the Azure Marketplace, but the time that it takes to do this can be daunting, not to mention the level of technical expertise required to ensure that all settings and variables are appropriately entered. That’s where Azure Quickstart Templates come in.

Azure Quickstart Templates are ARM templates created by trusted Microsoft partners and designed to help save you time when looking to get started with integrated, multi-artifact solutions on Azure. Quickstart Templates are helpers and learning tools that you can customize to meet your unique business needs.

The goal of the Trend Micro Quickstart is to reduce the amount of time and technical expertise required to launch numerous products within an Azure account, integrating pre-configured products like Deep Security, Chef and Splunk with a single click. See the Trend Micro Quickstart here.


How will Azure Quickstart Templates help you?

  • Save time: In under an hour, you can deploy fully integrated solution stacks to suit your enterprise needs, reducing the time and manpower needed to test and launch your cloud project.
  • Provide Expertise: The heavy lifting is done for you; the Quickstart Templates automate much of the manual work that would otherwise be involved in stitching artifacts together.
  • Speed Time to Value: Whether you’re looking for a proof of concept or are ready to start your cloud project or migration, deploying the Trend Micro Quickstart will give you an enterprise-ready environment that you can start using immediately.

For a more technical overview and steps to get started, you can read my colleague Saif’s blog and check out his video here.

Ready to get started on your own? Visit the Azure Quickstart Template website to learn more.

Technology advancements such as high-speed Internet connectivity and the ability to create abstraction layers in computing environments have allowed us to achieve things that were unthinkable ten years ago. I have personally experienced these advancements in my professional life. At one point, I was happy to get my hands on virtualization and the ability to run an integrated solution from my laptop and an external portable storage device for my work. Now, the eruption of cloud computing combined with the power of orchestration tools is mind-blowing. We have entered an era where we are looking to automate everything, aka Infrastructure as Code.

Today, I’m happy to talk about our Azure quickstart template, so let’s get started and get to the technical details.

What makes up this quickstart template?

This integrated stack consists of Trend Micro Deep Security, Splunk Enterprise and Chef automation platform, all running on Azure.


How is this quickstart template created?

This integrated stack is built using a JSON template based on Microsoft Azure Resource Manager (ARM) templates. With ARM templates, we can deploy topologies quickly and consistently, with multiple services along with their dependencies. You build it once and consume it many times. It’s pretty powerful, and since we, working with our Azure partner, have already done this for you, you can simply consume it.

What’s in it for me?

It saves you a lot of time that you can spend on things that matter to you, perhaps watching a hockey game (yes, I’m Canadian and it’s our sport). Seriously, think about it: if you are familiar with these solutions, then you know that each element has various components, such as a web-based management application and a database, and requires specific communication paths, a database schema, and so on. Setting up this type of environment with fully functional, integrated components would take you at least a couple of hours, and that estimate assumes your three-year-old daughter is not screaming in the background and you have your full attention and focused time to build it.  I don’t know about you, but I love it when someone else can do part of my job and make it easy for me.

I’m with you; tell me more.

Okay, if you are still reading, then I’d like to think I have your attention, so let’s look at what is involved here in a bit more technical detail.

To break it down, we have:

  • A storage account in the resource group.
  • A Virtual Network (vNet) with four subnets
  • Virtual Machines to host solution components
  • Network security groups to control what communication paths are allowed
  • Azure SQL DB to host Deep Security persistent data
  • Three test virtual machines: two VMs (Linux, Windows) with bootstrap scripts to install Trend Micro agents (through Azure VM extensions) and one VM (Linux) with a bootstrap script to install the Chef agent

There is a lot happening here, as you can see, and the only thing you need to do to consume it is provide some values for the template parameters, such as:

  • Where you want to deploy this stack.
  • Web application administrator and virtual machine administrator account credentials for the various stack components
  • Communication ports for Deep Security
  • Virtual machine size and number of test virtual machines

It takes roughly 30 to 45 minutes to have this environment fully functional. At the end, we return the URLs for each solution component (Trend Micro, Splunk, and Chef) so that you can simply log in to these applications and do whatever you want to do, e.g., protect your Azure-based workloads from various vulnerabilities. Remember, cloud security is a shared responsibility: although cloud providers deliver an extremely secure environment, you need to protect what you put IN the cloud, namely your workloads.


I’m sold; where can I get this quickstart template?

That was easy! The ARM template is available on the Azure website (here). You can simply click the “Deploy to Azure” button or browse the GitHub repository. You can also use PowerShell, the Azure CLI, etc. to start the deployment; the GitHub repository provides the necessary documentation.
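As one more option, here is a hedged Python sketch of a programmatic deployment using the Azure SDK of this era. The module paths and method signatures vary across azure-mgmt-resource releases, so treat them as assumptions to verify against your installed version, and substitute your own credentials and the template’s raw URL:

# Sketch: deploy the quickstart ARM template with the Azure Python SDK.
# Verify module paths and signatures against your azure-mgmt-resource
# version; all angle-bracketed values are placeholders.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

credentials = ServicePrincipalCredentials(
    client_id="<app id>", secret="<app secret>", tenant="<tenant id>")
client = ResourceManagementClient(credentials, "<subscription id>")

# Create (or reuse) a resource group to hold the whole stack.
client.resource_groups.create_or_update(
    "quickstart-rg", {"location": "eastus"})

# Kick off the template deployment; the parameters mirror the ones the
# "Deploy to Azure" button prompts you for.
poller = client.deployments.create_or_update(
    "quickstart-rg",
    "trend-splunk-chef-quickstart",
    {
        "mode": "Incremental",
        "template_link": {"uri": "<raw URL of azuredeploy.json>"},
        "parameters": {},  # e.g., {"adminUsername": {"value": "..."}}
    },
)
poller.wait()  # block until the deployment completes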

The Chef recipe for Deep Security Agent is available here.

Questions? Reach out to us by email at azure@trendmicro.com and we’ll be happy to help!