If your company deals with payment card data, you need to go through a long and cumbersome certification process outlined by the Payment Card Industry (PCI). The PCI Data Security Standard (PCI DSS) requires annual audits to ensure appropriate security controls and processes are being used for any applications or services that deal with sensitive customer data.

If your applications are hosted in the cloud, PCI compliance can be easier – as long as you choose the right service provider. Infrastructure as a Service (IaaS) providers like AWS have Level 1 PCI DSS certification. This means they take care of many aspects of physical data center security that you would otherwise be responsible for.

As of version 3 of the standard, if you use third-party Software as a Service (SaaS) offerings, they are included in the scope of your PCI audit!

This means if you are using SaaS offerings for log management, monitoring or security, they need to be PCI DSS certified, even if the service doesn’t directly deal with cardholder data.

Today we’re happy to announce that Trend Micro™ Deep Security as a Service™ is now a PCI DSS Level 1 Service Provider! This means you can further streamline your PCI DSS certification process and take more items off of your to-do list.

Customers like Matchmove have relied on Deep Security to address critical requirements for PCI such as Intrusion Prevention (11.4), Integrity Monitoring (11.5), Anti-malware (5.1) and many more. And now Trend Micro customers can further simplify and accelerate compliance with Deep Security as a Service, by removing the cost and effort of running the security management stack.

For more details on how to accelerate PCI DSS compliance in AWS, check out this blog from April 2016.

Or try Deep Security as a Service for yourself with our 30-day free trial.

If you have questions or comments, email us at aws@trendmicro.com.

Coming from a background in enterprise architecture and operations dominated by Microsoft products, I surprise many of my former coworkers with how much I use Linux OSs. I spend an absurd amount of time in a bash prompt on my local OS X box, prefer Deep Security Manager deployments on Amazon Linux or Red Hat, and of course support customers deploying our Deep Security Agent on a dizzying array of distros and versions. I’ve long held the opinion that there is a best tool for any given job, and while I still think Microsoft products are best suited for some tasks, in the public cloud a Linux OS is the best tool for many of my jobs. In fact, I suggest a non-MS solution so often that many of my coworkers are shocked to find that I’ve ever logged into a Windows machine at all!

These days even Microsoft is supporting the reality of a cross-platform world in the cloud, with great Linux support in Azure, open source contributions bringing OpenSSH to Win32, and the open sourcing of PowerShell along with ports to OS X and Linux. If Microsoft can deliver Desired State Configuration for Linux, I think I can shed my bash-full-ness and admit to my latent MS predilection…

I love PowerShell. I could go on for quite a while about having dynamic typing, object-oriented features, pipelining, the ability to use any .NET capability on the fly, great exception handling, sophisticated remoting, rich integrated contextual help… Ah, where was I?

Oh right.. The point! Why this matters to Deep Security customers.

I churn out quite a bit of scripting against the Deep Security APIs and am regularly asked how I knock these out so fast, and why so often they’re in PowerShell (especially for someone who spends the vast majority of days ‘Windows-less’). The answer is: PowerShell lets me cheat.

Before we get to the API itself, one bit of housekeeping – if you’re running a self-signed certificate on the Deep Security Manager or don’t have the full CA chain available on your local box, we’ll need to tell .NET to ignore your SSL indiscretion.

[System.Net.ServicePointManager]::ServerCertificateValidationCallback={$true}

* Note the pattern in which we’re referencing a .NET class inside [] square brackets, then assigning a value to a property of that class. This will be important later on.
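(One more bit of housekeeping for later: when you’re finished experimenting, you can restore normal certificate validation by setting the same property back to $null.)

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = $null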

With the pesky SSL detail out of the way, let’s dive in.

Find your WSDL URL (and enable the SOAP API!) in the Deep Security Manager console at ‘Administration -> System Settings -> Advanced’.

If you’re using Deep Security as a Service you won’t find this button, but you can take my word for it; the URL is 'https://app.deepsecurity.trendmicro.com:443/webservice/Manager?WSDL'.

[Screenshot: the SOAP API and WSDL settings in the Deep Security Manager console]

The first bit of cheating with PowerShell is found in the New-WebServiceProxy cmdlet. I’ll let you wade through the full docs, but in short this cmdlet lets you take a remote WSDL and create a proxy to a .NET Framework object representing the service. We’re going to generate the proxy, stash a reference, and [very important for later] specify a Namespace so that we can find all our new types easily from now on.

$manager = "app.deepsecurity.trendmicro.com"

$Global:DSMSoapService = New-WebServiceProxy -uri "https://$manager/webservice/Manager?WSDL" -Namespace "CheatingWithPowershell" -ErrorAction Stop
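If you want to confirm what the proxy actually generated, a quick bit of reflection against the namespace will list the new types (purely an exploration aid; the exact type names come from your manager’s WSDL):

# List the first few classes New-WebServiceProxy generated into our namespace
$DSMSoapService.GetType().Assembly.GetTypes() |
    Where-Object { $_.Namespace -eq "CheatingWithPowershell" } |
    Select-Object -First 10 Name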

Great – now what?

[Insert the sound of evil laughter here]

With a .NET Namespace that represents the Deep Security Manager’s Web Service (which you might want to rename from my example), every class and enum exposed by the web service is available with tab completion and typed returns. We’ll use the proxy first to create an instance of the ManagerService class, which has all of the methods for doing useful things with the interface. Then after configuring username and password variables (preferably prompting for password input with Read-Host -AsSecureString where appropriate), we’ll log into the Manager and get a token to use for subsequent calls.

$Global:DSM = New-Object CheatingWithPowershell.ManagerService

$Global:SID = $DSM.authenticate($user, $password)

If you’re using Deep Security as a Service, as in our example above, we’ll need to add a tenant parameter:

$Global:SID = $DSM.authenticateTenant($tenant, $user, $password)

… And now the good stuff…

How about an array of every computer object visible in the manager? (careful with this one – I really do mean ALL your hosts)

$allhosts = $DSM.hostRetrieveAll($SID)

Then echo your $allhosts to see what you’ve got.

$allhosts

Or, if you just pulled many tens of thousands of machines, you might prefer to take a peek at just the first HostTransport object in the array.

$allhosts[0]
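Since these are real .NET objects, the usual PowerShell pipeline tricks apply to them. A couple of quick examples (the name property is what I recall from the HostTransport type, so verify with tab completion):

# How many hosts did we get back?
$allhosts.Count

# Filter to EC2 hosts and show just their names
$allhosts | Where-Object { $_.name -like "*ec2*" } | Select-Object name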

We’ve pulled a bunch of data from the manager about our protected machines, but what if we want to do something with a specific machine? Most of the methods operate on the internal unique identifier for a given host object, and I for one don’t know the table index of every host in my manager. So let’s get an individual machine by hostname, and while we’re at it, let’s get a bit more detail.

$hostdetails = $DSM.hostDetailRetrieveByName("ec2-54-149-116-109.us-west-2.compute.amazonaws.com", [CheatingWithPowershell.EnumHostDetailLevel]::HIGH, $SID)

$hostdetails

I’m sure you’ve noticed the return of the [NAMESPACE.CLASS]::PROPERTY design, but this time we’re referencing an enum. That seems like a pain to remember, but did I mention that since PowerShell has access to the namespace, we’ve got tab completion for these types? Check it out inline…[Cheating<tab>.EnumH<tab>]:: … very slick!
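And if you’d rather list every value an enum offers instead of tabbing through them, plain .NET reflection works here too:

[System.Enum]::GetNames([CheatingWithPowershell.EnumHostDetailLevel])
# Returns the level names defined in the WSDL, e.g. LOW, MEDIUM and HIGH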

But how do I know what arguments are accepted by hostDetailRetrieveByName(STRING hostname, EnumHostDetailLevel detail, STRING sid)?

Or better asked, since I’m doing this all the time, how would you know? I guess you could read the documentation (and on the record, I definitely encourage you to do so!), but that hardly seems like cheating. Enter the PowerShell ISE. Let’s take our authentication example above, put it into a script, add some error handling, and run it. Notice that thanks to param handling, we get tab completion for our argument names and are prompted for mandatory values if they aren’t supplied.

<#
.SYNOPSIS
Cheating with Powershell sample SDK for Deep Security Manager SOAP API
.LINK
https://www.trendmicro.com/aws
#>
param (
   [Parameter(Mandatory=$true)][string]$manager,
   [Parameter(Mandatory=$true)][string]$user,
   [Parameter(Mandatory=$false)][string]$tenant
)
$passwordinput = Read-host "Password for Deep Security Manager" -AsSecureString
$password = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($passwordinput))
[System.Net.ServicePointManager]::ServerCertificateValidationCallback={$true}
$Global:DSMSoapService = New-WebServiceProxy -uri "https://$manager/webservice/Manager?WSDL" -Namespace "CheatingWithPowershell" -ErrorAction Stop
$Global:DSM = New-Object CheatingWithPowershell.ManagerService
$Global:SID = $null
try {
   if (!$tenant) {
      $Global:SID = $DSM.authenticate($user, $password)
   }
   else {
      $Global:SID = $DSM.authenticateTenant($tenant, $user, $password)
   }
}
catch {
   echo "An error occurred during authentication. Verify username and password and try again. `nError returned was: $($_.Exception.Message)"
   exit
}
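Save it as CheatingWithPowerShell.ps1, and a run looks something like this (the parameter values are placeholders for your own manager and account):

.\CheatingWithPowerShell.ps1 -manager app.deepsecurity.trendmicro.com -user admin

Add -tenant if you’re authenticating to Deep Security as a Service.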

Once executed within the ISE, our editor now knows wonderful things about the interface (by which I really mean everything about the interface). See what happens now when we start to type $DSM.  in the editor pane.

[Screenshot: ISE IntelliSense listing the methods on $DSM]

How about that hostDetailRetrieveByName method?

[Screenshot: ISE IntelliSense showing the hostDetailRetrieveByName signature]

Intellisense here gives us not only access to methods within a class, but also properties on objects.

Take for example our $hostdetails object, a HostDetailTransport[].

[Screenshot: ISE IntelliSense listing the properties of a HostDetailTransport object]

This way it’s incredibly easy to see what properties are available and their types. Seems likely some of those properties could be used directly within a method call to retrieve more objects, but that will have to wait for another time.

We can’t wait to see what you come up with. Look for CheatingWithPowerShell.ps1 and other samples soon at ‘https://github.com/deep-security/‘, and in the meantime check out our Python SDK and other integration projects.

Drop us a line via aws@trendmicro.com for functionality you’d like to see in the next sample script, or to let us know what automation you’ve come up with!

P.S.

Almost forgot – Deep Security Manager will restrict the total number of concurrent sessions allowed for a user, so I will sign off this post in the most appropriate way I can imagine:

$DSM.endSession($SID)
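(If you want that sign-off guaranteed in your own scripts, wrap your API calls in try/finally so the session is released even when something throws:)

try {
    # ... all of your API calls, e.g.:
    $allhosts = $DSM.hostRetrieveAll($SID)
}
finally {
    # Always release the session so you don't pile up concurrent sessions
    $DSM.endSession($SID)
}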

Written by: Bryan Webster

For most organizations, moving to the cloud has become a requirement, not an option. In the cloud, many of the same security controls are required, but how they are delivered needs to change in order to fit into the agile cloud environment and DevOps methodologies.

In the data center, security controls are delivered at the perimeter, either using hardware security appliances for Firewall or Intrusion Detection & Prevention (IDS/IPS), or through software (agents) on the servers themselves, such as Anti-Malware or File Integrity Monitoring.

In the cloud, security is a shared responsibility, which means that both the cloud provider and the cloud user share responsibility for providing protection. Cloud providers like AWS provide immense physical data center security and processes, which is great for cloud users as it takes a lot of work off their plate. But it also means that cloud users can’t bring hardware firewall or IPS devices to the cloud, as they don’t have access to the physical servers. That leaves two options for controls like IPS:
  1. Gateway or virtual appliance
  2. Host-based with security software (agent) running on each workload
To get a better idea of the different approaches, let’s dive into an example of IDS/IPS architecture in the cloud, as it is one of the security controls that most organizations have and it is often required for compliance.

Intrusion Detection and Prevention (IDS/IPS) Overview

Intrusion Detection Systems (IDS) were the first generation of network security controls. A reactive control, an IDS would alert you when a breach or attack occurred so you could investigate. Intrusion Prevention Systems (IPS) overtook IDS in popularity because of their ability to proactively block attacks, not just react to them. IDS/IPS systems for data centers were network-based and consisted of dedicated hardware appliances, with performance and throughput determined by the size of the network interface, CPU and memory.

Virtual Appliance (Gateway) Approach

With the security virtual appliance deployment model, there are two methods in which IDS/IPS can be used. Method 1 requires software to be deployed to each instance in order to send a copy of the network traffic to the appliance for inspection. Method 2 requires routing configuration changes so that the security virtual appliance sits inline and inspects the traffic directly. Figure 1 illustrates both deployment scenarios.

[Figure 1: Security virtual appliance deployment scenarios]

Host-based Approach

The other option is to deploy software (also known as an agent) onto each workload. This allows the security policies to be tailored to the specific software executing on that server, which removes the need to have generic or extraneous rules running and taking up resources. For instance, with Trend Micro Deep Security you can run a Recommendation Scan that quickly determines the specific rules needed to protect the instance, depending on the OS or patches applied. Additionally, for environments with auto-scaling requirements, the deployment of security software and policies can be automated with configuration management tools such as Chef, Puppet or OpsWorks; a sketch of such a bootstrap step follows below. This approach is illustrated in Figure 2. A host-based approach fits seamlessly into your existing deployment workflow.

[Figure 2: Host-based IPS from Deep Security]
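To make that concrete, here is a minimal PowerShell sketch of the kind of bootstrap step those tools run on a new Windows instance once the agent is installed. The install path and manager address are illustrative assumptions, so substitute your own:

# Hypothetical Deep Security Manager address (4120 is the default agent heartbeat port)
$dsmUrl = "dsm://dsm.example.com:4120/"

# dsa_control ships with the Deep Security Agent; -a activates the agent against the manager,
# after which the manager can push policy (or run a Recommendation Scan) on the new workload
& "C:\Program Files\Trend Micro\Deep Security Agent\dsa_control.cmd" -a $dsmUrl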

Comparing Approaches

One of the biggest architectural problems with network-based IDS/IPS is the use of encryption to protect network traffic. This security practice protects the contents of network traffic, but it makes it difficult or impossible to analyze that traffic and detect exploits. With host-based IDS/IPS, network encryption is less of an issue as the host decrypts traffic before it is analyzed. The following is a summary comparison of the different methods that can be used to deploy IDS/IPS protection for cloud instances (the method numbers match the deployment methods described above):

             Virtual Appliance (Method 1: Tap)   Virtual Appliance (Method 2: Inline)   Host-based
Scaling      Parallel to the workload            In proportion to the workload          With the workload
Protection   Detection only                      Generic protection                     Customized protection

Summary

Although both security virtual appliances and host-based software can be used to deliver IDS/IPS in the cloud, there is a strong argument that a host-based approach is easier and more cost-effective.
  • Host-based security can be deployed with automation tools like Chef or Puppet.
  • Host-based controls seamlessly auto-scale to thousands of instances without requiring additional virtual appliances or managers to be added.
  • A host-based approach reduces resource requirements as there is no duplication of traffic and no specialized instance is required for a virtual appliance.
  • A host-based approach eliminates the requirement to configure SSL/TLS decryption in order to analyze network traffic.
  • Host-based controls enable security controls and policies to be customized for each workload.
 

What is my security responsibility in the cloud?

This is one of the most common questions our team hears, from people who are just considering moving to the cloud all the way to cloud veterans. Luckily, the answer is very simple: the model for security in any cloud is the same. It is the shared responsibility model.

You share the responsibility for the security of your deployment with your cloud service provider.

[Image: the shared security responsibility model]

AWS delivers a highly secure cloud infrastructure that includes a comprehensive set of security controls. Simply put, AWS is responsible for the infrastructure and for managing the security of the cloud. You, as the customer, are responsible for securing everything that is in the cloud.

The benefit of being in the cloud is that you are only responsible for everything above the hypervisor level. You have full ownership and control of all your data, applications and operating system, but this also means you are responsible for securing them.

[Image: customer responsibilities above the hypervisor, AWS responsibilities below]

AWS provides helpful guidance at aws.amazon.com/security on the shared responsibility model, and Trend Micro can help you ensure that you meet your responsibilities. Here at Trend we can help you with securing everything you put into the cloud. Trend Micro Deep Security seamlessly integrates with AWS, so your workloads will be protected with minimal impact on performance.

Enterprises are running their workloads across complex, hybrid infrastructures, and need solutions that provide full-stack, 360-degree visibility so they can rapidly identify and resolve security threats.

Trend Micro Deep Security offers seamless integration with Sumo Logic’s data analytics service to enable rich analysis, visualizations and reporting of critical security and system data. This enables an actionable, single view across all elements in an environment.

 

Download slides

DevOps.com sat down with Trend Micro’s Mark Nunnikhoven, VP Cloud Research to discuss hybrid cloud, hybrid cloud security, DevSecOps and more.

 TRANSCRIPT

Alan Shimel: Hi, this is Alan Shimel, DevOps.com, here for another DevOps chat, and I’m joined today by Mark Nunnikhoven of Trend Micro. Mark, welcome to DevOps Chat.

Nunnikhoven: Thanks for having me, Alan.

Shimel:  I hope I didn’t mangle your name too bad, did I?

Nunnikhoven: No, it was perfect. It’s a tricky one. It looks really menacing, but ends up being relatively simple after you’ve heard it a few hundred times, so –

Shimel: Absolutely, very good. So, Mark, I asked you to join us today—I think we had done a webinar and talked a bit offline about, you know, some of the challenges around securing hybrid cloud and, you know, versus obviously people moving stuff to the cloud, people still have stuff in their data center, and what’s good for one may not necessarily be good for the other, but what’s good for both might be an entirely different thing. And I wanted to just spend a little bit of time with you talking about the challenges of securing the hybrid cloud.

Nunnikhoven: Yeah, it’s definitely an interesting topic that I don’t think gets enough attention, so I’m glad we can have this chat.

Shimel: Yep. Before we jump into it, though, Mark, I’d be remiss if I didn’t give you a chance to just let people know, I did mention you’re with Trend Micro, but what is your role there?

Nunnikhoven: Yeah, I’m the vice president in charge of cloud research. So I look at how organizations are tackling security in the cloud, in hybrid environments for, you know, traditional servers and workloads, containers, serverless, sort of anything that’s breaking new ground. It’s a lot of fun and there’s a lot of really interesting stuff happening.

Shimel: Great. So, Mark, you know, I think I started us off by saying there’s people who are making solutions for the cloud, there’s people making solutions for the data center. And what’s good for one is not necessarily good for the other, and then, of course, most of the world is living in some sort of hybrid environment.

Nunnikhoven: Yeah.

Shimel: What do we do?

Nunnikhoven: [Laughs] That’s a great question, ’cause, you know, I’ve seen—working with organizations around the world, I’ve seen sort of all of the possible scenarios, I think at least the major ones that people kind of come to thinking might work, and they sort of range from we’re going to force data center processes and controls into the cloud, which is a recipe for disaster. You know, they’re very different environments, to the opposite, of we’re gonna take everything that’s working in the cloud and we’re gonna force it on the data center, to, you know, we’re gonna run two completely different environments in completely different ways, and essentially double the workload. None of those is really satisfactory. Sort of the best route forward tends to be looking at what’s working in the cloud and trying to push a lot of those concepts into the data center. But you still need to make adjustments for the reality that the data center is very, you know, sort of slow and very high level of rigor and very formulaic, and, you know, very well-established. So you do need to make adjustments, but, you know, the goal is to get one way of doing things with allowances for the difference in both environments.

Shimel: Yep. And Mark, as you said, that may be the goal, but, you know, from day one, you obviously—you need a plan to get there. And can you maybe share with our audience a little bit, where do you see the logical steps, if you will, to getting there?

Nunnikhoven: Yeah, and I think that’s a really good way to phrase it, you know, because we can all—we all love the architecture Powerpoints and you go the events and you get real excited about what the eventual design and implementation will look like. But getting there can be really ugly and really messy. I think the biggest thing, and it’s unfortunately the thing that people tend to forget about, is looking at not just the security, but the operations processes and controls that you’re using today, and why you’re using them. And I think that’s the big piece that people miss, is, you know, why am I using something like an intrusion prevention system, or why am I using anti-malware or why do I have this change review board for any sort of operational change I want to make. And once you start to dig into the why you do these things, you realize that the implementation can change, as long as you’re still meeting that why.

So, you know, we put intrusion prevention in place so that we make sure that any time we open up traffic to this server, that we verified it is actually secure web traffic like we’re expecting. Once you know that’s the reason, then you can look and say in my data center, that makes sense to have a big perimeter appliance doing that job, whereas in the cloud, it makes more sense to have something running on the host, doing that same job. So you’re still getting the same answer to why. It’s just the how is different in each environment. So it’s—you’ve got to dig into those, and it starts to clarify what you’re doing.

Shimel: Got it. Mark, what are we seeing—I mean, what—obviously at Trend, you’re probably talking with a lot of customers. Are they coming from the cloud down with their solution? In other words, buying a cloud solution and building—or from the data center up, if you will?

Nunnikhoven: A lot of the time we’re seeing the data center up, simply because it’ll be one team within a larger organization. So we’re talking mainly, you know, traditional enterprise and large governments. And it’ll be one team that’s gone and said we’re gonna push everything and build it new in the cloud, and when they start to kick off that process, they start to get a lot of existing policy and existing culture enforced on them. So they’ll get, you know, specifically the security. They’ll get people saying, well, you need this, you need to make sure that, you know, you go across these boundaries in your design, you’re networking a certain way. So they start to try to take those same designs, which are great in the data center, but horrible in the cloud, and they try to push those into the cloud, and they usually end up failing. And then about six months afterwards realize that they need to take a new approach, because it is a completely different environment.

Shimel: Got it. And, you know what, Mark, I’m a firm believer that at least for the foreseeable future, five, 10 years, hybrid cloud is the—is the dominant, right, the dominant usage pattern that we’ll see from people. Is this yours as well?

Nunnikhoven: Yeah, 100 percent. And for me, it’s simple economics. As much as you want to be in the cloud, if that’s the case, if bought in even, you know, you’ll see it at the major events from the big cloud providers. They’ll pull up big enterprises who are—their CEO or the CIO is, you know, yelling from the top of their lungs that they’re gonna be all in on the cloud. Even in those scenarios, they’re still going to take 12 to 18 months to move out of their data centers.

But the reality is most enterprises have already made a multimillion dollar investment in technology that’s working. It might not be working as fast as they want, but you can’t walk away from that. The cost savings in the cloud, if they’re there, are not that significant that you’re gonna leave millions on the table. So the reality is, you know, for a standard data center life cycle, so like you said, a five to seven year term, you’re gonna be dealing with two environments. And it’s not just that it’s two environments, it’s two environments that are completely different. It’s very much apples and oranges. So you do need a different approach.

Shimel: Got it. And, Mark, I don’t want to put you on the spot, but what Trend Micro specifically, what kind of solutions are you guys, you know, working with customers on?

Nunnikhoven: Yeah, so one of our big push, and we’ll just hit it real brief, because we don’t want it to be a sales pitch at all, it’s just we have a platform called Deep Security, and we built it with the principle of trying to put security as close to the data and application as possible. And because of that, we’ve been able to adapt it for multiple cloud use, for data center use, for the hybrid scenario quite well, because we’ve taken a lot of the principles that work in the cloud and made it so that you can leverage them where it makes sense in the data center. So a lot of automation, a lot of programmability, and a lot of intelligence in the products so you don’t have to worry about the nitty-gritty of dealing with security day to day, unless it’s really urgent and requires you to intervene.

Shimel: Excellent. And, I mean, Trend is obviously not the only one doing this, Mark. When we talk about cloud too, I wanted to ask you this, public versus private clouds, where are you seeing this kind of—you know, are we—you know, to me, a private cloud is—goes hand in hand with hybrid, right? But there are people who are doing public cloud and data center, and that’s hybrid as well.

Nunnikhoven: Yeah, and I find for me, you know, private was a real big push back from a lot of the incumbent data center vendors, and, you know, we’ve kind of gotten out of that hype phase and into the reality of it. And I tend to see what a private cloud is, is somebody who’s taken the data center and adapted a lot of the cloud processes and the ideas behind a public cloud and implemented them within their data center, which can be a really good thing. So you can get on-demand resources, so you just query in API instead of sending out a ticket and getting a bunch of people to do work. You know, getting that accountability, getting that accessibility and that visibility that we’re used to in the public cloud, getting that in the data center.

So I think the ideal hybrid scenario is leveraging private cloud/public cloud. A lot of people aren’t there with their data center, because implementing change takes a long time. There’s a lot of cultural change, a lot of tooling change. So you’ll see them with sort of more the traditional data center approach. But if they want to be successful over the next few years, they need to start pushing more of those private cloud ideas internally, because that will help them have one set of processes and tools across both the public and the private cloud.

Shimel: Makes sense, makes sense. And Mark, what about people, though, who just say, you know what, I’m—I’m biting the bullet, I’m just gonna do cloud, you know, or I’m just gonna do data center. You know, first of all, nothing is forever, obviously, right? You could say that today, but things change. Is the Trend Micro solution, specifically, let’s say, is it one that will work with one, but that scales to the other? Or, in other words, how do we stay out of dead ends?

Nunnikhoven: Yeah, and specifically from us, this is something we’ve been dealing with with our customers over the last few years, because we’ve got customers who are 99 percent in the cloud and 1 percent in a data center somewhere, and the exact opposite. And it really—you know, every culture and enterprise is unique in what their blend and their mix of those environments is. So having the tool be agnostic to that is really important. So, you know, we’ll scale from protecting one system, to protecting 100,000 systems, with the same platform, very, very easily.

And I think there are a lot of projects out there, you know, not just commercial offerings from Trend Micro, but we’re seeing this a lot in the operational monitoring space. So, you know, things like Riemann or Logstash, which are two great open source projects to help you correlate data, they work equally well in the cloud as they do in the data center, and I think that’s really a big win for people, is that you need to put the assets in the environment that makes the most sense, but you should be able to use modern tools in both. And that’s a real big, important point for people to take away, is that you want to be—even if you’re in a traditional sort of slowly evolving data center environment, you want to be making sure that you’re leveraging some of the great advances we’ve made in security and in operational tools.

Shimel: Excellent, excellent. Mark, we’re coming near the end of our time, as this always seems to go quick. But let’s talk a little bit about the research that you’ve been doing, and just give our audience maybe a little insight. What are you kind of researching now? What do you find really interesting?

Nunnikhoven: Yeah, the biggest area I’m focusing on right now is the rise of what’s been deemed the serverless architecture. So obviously sort of a poor choice of names. There are servers somewhere. But it’s the idea of leveraging a function-as-a-service platform like Microsoft Azure Functions or AWS Lambda, where you’ve just got code that you input that runs in a sandbox on someone else’s system and you don’t have to worry about any of that. So scaling and operational tasks are kind of out the door, but these are heavily leveraging SaaS services and you’re sort of picking the best aspects of multiple providers, and stitching it together into one big application. And there’s a lot of really interesting advantages for the business there, because you’re focusing purely on delivering value to your users.

But from a security perspective, now you’ve got multiple parties and systems that you have to trust, that have to work together, that have to share security controls, and then that is normally only one application in a suite of others. So you’ve got serverless applications, you’ve got applications that are running on containers, and then traditional applications. So I’ve been looking at not just the serverless, but how do you address all of these in a consistent manner, so that you can apply security to the data where it’s being processed.

Shimel: Excellent. Mark, we are butting up against time here, but one last question, and it’s one I frequently ask our guests here on DevOps Chat. If you had to recommend one book to our audience to read, not necessarily the last book you read or anything like that, but one book they should read, what’s your recommendation?

Nunnikhoven: Yeah, that’s always a tough question. I love reading, so I’m always reading three or four books at a time. And, you know, it’s a safe assumption that this audience has already read “The Phoenix Project.” If they haven’t, they should. My latest favorite in this space is called “The Art of Monitoring.” It’s by James Turnbull, and it’s a look at how to implement metrics and measurements around running applications, regardless of where they are. So it talks about logging, it talks about real-time alerting, and it’s a really great approach, in that, you know, it’s always looking at results. So not just collecting data for data’s sake, but it sort of walks you through how all of this stuff should roll up to give your teams some actionable insight on how your application is doing. It’s a really great book.

Shimel: Got it, interesting. Well, Mark, thank you so much for being this episode’s guest on DevOps Chat. We’d like to maybe have you back on sometime and talk to us about more of the research you’re doing. Continued success to you and Trend Micro, and thanks again. This is Alan Shimel, for DevOps Chat, and thanks for joining us. Until next time.

Deep Security has some great out-of-the-box integrations with AWS, like our cloud connector that automatically manages your current inventory of EC2 instances within Deep Security. But did you know that Trend Micro’s Deep Security has an API that lets you integrate even further with AWS?

This API lets you automate some of the more repetitive tasks like user creation, permissions assignment, and account configuration.

Nobody likes to manually enter the same data over and over again. This post highlights a simple script that will automatically:

  • Create a new AWS IAM user that Deep Security can use
  • Create and assign the IAM role that grants the permissions Deep Security requires to see your EC2 instances
  • Configure your AWS account and all regions within Deep Security

This takes a several-step process and turns it into a quick, repeatable command line tool. Watch the following video to see it in action:

 

Ok, so isn’t that neat and easy? In order to execute it, you’ll need the following:

  • AWS CLI tools installed and configured with at least the rights to create and modify users in IAM (if you don’t have the CLI installed, check out https://aws.amazon.com/cli/)
  • Ability to execute a BASH shell script
  • The script itself (see below)
The script is available in this GitHub gist. Now you can run it with a command similar to the following:

$ create-iam-cloudaccount <managerUsername> <managerUrl:port> Amazon <newAwsUserToCreate> (if needed, <tenant ID>)

So if I were to use it on a newly created Deep Security Manager, it might look like:

$ create-iam-cloudaccount administrator 10.X.X.X:443 Amazon DeepSecUser

It is really that easy. This will attach to most regions currently in AWS (Seoul is supported on only some versions of the manager). If you don’t want to sync every region, you can remove some of them from line 33 in the script.
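If you’d rather see the moving parts than run the script, the IAM half boils down to a few calls. Here’s a rough PowerShell equivalent using the AWS Tools for PowerShell; the user name, policy name and the exact permission list are illustrative assumptions, so check the script and the Deep Security documentation for the authoritative set:

Import-Module AWSPowerShell

$userName = "DeepSecUser"   # hypothetical; use whatever name you passed to the script

# Create the IAM user Deep Security will authenticate as, plus an access key
New-IAMUser -UserName $userName
$key = New-IAMAccessKey -UserName $userName

# Grant read-only EC2 visibility (an assumed minimal set of actions)
$policy = @'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:DescribeInstances", "ec2:DescribeImages", "ec2:DescribeTags"],
    "Resource": "*"
  }]
}
'@
Write-IAMUserPolicy -UserName $userName -PolicyName "DeepSecurityCloudAccount" -PolicyDocument $policy

# These are the credentials you hand to Deep Security's cloud account setup
$key.AccessKeyId
$key.SecretAccessKey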

Now that you are synced, you can start to take advantage of all the great automated tasks that Deep Security can do, as highlighted in previous and upcoming blog posts. As always, if you have any questions please reach out at aws@trendmicro.com, and keep an eye on www.trendmicro.com/aws for new blogs and security tricks in Amazon Web Services.

 Post written by AWS Security Ninja: Zack Milem

 

As a principal architect at Trend Micro, focused on AWS, I get all the ‘challenging’ customer projects. Recently a neat use case has popped up with multiple customers and I found it interesting enough to share (hopefully you readers will agree).

The original question came as a result of queries about Deep Security’s SIEM output via syslog and how best to do an integration with Sumo Logic. Sumo has a ton of great guidance for getting a local collector installed and syslog piped through, but I was really hoping for something: a little less heavy at install time; a little more encrypted leaving the Deep Security Manager (DSM); and a LOT more centralized.

I’d skimmed an article recently about Sumo’s hosted HTTP collector which made me wonder – could I leverage Deep Security’s SNS event forwarding along with Sumo’s hosted collector configuration to get Events from Deep Security -> SNS -> Sumo?

With Deep Security SNS events sending well-formatted JSON, could I get natural language query in Sumo Logic search without defining fields or parsing text? This would be a pretty short post if the answers were no… so let’s see how it’s done.

Step 1: Create an AWS IAM account

This account will be allowed to submit to the SNS topic (but have no other rights or role assigned in AWS).

NOTE: Grab the access and secret keys during creation, as you’ll need to provide them to the Deep Security Manager (DSM) later. You’ll also need the ARN of the user to give to the SNS topic. (I’m going to guess everyone who got past the first paragraph without falling into an acronym coma has seen the IAM console, so I’ll omit the usual screenshots.)

Step 2: Create the Sumo Logic Hosted HTTP Collector. Go to Manage -> Collection, then “Add Collector”.

[Screenshot: Add Collector in the Sumo Logic console]

Choose a Hosted Collector and pick some descriptive labels.

[Screenshot: Hosted Collector configuration]

NOTE: Make note of the Category for later.

[Screenshot: the collector’s Category field]

Pick some useful labels again, and make note of the Source Category for the Collector (or DataSource if you choose to override the collector value). We’ll need that in a little while.

 Tip

When configuring the DataSource, most defaults are fine except for one: Enable Multiline Processing in default configuration will split each key:value from the SNS subscription into its own message. We’ll want to keep those together for parsing later, so have the DataSource use a boundary expression to detect message beginning and end, using this string (without the quotes) for the expression: (\{)(\})

[Screenshot: DataSource configuration with the Multiline Processing boundary expression]

Then grab the URL provided by the Sumo console for this collector, which we’ll plug into the SNS subscription shortly.

[Screenshot: the HTTP Source URL shown in the Sumo console]

Step 3: Create the SNS topic.

Give it a name and grab the Topic ARN.

[Screenshot: the new SNS topic and its Topic ARN]
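If you’d rather script this step than click through the console, the AWS Tools for PowerShell can create the topic and hand back the ARN (the topic name is just an example):

# New-SNSTopic returns the Topic ARN we'll need for the policy, the subscription and the DSM configuration
$topicArn = New-SNSTopic -Name "DeepSecurityEvents"
$topicArn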

Personally, I like to put some sanity around who can submit to the topic. Hit “Other Topic Actions”, then “Edit topic policy”, and enter the ARN we captured for the new user above as the only AWS user allowed to publish messages to the topic.

[Screenshot: the SNS topic policy editor]
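For reference, the resulting topic policy looks something like this; the account ID, user name and topic name below are placeholders for the ARNs you captured earlier:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowDeepSecurityPublishOnly",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/DeepSecuritySNS" },
    "Action": "SNS:Publish",
    "Resource": "arn:aws:sns:us-east-1:123456789012:DeepSecurityEvents"
  }]
}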

Step 4: Create the subscription for the HTTP collector.

Select HTTPS for the protocol, and enter the endpoint shown by the Sumo console.

[Screenshot: creating the HTTPS subscription]
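The same AWS Tools module can create the subscription too, if you prefer the API to the console; $sumoCollectorUrl here stands in for the URL Sumo generated for your hosted collector in Step 2:

# Subscribe the Sumo hosted HTTP collector endpoint to the topic
Connect-SNSNotification -TopicArn $topicArn -Protocol "https" -Endpoint $sumoCollectorUrl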

Step 5: Go to the search page in the Sumo console and check for events from our new _sourceCategory:

[Screenshot: the SNS subscription confirmation message in Sumo search results]

And click the URL in the “SubscribeURL” field to confirm the subscription.

[Screenshot: the confirmed SNS subscription]

Step 6: Configure the Deep Security Manager to send events to the topic

Now that we’ve got Sumo configured to accept messages from our SNS topic, the last step will be to configure the Deep Security Manager to send events to the topic.

Log in to your Deep Security console and head to Administration -> System Settings -> Event Forwarding. Check the box for “Publish Events to Amazon Simple Notification Service”, enter the Access and Secret keys for the user we created with permission to submit to the topic, then paste in the topic ARN and save.

[Screenshot: Deep Security Manager Event Forwarding settings]

You’ll find quickly that we have a whole ton of data from SNS in each message that we really don’t need associated with our Deep Security events. So let’s put together a base query that will get us the Deep Security event fields directly accessible from our search box:

_sourceCategory=Deep_Security_Events | parse "*" as jsonobject | json field=jsonobject "Message" as DSM_Log | json auto field=DSM_Log

[Screenshot: parsed Deep Security event fields in Sumo Logic search]

Much better. Thanks to Sumo Logic’s auto json parsing, we’ll now have access to directly filter any field included in a Deep Security event.

Let your event management begin!

Ping us at aws@trendmicro.com if you have any feedback or questions on this blog… And let us know what kind of dashboards your ops teams are using this for!