What is my security responsibility in the cloud?

This is one of the most common questions our team hears, whether from someone considering a move to the cloud or from a cloud veteran. Luckily, the answer is simple. The model for security in any cloud is the same: the shared responsibility model.

You share the responsibility for the security of your deployment with your cloud service provider.


AWS delivers a highly secure cloud infrastructure along with a comprehensive set of security controls. Simply put, AWS is responsible for the infrastructure and for managing the security of the cloud. You, as the customer, are responsible for securing everything that is in the cloud.

The benefit of being in the cloud is that you are only responsible for everything above the hypervisor level. You have full ownership and control of your data, applications, and operating systems, but this also means you are responsible for securing them.


AWS provides helpful guidance on the shared responsibility model at aws.amazon.com/security. Trend Micro can help you meet your side of that responsibility by securing everything you put into the cloud. Trend Micro Deep Security integrates seamlessly with AWS, so your workloads are protected with minimal impact on performance.

Enterprises are running their workloads across complex, hybrid infrastructures and need solutions that provide full-stack, 360-degree visibility so they can rapidly identify and resolve security threats.

Trend Micro Deep Security offers seamless integration with Sumo Logic’s data analytics service to enable rich analysis, visualization, and reporting of critical security and system data. The result is a single, actionable view across all elements in an environment.

 

Download slides

DevOps.com sat down with Trend Micro’s Mark Nunnikhoven, VP of Cloud Research, to discuss hybrid cloud, hybrid cloud security, DevSecOps and more.

 TRANSCRIPT

Alan Shimel: Hi, this is Alan Shimel, DevOps.com, here for another DevOps chat, and I’m joined today by Mark Nunnikhoven of Trend Micro. Mark, welcome to DevOps Chat.

Nunnikhoven: Thanks for having me, Alan.

Shimel:  I hope I didn’t mangle your name too bad, did I?

Nunnikhoven: No, it was perfect. It’s a tricky one. It looks really menacing, but ends up being relatively simple after you’ve heard it a few hundred times, so –

Shimel: Absolutely, very good. So, Mark, I asked you to join us today—I think we had done a webinar and talked a bit offline about, you know, some of the challenges around securing hybrid cloud and, you know, versus obviously people moving stuff to the cloud, people still have stuff in their data center, and what’s good for one may not necessarily be good for the other, but what’s good for both might be an entirely different thing. And I wanted to just spend a little bit of time with you talking about the challenges of securing the hybrid cloud.

Nunnikhoven: Yeah, it’s definitely an interesting topic that I don’t think gets enough attention, so I’m glad we can have this chat.

Shimel: Yep. Before we jump into it, though, Mark, I’d be remiss if I didn’t give you a chance to just let people know, I did mention you’re with Trend Micro, but what is your role there?

Nunnikhoven: Yeah, I’m the vice president in charge of cloud research. So I look at how organizations are tackling security in the cloud, in hybrid environments for, you know, traditional servers and workloads, containers, serverless, sort of anything that’s breaking new ground. It’s a lot of fun and there’s a lot of really interesting stuff happening.

Shimel: Great. So, Mark, you know, I think I started us off by saying there’s people who are making solutions for the cloud, there’s people making solutions for the data center, the on-prem. And what’s good for one is not necessarily good for the other, and then, of course, most of the world is living in some sort of hybrid environment.

Nunnikhoven: Yeah.

Shimel: What do we do?

Nunnikhoven: [Laughs] That’s a great question, ’cause, you know, I’ve seen—working with organizations around the world, I’ve seen sort of all of the possible scenarios, I think at least the major ones that people kind of come to thinking might work, and they sort of range from we’re going to force data center processes and controls into the cloud, which is a recipe for disaster. You know, they’re very different environments, to the opposite, of we’re gonna take everything that’s working in the cloud and we’re gonna force it on the data center, to, you know, we’re gonna run two completely different environments in completely different ways, and essentially double the workload. None of those is really satisfactory. Sort of the best route forward tends to be looking at what’s working in the cloud and trying to push a lot of those concepts into the data center. But you still need to make adjustments for the reality that the data center is very, you know, sort of slow and very high level of rigor and very formulaic, and, you know, very well-established. So you do need to make adjustments, but, you know, the goal is to get one way of doing things with allowances for the difference in both environments.

Shimel: Yep. And Mark, as you said, that may be the goal, but, you know, from day one, you obviously—you need a plan to get there. And can you maybe share with our audience a little bit, where do you see the logical steps, if you will, to getting there?

Nunnikhoven: Yeah, and I think that’s a really good way to phrase it, you know, because we can all—we all love the architecture PowerPoints and you go to the events and you get real excited about what the eventual design and implementation will look like. But getting there can be really ugly and really messy. I think the biggest thing, and it’s unfortunately the thing that people tend to forget about, is looking at not just the security, but the operations processes and controls that you’re using today, and why you’re using them. And I think that’s the big piece that people miss, is, you know, why am I using something like an intrusion prevention system, or why am I using anti-malware or why do I have this change review board for any sort of operational change I want to make. And once you start to dig into the why you do these things, you realize that the implementation can change, as long as you’re still meeting that why.

So, you know, we put intrusion prevention in place so that we make sure that any time we open up traffic to this server, that we verified it is actually secure web traffic like we’re expecting. Once you know that’s the reason, then you can look and say in my data center, that makes sense to have a big perimeter appliance doing that job, whereas in the cloud, it makes more sense to have something running on the host, doing that same job. So you’re still getting the same answer to why. It’s just the how is different in each environment. So it’s—you’ve got to dig into those, and it starts to clarify what you’re doing.

Shimel: Got it. Mark, what are we seeing—I mean, what—obviously at Trend, you’re probably talking with a lot of customers. Are they coming from the cloud down with their solution? In other words, buying a cloud solution and building—or from the data center up, if you will?

Nunnikhoven: A lot of the time we’re seeing the data center up, simply because it’ll be one team within a larger organization. So we’re talking mainly, you know, traditional enterprise and large governments. And it’ll be one team that’s gone and said we’re gonna push everything and build it new in the cloud, and when they start to kick off that process, they start to get a lot of existing policy and existing culture enforced on them. So they’ll get, you know, specifically the security. They’ll get people saying, well, you need this, you need to make sure that, you know, you go across these boundaries in your design, you’re networking a certain way. So they start to try to take those same designs, which are great in the data center, but horrible in the cloud, and they try to push those into the cloud, and they usually end up failing. And then about six months afterwards realize that they need to take a new approach, because it is a completely different environment.

Shimel: Got it. And, you know what, Mark, I’m a firm believer that at least for the foreseeable future, five, 10 years, hybrid cloud is the—is the dominant, right, the dominant usage pattern that we’ll see from people. Is this yours as well?

Nunnikhoven: Yeah, 100 percent. And for me, it’s simple economics. As much as you want to be in the cloud, even if you’re fully bought in, you know, you’ll see it at the major events from the big cloud providers. They’ll pull up big enterprises whose CEO or CIO is, you know, yelling at the top of their lungs that they’re gonna be all in on the cloud. Even in those scenarios, they’re still going to take 12 to 18 months to move out of their data centers.

But the reality is most enterprises have already made a multimillion dollar investment in technology that’s working. It might not be working as fast as they want, but you can’t walk away from that. The cost savings in the cloud, if they’re there, are not that significant that you’re gonna leave millions on the table. So the reality is, you know, for a standard data center life cycle, so like you said, a five to seven year term, you’re gonna be dealing with two environments. And it’s not just that it’s two environments, it’s two environments that are completely different. It’s very much apples and oranges. So you do need a different approach.

Shimel: Got it. And, Mark, I don’t want to put you on the spot, but what Trend Micro specifically, what kind of solutions are you guys, you know, working with customers on?

Nunnikhoven: Yeah, so one of our big pushes, and we’ll just hit it real brief, because we don’t want it to be a sales pitch at all, it’s just we have a platform called Deep Security, and we built it with the principle of trying to put security as close to the data and application as possible. And because of that, we’ve been able to adapt it for multiple cloud use, for data center use, for the hybrid scenario quite well, because we’ve taken a lot of the principles that work in the cloud and made it so that you can leverage them where it makes sense in the data center. So a lot of automation, a lot of programmability, and a lot of intelligence in the products so you don’t have to worry about the nitty-gritty of dealing with security day to day, unless it’s really urgent and requires you to intervene.

Shimel: Excellent. And, I mean, Trend is obviously not the only one doing this, Mark. When we talk about cloud too, I wanted to ask you this, public versus private clouds, where are you seeing this kind of—you know, are we—you know, to me, a private cloud is—goes hand in hand with hybrid, right? But there are people who are doing public cloud and data center, and that’s hybrid as well.

Nunnikhoven: Yeah, and I find for me, you know, private was a real big pushback from a lot of the incumbent data center vendors, and, you know, we’ve kind of gotten out of that hype phase and into the reality of it. And I tend to see what a private cloud is, is somebody who’s taken the data center and adapted a lot of the cloud processes and the ideas behind a public cloud and implemented them within their data center, which can be a really good thing. So you can get on-demand resources, so you just query an API instead of sending out a ticket and getting a bunch of people to do work. You know, getting that accountability, getting that accessibility and that visibility that we’re used to in the public cloud, getting that in the data center.

So I think the ideal hybrid scenario is leveraging private cloud/public cloud. A lot of people aren’t there with their data center, because implementing change takes a long time. There’s a lot of cultural change, a lot of tooling change. So you’ll see them with sort of more the traditional data center approach. But if they want to be successful over the next few years, they need to start pushing more of those private cloud ideas internally, because that will help them have one set of processes and tools across both the public and the private cloud.

Shimel: Makes sense, makes sense. And Mark, what about people, though, who just say, you know what, I’m—I’m biting the bullet, I’m just gonna do cloud, you know, or I’m just gonna do data center. You know, first of all, nothing is forever, obviously, right? You could say that today, but things change. Is the Trend Micro solution, specifically, let’s say, is it one that will work with one, but that scales to the other? Or, in other words, how do we stay out of dead ends?

Nunnikhoven: Yeah, and specifically from us, this is something we’ve been dealing with with our customers over the last few years, because we’ve got customers who are 99 percent in the cloud and 1 percent in a data center somewhere, and the exact opposite. And it really—you know, every culture and enterprise is unique in what their blend and their mix of those environments is. So having the tool be agnostic to that is really important. So, you know, we’ll scale from protecting one system, to protecting 100,000 systems, with the same platform, very, very easily.

And I think there are a lot of projects out there, you know, not just commercial offerings from Trend Micro, but we’re seeing this a lot in the operational monitoring space. So, you know, things like Riemann or Logstash, which are two great open source projects to help you correlate data, they work equally well in the cloud as they do in the data center, and I think that’s really a big win for people, is that you need to put the assets in the environment that makes the most sense, but you should be able to use modern tools in both. And that’s a real big, important point for people to take away, is that you want to be—even if you’re in a traditional sort of slowly evolving data center environment, you want to be making sure that you’re leveraging some of the great advances we’ve made in security and in operational tools.

Shimel: Excellent, excellent. Mark, we’re coming near the end of our time, as this always seems to go quick. But let’s talk a little bit about the research that you’ve been doing, and just give our audience maybe a little insight. What are you kind of researching now? What do you find really interesting?

Nunnikhoven: Yeah, the biggest area I’m focusing on right now is the rise of what’s been deemed the serverless architecture. So obviously sort of a poor choice of names. There are servers somewhere. But it’s the idea of leveraging a function-as-a-service offering like Microsoft Azure Functions or AWS Lambda, where you’ve just got code that you input that runs in a sandbox on someone else’s system and you don’t have to worry about any of that. So scaling and operational tasks are kind of out the door, but these are heavily leveraging SaaS services and you’re sort of picking the best aspects of multiple providers, and stitching it together into one big application. And there’s a lot of really interesting advantages for the business there, because you’re focusing purely on delivering value to your users.

But from a security perspective, now you’ve got multiple parties and systems that you have to trust, that have to work together, that have to share security controls, and then that is normally only one application in a suite of others. So you’ve got serverless applications, you’ve got applications that are running on containers, and then traditional applications. So I’ve been looking at not just the serverless, but how do you address all of these in a consistent manner, so that you can apply security to the data where it’s being processed.

Shimel: Excellent. Mark, we are butting up against time here, but one last question, and it’s one I frequently ask our guests here on DevOps Chat. If you had to recommend one book to our audience to read, not necessarily the last book you read or anything like that, but one book they should read, what’s your recommendation?

Nunnikhoven: Yeah, that’s always a tough question. I love reading, so I’m always reading three or four books at a time. And, you know, it’s a safe assumption that this audience has already read “The Phoenix Project.” If they haven’t, they should. My latest favorite in this space is called “The Art of Monitoring.” It’s by James Turnbull, and it’s a look at how to implement metrics and measurements around running applications, regardless of where they are. So it talks about logging, it talks about real-time alerting, and it’s a really great approach, in that, you know, it’s always looking at results. So not just collecting data for data’s sake, but it sort of walks you through how all of this stuff should roll up to give your teams some actionable insight on how your application is doing. It’s a really great book.

Shimel: Got it, interesting. Well, Mark, thank you so much for being this episode’s guest on DevOps Chat. We’d like to maybe have you back on sometime and talk to us about more of the research you’re doing. Continued success to you and Trend Micro, and thanks again. This is Alan Shimel, for DevOps Chat, and thanks for joining us. Until next time.

Deep Security has some great out-of-the-box integrations with AWS, like our cloud connector, which automatically manages your current inventory of EC2 instances within Deep Security. But did you know that Trend Micro’s Deep Security has an API that lets you integrate even further with AWS?

This API lets you automate some of the more repetitive tasks like user creation, permissions assignment, and account configuration.

Nobody likes to manually enter the same data over and over again. This post highlights a simple script that will automatically:

  • Create a new AWS IAM user that Deep Security can use
  • Create and assign the IAM role that grants Deep Security the permissions required to see your EC2 instances
  • Configure your AWS account and all regions within Deep Security

This takes a multi-step process and turns it into a quick, repeatable command-line tool. Watch the following video to see it in action:

 

OK, so isn’t that neat and easy? In order to execute it, you’ll need the following:

  • AWS CLI tools installed and configured, with at least the rights to create and modify users in IAM (if you don’t have this installed, check out the following link: https://aws.amazon.com/cli/)
  • Ability to execute a BASH shell script
  • The script itself (see below)
The script is available in this GitHub gist. Now you can run the script with a command similar to the following:

$ create-iam-cloudaccount <managerUsername> <managerUrl:port> Amazon <newAwsUserToCreate> (if needed <tenant ID>)

So if I were to use it on a newly created Deep Security Manager, it might look like:

$ create-iam-cloudaccount administrator 10.X.X.X:443 Amazon DeepSecUser

It is really that easy. This will attach to most regions currently in AWS (at the moment, Seoul is supported on only some versions of the manager). If you don’t want to sync every region, you can remove some of them from line 33 in the script.
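If you’re curious what a run like that is doing under the hood, the IAM half boils down to a few AWS CLI calls along these lines. This is a minimal sketch, not the actual script: the names are placeholders, the gist creates and assigns an IAM role, and this illustration simply attaches AWS’s managed read-only EC2 policy directly to the user instead.

# Create the user that Deep Security will authenticate as (name is a placeholder)
aws iam create-user --user-name DeepSecUser

# Generate the access and secret keys that get entered into Deep Security
aws iam create-access-key --user-name DeepSecUser

# Grant read-only EC2 visibility so Deep Security can inventory your instances
aws iam attach-user-policy --user-name DeepSecUser \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess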

Now that you are synced, you can start to take advantage of all the great automated tasks Deep Security can perform, which have been highlighted in previous and upcoming blog posts. As always, if you have any questions, please reach out at aws@trendmicro.com, and keep an eye on www.trendmicro.com/aws for new blogs and security tricks in Amazon Web Services.

 Post written by AWS Security Ninja: Zack Milem

 

As a principal architect at Trend Micro, focused on AWS, I get all the ‘challenging’ customer projects. Recently a neat use case has popped up with multiple customers and I found it interesting enough to share (hopefully you readers will agree).

The original question came as a result of queries about Deep Security’s SIEM output via syslog and how best to do an integration with Sumo Logic. Sumo has a ton of great guidance for getting a local collector installed and syslog piped through, but I was really hoping for something: a little less heavy at install time; a little more encrypted leaving the Deep Security Manager (DSM); and a LOT more centralized.

I’d skimmed an article recently about Sumo’s hosted HTTP collector which made me wonder – could I leverage Deep Security’s SNS event forwarding along with Sumo’s hosted collector configuration to get Events from Deep Security -> SNS -> Sumo?

With Deep Security SNS events sending well-formatted JSON, could I get natural-language querying in Sumo Logic search without defining fields or parsing text? This would be a pretty short post if the answers were no… so let’s see how it’s done.

Step 1: Create an AWS IAM account

This account will be allowed to submit to the SNS topic (but have no other rights or role assigned in AWS).

NOTE: Grab the access and secret keys during creation, as you’ll need to provide them to the Deep Security Manager (DSM) later. You’ll also need the ARN of the user to give to the SNS topic. (I’m going to guess everyone who got past the first paragraph without falling into an acronym coma has seen the IAM console, so I’ll omit the usual screenshots.)
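If you prefer the CLI to the console, the equivalent is roughly the following (the user name is a placeholder):

# Create the publish-only user
aws iam create-user --user-name DeepSecSNSUser

# Generate the access and secret keys you'll enter into the DSM in Step 6
aws iam create-access-key --user-name DeepSecSNSUser

# Capture the user's ARN for the topic policy in Step 3
aws iam get-user --user-name DeepSecSNSUser --query User.Arn --output text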

Step 2: Create the Sumo Logic Hosted HTTP Collector. Go to Manage -> Collection, then “Add Collector”.


Choose a Hosted Collector and pick some descriptive labels.


NOTE: Make note of the Category for later


Pick some useful labels again, and make note of the Source Category for the Collector (or DataSource if you choose to override the collector value). We’ll need that in a little while.

 Tip

When configuring the DataSource, most defaults are fine except for one: in the default configuration, Enable Multiline Processing will split each key:value pair from the SNS subscription into its own message. We’ll want to keep those together for parsing later, so have the DataSource use a boundary expression to detect message beginning and end, using this string (without the quotes) for the expression: (\{)(\})


Then grab the URL provided by the Sumo console for this collector, which we’ll plug into the SNS subscription shortly.


Step 3: Create the SNS topic.

Give it a name and grab the Topic ARN.
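From the AWS CLI, that’s a one-liner; the call returns the Topic ARN you’ll need (the topic name is a placeholder):

aws sns create-topic --name DeepSecurityEvents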


Personally I like to put some sanity around who can submit to the topic. Hit “Other Topic Actions” then “Edit topic policy”, and enter the ARN we captured for the new user above as the only AWS user allowed to publish messages to the topic.
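For illustration, a minimal publish-only policy set from the CLI might look like this (the account ID, user name, and topic name are placeholders for the values from Steps 1 and 3):

aws sns set-topic-attributes \
    --topic-arn arn:aws:sns:us-east-1:123456789012:DeepSecurityEvents \
    --attribute-name Policy \
    --attribute-value '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::123456789012:user/DeepSecSNSUser" },
        "Action": "SNS:Publish",
        "Resource": "arn:aws:sns:us-east-1:123456789012:DeepSecurityEvents"
      }]
    }'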


Step 4: Create the subscription for the HTTP collector.

Select type HTTPS for the protocol, and enter the endpoint shown by the Sumo Console.
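The same step from the CLI (the topic ARN and endpoint URL are placeholders; use the values from your own topic and collector):

aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:DeepSecurityEvents \
    --protocol https \
    --notification-endpoint https://endpoint1.collection.sumologic.com/receiver/v1/http/YOUR_TOKEN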


Step 5: Go to the search page in the Sumo Console and check for events from our new _sourceCategory:


And click the URL in the “SubscribeURL” field to confirm the subscription.


Step 6: Configure the Deep Security Manager to send events to the topic

Now that we’ve got Sumo configured to accept messages from our SNS topic, the last step will be to configure the Deep Security Manager to send events to the topic.

Log in to your Deep Security console and head to Administration -> System Settings -> Event Forwarding. Check the box for “Publish Events to Amazon Simple Notification Service”, enter the access and secret keys for the user we created with permission to submit to the topic, then paste in the topic ARN and save.

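Once events start flowing, each record Sumo receives is wrapped in the standard SNS notification envelope, roughly like this (abridged; the topic ARN is a placeholder, and the escaped JSON in the Message field is the actual Deep Security event):

{
  "Type": "Notification",
  "MessageId": "...",
  "TopicArn": "arn:aws:sns:us-east-1:123456789012:DeepSecurityEvents",
  "Message": "{ ...the Deep Security event JSON... }",
  "Timestamp": "...",
  "SignatureVersion": "1",
  "Signature": "...",
  "SigningCertURL": "...",
  "UnsubscribeURL": "..."
}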

You’ll find quickly that we have a whole ton of data from SNS in each message that we really don’t need associated with our Deep Security events. So let’s put together a base query that will get us the Deep Security event fields directly accessible from our search box:

_sourceCategory=Deep_Security_Events | parse "*" as jsonobject | json field=jsonobject "Message" as DSM_Log | json auto field=DSM_Log


Much better. Thanks to Sumo Logic’s auto json parsing, we’ll now have access to directly filter any field included in a Deep Security event.
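From here you can filter or aggregate on any parsed field. For example, a variation like the following would break events down by type (the EventType field name is illustrative; substitute whichever fields your own events actually carry):

_sourceCategory=Deep_Security_Events | parse "*" as jsonobject | json field=jsonobject "Message" as DSM_Log | json auto field=DSM_Log | count by EventType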

Let your event management begin!

Ping us on aws@trendmicro.com if you have any feedback or questions on this blog… And let us know what kind of dashboards your ops teams are using this for!

Park in a well-lit area and check your car before getting in it!

It may only seem like advice a mother tells her child when they first start driving. But she does have a good point – checking for tampering and dangers gives us visibility and allows us to keep ourselves safe from threats.

The concept can very easily be extended to information security.  How? By giving yourself visibility and not leaving yourself in the dark.  Sometimes you need to look a little deeper before making assumptions about threats.

This all sounds good, but how do we put this into play without creating an unmanageable stream of data?  Let’s start out with one basic tool that is often overlooked: Integrity Monitoring.

Most of the time, when organizations deploy Integrity Monitoring, they do it to meet a compliance requirement.  Compliance frameworks include these requirements because monitoring key parts of your system can point to potential security concerns. So now not only has your mother been giving you security advice, but so has your compliance officer.  The issue here is that we could monitor everything and try to look at every change.  That, however, gets us back to the unmanageable stream of data that tends to get overlooked and not reviewed.

There are many sources that you can read that point out the advantages of monitoring key system locations:

These two resources point to key items to monitor in a system.  There are many more out there that a simple Google search will reveal.

Here are five points that make a good starting place for monitoring, each with a brief explanation:

  1. Files being dropped onto a system that could be remote tools or have other malicious intent. Most often these are dropped in locations where they can easily be overlooked or disguised, such as the recycle bin.
  2. Installation of new software.
  3. A new process or service is set to start up on reboot. This could indicate an attacker trying to create persistence on a system.
  4. New registry values. These could point to malicious software.
  5. System files being modified – attackers will often try to inject into existing systems to disguise their software.
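At its core, Integrity Monitoring is change detection against a known-good baseline. As a back-of-the-napkin illustration only (Deep Security does this natively, at scale, and with far richer attributes than a simple content hash), the idea looks like this in plain shell:

# Record a baseline of hashes for files you care about
sha256sum /etc/*.conf > /var/tmp/baseline.sha256

# Later, flag anything that no longer matches the baseline
sha256sum --check --quiet /var/tmp/baseline.sha256 || echo "Integrity change detected"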

Trend Micro’s Deep Security has the capability to monitor the integrity of key locations on your system.  Below are outlines of some of our base rule sets that tie back to these five points.

By using some very basic Integrity Monitoring rules, you can easily identify some of these noted areas of concern.

1005041 – Malware – Suspicious Microsoft Windows Files Detected

1005042 – Malware – Suspicious Microsoft Windows Registry Entries Detected

TMTR-0022: Suspicious Files Detected In Recycle Bin

TMTR-0002: Suspicious Files Detected In Operating System Directories

1002776 – Microsoft Windows – Startup Programs Modified

This rule alerts when there is any change in the file attributes of user Startup programs located under the Profiles directories. The rule also monitors the directory permissions of Startup programs found under the Profiles directory, as well as modifications to the registry entries created by Startup programs and Winlogon.

The rule provides configuration options to select which file attributes to monitor, and to list files under %ProfilesDir%\username\Start Menu\Programs\Startup that should be excluded from monitoring.

1002778 – Microsoft Windows – System .dll or .exe files modified

This rule alerts when there is a change in the file attributes (Created, LastModified, Permissions, Owner, Group, Size, and Contents) of .dll or .exe files under the %WINDIR%\system32 path. The rule also provides configuration options to ignore files and to select which file attributes to monitor.

1006799 – TMTR-0014: Suspicious Service Detected

Microsoft Windows – ‘Hosts’ file modified

This rule alerts when there is any change in the file attributes of the Windows ‘Hosts’ file found under the %WINDIR%\system32\drivers\etc (e.g., C:\WINDOWS\system32\drivers\etc) directory.

TMTR-0016: Suspicious Running Processes Detected

By implementing the simple rules above, you will be able to gain insight into possible security concerns that could easily be overlooked. So listen to your mother and park in a well-lit area. Here is more detailed information for each of the rules listed.

1005041 – Malware – Suspicious Microsoft Windows Files Detected

[Screenshot: rule configuration details]

1005042 – Malware – Suspicious Microsoft Windows Registry Entries Detected

[Screenshot: rule configuration details]

1002776 – Microsoft Windows – Startup Programs Modified

[Screenshots: rule configuration details]

Written by Cloud Security Expert Tony Allgrati

This document describes how the joint AWS and Trend Micro Quick Start package addresses NIST SP 800-53 rev. 4 security controls.

Trend Micro and AWS have included a matrix that can be sorted to show shared and inherited controls and how they are addressed. The matrix provides additional insight by mapping to Federal Risk and Authorization Management Program (FedRAMP) controls, NIST SP 800-171 (Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations) data protection requirements, the OMB Trusted Internet Connection (TIC) capability requirements as described in the FedRAMP TIC Overlay (pilot) program, the DoD Cloud Computing Security Requirements Guide (SRG), and the Committee on National Security Systems Instruction (CNSSI) No. 1253.

Download Security Control Matrix


Trend Micro and AWS are excited to announce the release of a new Quick Start reference deployment. The new solution is part of AWS Professional Services Enterprise Accelerator – Compliance, a program that helps federal agencies and other customers comply with the National Institute of Standards and Technology (NIST) high-impact security control requirements (NIST SP 800-53 rev. 4) in the AWS cloud. Check out AWS’ blog on the new Quick Start.


We presented alongside AWS at the AWS Public Sector Summit in Washington, D.C., covering how to:

  • Leverage AWS and Trend Micro security controls to retain logs, control access to systems, monitor changes, and much more
  • Automate security using technologies like AWS CloudFormation

To access these new AWS Quick Starts, go to the Quick Start Reference Deployments main page and scroll down to the Compliance, security, and identity management section.