With the rise of CI/CD (Continuous Integration/Continuous Deployment) and other software engineering practices like no-patch environments and blue/green deployments, teams are under immense pressure to deliver working software quickly, with no downtime for customers. Whether you are pushing application updates multiple times a day or redeploying new Azure VMs with each code update, an application control tool needs to be as flexible as the deployment it is protecting.

Deep Security’s Application Control module lets you implement software changes dynamically, so your development team can create and deploy software without a security tool becoming a roadblock.

The first way Deep Security achieves this is with its implementation of Application Control. When you first enable it, the agent takes inventory of the virtual machine and automatically adds all installed software to its approved list. This is perfect for no-patch and blue/green deployments: when your new virtual machines are built with the new code, that code is automatically approved by Deep Security. Gone are the days of adding every new build to an approved list before the code is pushed.

But what if you are deploying new code and re-using existing Azure VMs? The Maintenance Mode feature, with its API tie-in, is the solution for this environment. Maintenance mode allows a virtual machine to be patched or updated while automatically adding any changes to its approved application list. Because Deep Security has an open API architecture, you can add this maintenance step to your code deployment tool, such as Jenkins.

By using the following API call, you can turn on maintenance mode for a set number of minutes just prior to the code deploy.

dsm.set_trusted_update_mode(hostID, minutes)
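As a sketch of how this call might be wired into a pipeline step, the wrapper below opens a maintenance window slightly longer than the expected deploy time, then runs the deploy. Only `set_trusted_update_mode` comes from this post; the client object, `host_id`, and `run_deploy` are hypothetical stand-ins for your own API client and deploy step.

```python
# Sketch of a Jenkins-style deploy step wrapped in a Deep Security
# maintenance window. set_trusted_update_mode(hostID, minutes) is the
# call from this post; everything else is a hypothetical stand-in.

def maintenance_window_minutes(deploy_estimate_min, buffer_min=5):
    # Pad the expected deploy time so maintenance mode outlives the deploy.
    return deploy_estimate_min + buffer_min

def deploy_with_maintenance(dsm, host_id, run_deploy, deploy_estimate_min=10):
    # Open the approval window just before pushing code...
    dsm.set_trusted_update_mode(host_id,
                                maintenance_window_minutes(deploy_estimate_min))
    # ...then deploy; file changes made during the window are auto-approved.
    run_deploy()
```

In a pipeline you would call `deploy_with_maintenance` from the stage that pushes the new build, so the approval window and the deploy always travel together.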

Here in the GUI, you can see that Maintenance mode has been turned on via the API call:

Also, within Deep Security under the “Actions” section, we give you a comprehensive list of applications running in your environment that have yet to be approved, so you can quickly see the changes that have occurred. This gives you the ability to approve newly deployed programs or block files deemed suspicious or malicious.
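If you drive that triage programmatically instead of through the GUI, the logic boils down to a per-item allow-or-block decision. The sketch below is purely illustrative; the function name and its inputs are hypothetical, not Deep Security’s actual API.

```python
# Illustrative triage of an "unrecognized software" list: approve items
# that landed under trusted deployment paths, flag the rest for blocking.
# Hypothetical sketch, not Deep Security's actual API.

def triage_unrecognized(unrecognized_paths, trusted_prefixes):
    """Map each unrecognized file path to an 'allow' or 'block' decision."""
    decisions = {}
    for path in unrecognized_paths:
        trusted = any(path.startswith(prefix) for prefix in trusted_prefixes)
        decisions[path] = "allow" if trusted else "block"
    return decisions
```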

With Deep Security, you have the power of Application Control, along with our other security controls, all of which can be accessed programmatically to help your security be as dynamic and agile as your development teams.

For more information, please contact us at azure@trendmicro.com

[Editor’s note: For the latest WannaCry information as it relates to Trend Micro products, please read this support article.]

Information on Latest Ransomware Attacks

Two weeks and counting since the initial exposure of the WannaCry ransomware outbreak, and organizations are still feeling the effects of the attacks. With over 230,000 users infected globally and the emergence of new attacks like UIWIX and EternalRocks, the gravity of the situation is becoming increasingly evident. To keep you up to date, we are continually publishing new information on the latest ransomware threats through our Simply Security blog. There you can find a breakdown of the attacks as well as their present and future impacts.

Prevention and Support

Looking to prevent WannaCry using Trend Micro products? Visit our support page for detailed procedures on protecting yourself and your business.

The Reality of Patching

The WannaCry ransomware variant of 12-May-2017 was engineered to take advantage of one of the most common security challenges facing large organizations today: unpatched vulnerabilities. It’s not uncommon for organizations to take 100 days or more to deploy a patch. Why? The answer is rarely straightforward and differs depending on the objectives and responsibilities of an organization. Read WannaCry & The Reality of Patching for an in-depth look at updating legacy systems, the cost of patching versus the cost of a breach, and mitigation strategies.

Beyond WannaCry

While WannaCry will soon be a thing of the past, ransomware attacks will continue to be a part of the future. With over 1.5 billion ransomware attacks in 2016, it is clear that this growth will continue through 2017. Proactively securing your business is the only way to defend against potential breaches. Luckily, you are not alone. Watch our webinar with VP of Cloud Research Mark Nunnikhoven as he walks through the new threats and vulnerabilities that could put you at risk. Mark covers UIWIX and EternalRocks, as well as the vulnerabilities associated with the ShadowBrokers leak, to help you understand what is going on and how to deal with the situation. The information presented will help you better communicate the current state of these threats to your board or boss.

Defending the Hybrid Cloud: Top 5 Challenges and How to Succeed

Watch the SecureWorld webinar with three industry experts: Johan Hybinette, CISO at Vonage; Mike Muscatel, Information Security Manager at Snyder’s-Lance, Inc.; and Steve Neville, Director of Hybrid Cloud Security at Trend Micro.

This webcast addresses challenges like:

  • Multiple environments, multiple tools – complexity abounds
  • Understanding where cloud provider responsibilities end and yours begin…and how to deal with it
  • Traditional security approaches don’t fit the cloud
  • Advanced threats, including ransomware, are a growing concern
  • Compliance

This year at RSA 2017, we caught up with VP of Cloud Research Mark Nunnikhoven to get his insights on the trends and challenges modern security teams face, and the steps we can take toward a more secure, layered approach to hybrid cloud security. Whether you’re moving to the cloud, your DevOps team is feeling the pressure of security responsibilities, or you can’t determine whether the latest “silver bullet” solution is what you really need, Mark provides the answers to your burning security questions.

Here are some great takeaways from the interview to help you answer your hybrid cloud security questions.

More and more we see DevOps teams feeling overwhelmed by the challenges of increasing responsibilities with fewer resources. What can your security vendor do to help balance the load?

There is currently an overload in messaging from all vendors, but you can’t rely on one thing; you have to look to your security best practices. Advanced techniques like machine learning can help, but if you can catch a problem with a simple check against what’s known good or known bad, why wouldn’t you go for the simpler solution?
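A minimal sketch of that “simple check” idea: hash the file and compare it against known-good and known-bad sets before escalating to anything fancier. The sample inputs here are illustrative, not real threat intelligence.

```python
import hashlib

# Minimal known-good / known-bad check: classify a file by its SHA-256
# digest before reaching for heavier analysis like machine learning.
def classify_file(data, known_good, known_bad):
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_bad:
        return "block"
    if digest in known_good:
        return "allow"
    return "inspect"  # unknown: escalate to deeper analysis
```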

“Machine learning” is the newest fad in security, but the definition isn’t always so clear. How do you define machine learning and what is it doing to improve security?

It’s a big buzzword right now in security, but it isn’t even a new tool to Trend. Machine learning, to the IEEE Computer Society and the broader tech community, is clearly defined. The simple result is setting up a computer program that can look at something and decide whether or not it matches a known set. Over time the model learns and is able to make judgments based on what it has learned.

With that understanding of machine learning, it’s easy to understand why others might consider it to be the be-all end-all solution. Why might it not be enough?

Nothing is perfect. Even the best-trained machine learning models are only in the high 90s for accuracy. You can’t expect a one-trick-pony solution for security. When you’re responsible for a customer’s data, you can’t responsibly protect it with one tool. People have made those controls, and people make mistakes.

You’ve been hit with ransomware. What are the steps you should take in the first 24 hours?

It depends what level of user you are. Hopefully you’ve taken the steps to back up your data and apply basic security controls. While it’s hard to give generic advice, if you have been breached, the easiest thing you can do is disconnect your system from the network but leave it on. Once you’ve disconnected, you prevent further damage; if you turn the machine off, you can actually increase the damage. Then get in touch with an expert: an IT help desk, consulting company, or service provider. But leave the system as is! This gives you a better chance of recovering your data. This is a nightmare scenario, so ideally you take the preventative measures up front.

Many teams are making the move to a hybrid cloud environment. What can security vendors be doing to help with that move and make the transition easier?

What people need to realize up front is that it’s not a spontaneous move; it’s a transition that applies to both security and operations. Look at where you want to be in the cloud and your ideal end state, identify the tools needed to make that change, and start applying those changes today in your data center. The faster your teams get used to those new tools and skills as you migrate assets out to the cloud, the smoother your transition will be.

Follow @marknca for your daily dose of cloud security news

DevOps.com sat down with Trend Micro’s Mark Nunnikhoven, VP of Cloud Research, to discuss hybrid cloud, hybrid cloud security, DevSecOps, and more.

Learn about the evolving security needs for data center and cloud, and how Deep Security makes DevSecOps a reality by providing a single pane of glass across physical, virtual and cloud environments.

TRANSCRIPT

Alan Shimel: Hi, this is Alan Shimel, DevOps.com, here for another DevOps chat, and I’m joined today by Mark Nunnikhoven of Trend Micro. Mark, welcome to DevOps Chat.

Nunnikhoven: Thanks for having me, Alan.

Shimel: I hope I didn’t mangle your name too bad, did I?

Nunnikhoven: No, it was perfect. It’s a tricky one. It looks really menacing, but ends up being relatively simple after you’ve heard it a few hundred times, so –

Shimel: Absolutely, very good. So, Mark, I asked you to join us today—I think we had done a webinar and talked a bit offline about, you know, some of the challenges around securing hybrid cloud and, you know, versus obviously people moving stuff to the cloud, people still have stuff in their data center, and what’s good for one may not necessarily be good for the other, but what’s good for both might be an entirely different thing. And I wanted to just spend a little bit of time with you talking about the challenges of securing the hybrid cloud.

Nunnikhoven: Yeah, it’s definitely an interesting topic that I don’t think gets enough attention, so I’m glad we can have this chat.

Shimel: Yep. Before we jump into it, though, Mark, I’d be remiss if I didn’t give you a chance to just let people know, I did mention you’re with Trend Micro, but what is your role there?

Nunnikhoven: Yeah, I’m the vice president in charge of cloud research. So I look at how organizations are tackling security in the cloud, in hybrid environments for, you know, traditional servers and workloads, containers, serverless, sort of anything that’s breaking new ground. It’s a lot of fun and there’s a lot of really interesting stuff happening.

Shimel: Great. So, Mark, you know, I think I started us off by saying there’s people who are making solutions for the cloud, and there’s people making solutions for the data center. And what’s good for one is not necessarily good for the other, and then, of course, most of the world is living in some sort of hybrid environment.

Nunnikhoven: Yeah.

Shimel: What do we do?

Nunnikhoven: [Laughs] That’s a great question, ’cause, you know, I’ve seen—working with organizations around the world, I’ve seen sort of all of the possible scenarios, I think at least the major ones that people kind of come to thinking might work, and they sort of range from we’re going to force data center processes and controls into the cloud, which is a recipe for disaster. You know, they’re very different environments, to the opposite, of we’re gonna take everything that’s working in the cloud and we’re gonna force it on the data center, to, you know, we’re gonna run two completely different environments in completely different ways, and essentially double the workload. None of those is really satisfactory. Sort of the best route forward tends to be looking at what’s working in the cloud and trying to push a lot of those concepts into the data center. But you still need to make adjustments for the reality that the data center is very, you know, sort of slow and very high level of rigor and very formulaic, and, you know, very well-established. So you do need to make adjustments, but, you know, the goal is to get one way of doing things with allowances for the difference in both environments.

Shimel: Yep. And Mark, as you said, that may be the goal, but, you know, from day one, you obviously—you need a plan to get there. And can you maybe share with our audience a little bit, where do you see the logical steps, if you will, to getting there?

Nunnikhoven: Yeah, and I think that’s a really good way to phrase it, you know, because we can all—we all love the architecture PowerPoints and you go to the events and you get real excited about what the eventual design and implementation will look like. But getting there can be really ugly and really messy. I think the biggest thing, and it’s unfortunately the thing that people tend to forget about, is looking at not just the security, but the operations processes and controls that you’re using today, and why you’re using them. And I think that’s the big piece that people miss, is, you know, why am I using something like an intrusion prevention system, or why am I using anti-malware, or why do I have this change review board for any sort of operational change I want to make. And once you start to dig into the why you do these things, you realize that the implementation can change, as long as you’re still meeting that why.

So, you know, we put intrusion prevention in place so that we make sure that any time we open up traffic to this server, that we verified it is actually secure web traffic like we’re expecting. Once you know that’s the reason, then you can look and say in my data center, that makes sense to have a big perimeter appliance doing that job, whereas in the cloud, it makes more sense to have something running on the host, doing that same job. So you’re still getting the same answer to why. It’s just the how is different in each environment. So it’s—you’ve got to dig into those, and it starts to clarify what you’re doing.

Shimel: Got it. Mark, what are we seeing—I mean, what—obviously at Trend, you’re probably talking with a lot of customers. Are they coming from the cloud down with their solution? In other words, buying a cloud solution and building—or from the data center up, if you will?

Nunnikhoven: A lot of the time we’re seeing the data center up, simply because it’ll be one team within a larger organization. So we’re talking mainly, you know, traditional enterprise and large governments. And it’ll be one team that’s gone and said we’re gonna push everything and build it new in the cloud, and when they start to kick off that process, they start to get a lot of existing policy and existing culture enforced on them. So they’ll get, you know, specifically the security. They’ll get people saying, well, you need this, you need to make sure that, you know, you go across these boundaries in your design, you’re networking a certain way. So they start to try to take those same designs, which are great in the data center, but horrible in the cloud, and they try to push those into the cloud, and they usually end up failing. And then about six months afterwards realize that they need to take a new approach, because it is a completely different environment.

Shimel: Got it. And, you know what, Mark, I’m a firm believer that at least for the foreseeable future, five, 10 years, hybrid cloud is the—is the dominant, right, the dominant usage pattern that we’ll see from people. Is this yours as well?

Nunnikhoven: Yeah, 100 percent. And for me, it’s simple economics. As much as you want to be in the cloud, even if you’re fully bought in, you know, you’ll see it at the major events from the big cloud providers. They’ll pull up big enterprises whose CEO or CIO is, you know, yelling at the top of their lungs that they’re gonna be all in on the cloud. Even in those scenarios, they’re still going to take 12 to 18 months to move out of their data centers.

But the reality is most enterprises have already made a multimillion dollar investment in technology that’s working. It might not be working as fast as they want, but you can’t walk away from that. The cost savings in the cloud, if they’re there, are not that significant that you’re gonna leave millions on the table. So the reality is, you know, for a standard data center life cycle, so like you said, a five to seven year term, you’re gonna be dealing with two environments. And it’s not just that it’s two environments, it’s two environments that are completely different. It’s very much apples and oranges. So you do need a different approach.

Shimel: Got it. And, Mark, I don’t want to put you on the spot, but what Trend Micro specifically, what kind of solutions are you guys, you know, working with customers on?

Nunnikhoven: Yeah, so one of our big pushes, and we’ll just hit it real brief, because we don’t want it to be a sales pitch at all, is a platform called Deep Security, which we built on the principle of putting security as close to the data and application as possible. And because of that, we’ve been able to adapt it quite well for multiple clouds, for the data center, and for the hybrid scenario, because we’ve taken a lot of the principles that work in the cloud and made it so you can leverage them where it makes sense in the data center. So a lot of automation, a lot of programmability, and a lot of intelligence in the product, so you don’t have to worry about the nitty-gritty of dealing with security day to day, unless it’s really urgent and requires you to intervene.

Shimel: Excellent. And, I mean, Trend is obviously not the only one doing this, Mark. When we talk about cloud too, I wanted to ask you this, public versus private clouds, where are you seeing this kind of—you know, are we—you know, to me, a private cloud is—goes hand in hand with hybrid, right? But there are people who are doing public cloud and data center, and that’s hybrid as well.

Nunnikhoven: Yeah, and I find for me, you know, private was a real big push back from a lot of the incumbent data center vendors, and, you know, we’ve kind of gotten out of that hype phase and into the reality of it. And I tend to see what a private cloud is, is somebody who’s taken the data center and adapted a lot of the cloud processes and the ideas behind a public cloud and implemented them within their data center, which can be a really good thing. So you can get on-demand resources, so you just query in API instead of sending out a ticket and getting a bunch of people to do work. You know, getting that accountability, getting that accessibility and that visibility that we’re used to in the public cloud, getting that in the data center.

So I think the ideal hybrid scenario is leveraging private cloud/public cloud. A lot of people aren’t there with their data center, because implementing change takes a long time. There’s a lot of cultural change, a lot of tooling change. So you’ll see them with sort of more the traditional data center approach. But if they want to be successful over the next few years, they need to start pushing more of those private cloud ideas internally, because that will help them have one set of processes and tools across both the public and the private cloud.

Shimel: Makes sense, makes sense. And Mark, what about people, though, who just say, you know what, I’m—I’m biting the bullet, I’m just gonna do cloud, you know, or I’m just gonna do data center. You know, first of all, nothing is forever, obviously, right? You could say that today, but things change. Is the Trend Micro solution, specifically, let’s say, is it one that will work with one, but that scales to the other? Or, in other words, how do we stay out of dead ends?

Nunnikhoven: Yeah, and specifically from us, this is something we’ve been dealing with with our customers over the last few years, because we’ve got customers who are 99 percent in the cloud and 1 percent in a data center somewhere, and the exact opposite. And it really—you know, every culture and enterprise is unique in what their blend and their mix of those environments is. So having the tool be agnostic to that is really important. So, you know, we’ll scale from protecting one system, to protecting 100,000 systems, with the same platform, very, very easily.

And I think there are a lot of projects out there, you know, not just commercial offerings from Trend Micro, but we’re seeing this a lot in the operational monitoring space. So, you know, things like Riemann or Logstash, which are two great open source projects to help you correlate data, work equally well in the cloud as they do in the data center, and I think that’s really a big win for people: you need to put the assets in the environment that makes the most sense, but you should be able to use modern tools in both. And that’s a real big, important point for people to take away, is that even if you’re in a traditional, sort of slowly evolving data center environment, you want to be making sure that you’re leveraging some of the great advances we’ve made in security and in operational tools.

Shimel: Excellent, excellent. Mark, we’re coming near the end of our time, as this always seems to go quick. But let’s talk a little bit about the research that you’ve been doing, and just give our audience maybe a little insight. What are you kind of researching now? What do you find really interesting?

Nunnikhoven: Yeah, the biggest area I’m focusing on right now is the rise of what’s been deemed the serverless architecture. Obviously it’s sort of a poor choice of name; there are servers somewhere. But it’s the idea of leveraging a function-as-a-service offering like Microsoft Azure Functions or AWS Lambda, where you’ve just got code that runs in a sandbox on someone else’s system and you don’t have to worry about any of that. So scaling and operational tasks are kind of out the door, but these applications heavily leverage SaaS services, and you’re sort of picking the best aspects of multiple providers and stitching them together into one big application. And there’s a lot of really interesting advantages for the business there, because you’re focusing purely on delivering value to your users.

But from a security perspective, now you’ve got multiple parties and systems that you have to trust, that have to work together, that have to share security controls, and that is normally only one application in a suite of others. So you’ve got serverless applications, you’ve got applications running on containers, and then traditional applications. So I’ve been looking at not just serverless, but how you address all of these in a consistent manner, so that you can apply security to the data where it’s being processed.

Shimel: Excellent. Mark, we are butting up against time here, but one last question, and it’s one I frequently ask our guests here on DevOps Chat. If you had to recommend one book to our audience to read, not necessarily the last book you read or anything like that, but one book they should read, what’s your recommendation?

Nunnikhoven: Yeah, that’s always a tough question. I love reading, so I’m always reading three or four books at a time. And, you know, it’s a safe assumption that this audience has already read “The Phoenix Project.” If they haven’t, they should. My latest favorite in this space is called “The Art of Monitoring.” It’s by James Turnbull, and it’s a look at how to implement metrics and measurements around running applications, regardless of where they are. So it talks about logging, it talks about real-time alerting, and it’s a really great approach, in that, you know, it’s always looking at results. So not just collecting data for data’s sake, but it sort of walks you through how all of this stuff should roll up to give your teams some actionable insight on how your application is doing. It’s a really great book.

Shimel: Got it, interesting. Well, Mark, thank you so much for being this episode’s guest, of DevOps Chat. We’d like to maybe have you back on sometime and talk to us about more of the research you’re doing. Continued success to you and Trend Micro, and just thanks again. This is Alan Shimel, for DevOps Chat, and thanks for joining us. Until next time.

All or nothing is a really bad strategy for securing your Microsoft Azure workloads. You need to know exactly what it is that you must secure, but you can’t do it alone. Microsoft provides robust security for the physical facilities, network infrastructure, and virtualization layer. Ideally, you will match their excellence with equally robust security for your workloads, including the operating system, applications, and data.


But there’s a small catch. If you try to use traditional security to protect your applications and data in the cloud, you risk slowing your Azure project with needless complexity. There’s a simpler and more effective solution: Security that is built for the cloud is easier to deploy in the cloud and works better to secure the cloud. But even then, the way you deploy it can impact your entire project. You have to be careful not to do anything that makes security stand between you and your success in the cloud.

Bake security into your project from the get go

First, start thinking about security very early in the game. The earlier the better. Baking security into your strategy from the get go is the best way to ensure full coverage when your deployment is complete. That includes thinking about implementing the proper controls, such as who has access to the Azure Management Portal, how designated administrators will access cloud resources, and what you need to do to maintain restrictive network policies.

Monitor traffic in and out of your cloud

Now that’s a great start. But you’ll still need to be sure that your applications remain secure during day-to-day usage. That’s where a host-based intrusion prevention system (IPS) comes in. It helps ward off unauthorized access by monitoring incoming traffic to make sure it’s legitimate. Plus, vulnerability shielding will help you keep all workloads up to date, and implementing virtual patches on templates helps you avoid the hassle of patching live workloads.

Detect threats early in the game

But your job is never really done. You still need ways to monitor your security posture and uphold the continuous integrity of critical system files, application configuration files, and application logs. Sure, Azure provides monitoring, but you need every advantage in getting a jump on attackers, and host-based integrity monitoring provides an earlier indication of compromised systems.

Be prepared to act fast if you have to

And in the unlikely event that you discover an attack, you need to be prepared to act quickly to isolate the infected server, identify the cause, and begin repair. Only then can you restore service as quickly as possible. Azure is built to help you improve your incident response. To speed your time to protection, you’ll also want to take advantage of vulnerability assessments and penetration testing to discover as many vulnerabilities as possible before an attacker can use them against you.

Get started now

Want to dive deeper? Get more specifics on the ways you can get the security you need for your Azure project without slowing you down. Trend Micro has created a new white paper that outlines the top 10 security actions you can take to accelerate your application protection within Azure.

The “WHAT”

Disable Monitor Responses from Web Server

The “WHY”

To improve performance on your Web Servers, the ‘Monitor responses from Web Server’ setting may be disabled. When disabled, the DPI engine will not inspect web server response traffic. This would typically result in improved performance, especially for large responses.

Incoming web client requests are still inspected by the DPI engine when this option is unchecked, and DPI rules that protect the web server and web application from malicious attacks are not affected by this setting.

The “HOW”

  • Open up any Policy
  • Click on Intrusion Prevention (on the left)
  • Click on “Assign/Unassign…” button
  • From the top dropdown menus select the following options:
    • Web Application Protection
    • All
    • By Application Type


  • Find the “Web Server Common” section (I believe it should be second on the list and reference 22 rules)
  • Click where it says “Web Server Common” (this will highlight all of the rules), then right-click in the same spot (again, you must right-click where it says “Web Server Common”)
  • Select the “Application Type Properties…” option


  • Click on the Configuration tab
  • Uncheck the “Default” checkbox
  • Uncheck “Monitor responses from Web Server” checkbox:


Congratulations, you’ve successfully disabled ‘Monitor responses from Web Server’!