The process of securing containers is continuous. It should be integrated into your development process, automated to reduce the number of manual touch points, and extended into the maintenance and operation of the underlying infrastructure. This means protecting your build pipeline, container images, runtime host, platform, and application layers. Implementing security as part of the continuous delivery life cycle means your business will mitigate risk and reduce vulnerabilities across an ever-growing attack surface.
When securing containers, the main concerns are:
- The security of the container host
- Container network traffic
- The security of your application within the container
- Malicious behaviour within your application
- Securing your container management stack
- The foundation layers of your application
- The integrity of the build pipeline
The goal of cybersecurity is to ensure that whatever you build continuously works as intended, and only as intended.
Get to know some of the major platforms businesses are using for their container needs: Docker®, Kubernetes®, Amazon Web Services™ (AWS), and Microsoft® Azure™.
Before you start securing your containers, you need to know the key players in the space. Docker, a leader in the containerisation market, provides a container platform to build, manage, and secure applications. Docker enables customers to deploy both traditional applications and the latest microservices anywhere. As with any other container platform, you need to ensure you have proper protection in place. Learn more about Docker container security.
Kubernetes is the next big name to get to know. It provides a portable, extensible, open-source platform for managing containerised workloads and services. While Kubernetes offers built-in security features, attacks on Kubernetes clusters are on the rise, so you also need a dedicated security solution. Learn more about securing Kubernetes.
Amazon Web Services and container security
Next up, we have Amazon Web Services (AWS). AWS understands the need for containers to empower developers to deliver applications faster and more consistently. That is why it offers Amazon Elastic Container Service (Amazon ECS), a highly scalable, high-performance container orchestration service that supports Docker containers. It removes the need to manage your own virtual machines and container environment, and allows you to run and scale containerised applications on AWS with ease. However, as with the other key players above, you need security to gain the full benefits of this service. Learn more about AWS container security.
Securing Microsoft Azure Container Instances
Last, but not least, we have Microsoft® Azure™ Container Instances (ACI). This solution empowers developers to deploy containers on the Azure public cloud without the need to run or manage any underlying infrastructure. You simply spin up a new container through the Azure portal, and Microsoft automatically provisions and scales the underlying compute resources. ACI offers great speed and agility, but it needs to be secured to properly reap all of the benefits. Learn more about securing Microsoft Azure Container Instances.
Now that you know the major players, let’s get into how to secure them – or dive into the links above for specifics on securing each solution.
Securing the container host
Securing the host starts with selecting its operating system. Whenever possible, you should use a minimal operating system that is optimised to run containers. If you’re using a stock Linux® distribution or Microsoft® Windows®, you’ll want to make sure that you disable or remove unnecessary services and harden the operating system in general. Then, add a layer of security and monitoring tools to ensure that your host is running as you would expect. Tools like application control or an intrusion prevention system (IPS) are very useful in this situation.
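Hardening steps like disabling unneeded services can themselves be automated. A minimal sketch using an Ansible playbook – the host group name and the list of services to disable are assumptions; adjust them to your distribution:

```yaml
# Illustrative Ansible playbook: harden a Linux container host.
# "container_hosts" and the service names below are placeholders.
- hosts: container_hosts
  become: true
  tasks:
    - name: Stop and disable services the host does not need
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - cups          # printing
        - avahi-daemon  # zero-configuration networking
    - name: Forbid root login over SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: PermitRootLogin no
      notify: restart sshd
  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Running the playbook regularly keeps hosts from drifting away from the hardened baseline.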
Securing container network traffic
Once your container is running in production, it will need to interact with other containers and resources. This internal traffic must be monitored and secured by ensuring all network traffic from your containers passes through an IPS. This changes how you deploy the security control. Instead of implementing a small number of very large traditional IPS engines at the perimeter, you implement the IPS on every host, which allows all traffic to be monitored effectively without significantly impacting performance.
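A host-based IPS inspects the packets themselves; in Kubernetes environments it is often complemented by network policies that limit which containers may talk to each other in the first place. A minimal sketch – the namespace, labels, and port are assumptions:

```yaml
# Illustrative Kubernetes NetworkPolicy: only pods labelled app=frontend
# may reach pods labelled app=api, and only on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: production   # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

All other ingress to the `api` pods is denied once the policy selects them, shrinking the traffic the IPS has to judge.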
Securing the application in the container
Once your container is running in production, it is constantly processing data for your application – generating log files, caching files, and so on. Security controls can help ensure that these are ordinary activities and not malicious. Real-time anti-malware controls running on the contents of the container are critical to success.
An IPS plays a role here as well, in a usage pattern called virtual patching. If a vulnerability is exposed remotely, the IPS engine can detect attempts to exploit it and drop packets to protect your application. This buys you the time needed to address the root cause in the next version of that container instead of pushing out an emergency fix.
Monitoring your application
When deploying your application into a container, a runtime application self-protection (RASP) security control can help. These security controls run within your application code and often intercept or hook key calls within your code. Besides security features like Structured Query Language (SQL) monitoring, dependency checking and remediation, URL verification, and other controls, RASP can also address one of the biggest challenges in security: root-cause identification.
By being positioned within the application code, these security controls can help connect the dots between a security issue and the line of code that created it. That level of awareness is difficult to compete with and creates a huge boost in your security posture.
Securing your container management stack
From a security perspective, the management stack helping to coordinate your containers is often overlooked. Any organisation that is serious about its container deployment will inevitably end up with two critical pieces of infrastructure to help manage the process: a private container registry, such as Amazon Elastic Container Registry (Amazon ECR), and Kubernetes to help orchestrate container deployment.
The combination of a container registry and Kubernetes allows you to automatically enforce a set of quality and security standards for your containers before – and during – their deployment into your environment.
Registries simplify sharing containers and help teams build on each other’s work. However, to ensure that each container meets your development and security baselines, you need an automated scanner. Scanning each container for known vulnerabilities, malware, and any exposed secrets before it is made available in the registry helps to reduce issues downstream.
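One way to automate this gate is to scan every image in the CI pipeline before it is pushed to the registry. A sketch using GitHub Actions and the open-source Trivy scanner – the image name and workflow layout are assumptions:

```yaml
# Illustrative CI job: fail the build if the image contains known
# high- or critical-severity vulnerabilities before it reaches the registry.
jobs:
  scan-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build candidate image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan for known vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: '1'   # non-zero exit blocks the push on findings
```

Because the job fails before any push step runs, a vulnerable image never becomes available for other teams to build on.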
Additionally, you’ll want to make sure the registry is well protected. It should be run on a hardened system or a very reputable cloud service. Even in the service scenario, you need to understand the shared responsibility model and implement a strong role-based approach to accessing the registry.
On the orchestration side, once Kubernetes is running and deployed within your environment, it offers a significant number of advantages that help ensure your teams get the most out of the environment. Kubernetes also provides the ability to implement a number of operational and security controls, such as pod security policies and network policies, allowing you to enforce various options to meet your risk tolerance.
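For example, a pod specification can declare a restrictive security context that the orchestrator then enforces on every deployment. A minimal sketch – the pod name and image reference are placeholders:

```yaml
# Illustrative pod spec: run unprivileged, read-only, with no extra capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # placeholder name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Declaring these settings in the spec, rather than relying on runtime convention, means a container that tries to escalate privileges or write to its root filesystem simply fails.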
Building your application on a secure foundation: container scanning
You need a container image scanning workflow in place to ensure that the containers you use as building blocks are reliable and secured against common threats. This class of tools scans the contents of a container, looking for issues before it is used as a building block for your application, and performs a final set of checks before a container is deployed to production.
When properly implemented, scanning becomes a natural part of your coding process: a fully automated step that can quickly and easily identify issues introduced as you develop your application and its containers.
Ensuring the integrity of the build pipeline
Attackers have started to shift their attacks towards earlier stages of your continuous integration/continuous delivery (CI/CD) pipeline. If an attacker successfully compromises your build server, code repository, or developer workstations, they can persist in your environment for significantly longer. You need a strong set of security controls that are kept up to date.
Implement a strong access control strategy throughout the pipeline, starting at your code repository and branching strategy, extending all the way to the container repository. You need to ensure that you implement the principle of least privilege – only providing as much access as needed to accomplish the required tasks – and audit that access regularly.
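At the deployment end of the pipeline, Kubernetes expresses least privilege through role-based access control (RBAC). A sketch granting a CI service account only the verbs it needs in a single namespace – the account, role, and namespace names are assumptions:

```yaml
# Illustrative RBAC: the "ci-deployer" service account may manage
# Deployments in the "production" namespace and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: production
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: production
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

Note what the role deliberately omits: no `delete`, no access to secrets, and no cluster-wide binding, so a compromised CI credential is contained to one namespace and one resource type.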
Securing your containers requires a comprehensive approach to security. You must ensure that you’re addressing the needs of all teams within your organisation. Make sure your approach can be automated to fit your DevOps processes, and that you can meet deadlines and deliver applications quickly while protecting each group. Security can no longer be left out or show up at the last minute with demands to change your workflow. Building trusted security controls and automated processes from the start addresses security concerns and makes it easier to bridge the gap between teams.