Why follow best practices?
Understanding and following best practices when building in the cloud on Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform™, Kubernetes, containers, and applications will enable you to get the most out of your toolkit. This includes stronger security as you build, greater proficiency with the tools and services you use, better structure, a faster environment, a reliable system that withstands outages, and a more cost-effective solution.
Examples of Best Practices
Every product or service has a bad, an OK, a good, and a best way to be used. Let’s look at some common examples:
Commenting in Code
For instance, when learning how to write code, no matter which language, one of the first things you learn is how to comment. Not commenting is bad practice because you or your teammates might not understand the code you wrote, leading to confusion and slower workflows. The best practice is writing comments so that you and anyone else can understand what the comments mean and what the code is meant to accomplish.
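As a small, hedged illustration (the function and numbers are hypothetical), a docstring can explain what a function accomplishes while an inline comment explains the one non-obvious step:

```python
def monthly_cost(hourly_rate: float, hours: float, discount: float = 0.0) -> float:
    """Return the monthly cost of a cloud resource.

    A docstring tells teammates *what* the function does; the inline
    comment below explains *why* a step works the way it does.
    """
    base = hourly_rate * hours
    # Discount is a fraction (0.10 means 10%), not a percentage,
    # so we multiply by (1 - discount) rather than subtracting 10.
    return base * (1.0 - discount)

print(monthly_cost(0.10, 730, discount=0.10))  # roughly 65.7
```

Either kind of comment on its own helps; together they make the code readable to anyone who inherits it.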
Tagging resources
Not using tags or labels with your cloud resources is not the best way of building. Tags can tell you who owns a resource and what project it is part of, so you know the purpose of each resource. Tags also help you find resources, and they are often used by cybersecurity platforms for organisation purposes and exceptions. A comprehensive tagging process can help security and development teams see the entire software supply chain, making it easier to identify and limit the scope of attacks. Therefore, a set tagging process, such as recording the resource owner and project association, is the best way to leverage tags.
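A minimal sketch of what a tagging-policy check might look like; the required keys (`Owner`, `Project`) and the sample resources are assumptions for illustration, not any provider's API:

```python
REQUIRED_TAGS = {"Owner", "Project"}  # assumed organisational tag policy

def missing_tags(resource_tags: dict) -> set:
    """Return which required tags are absent or empty on a resource."""
    return {key for key in REQUIRED_TAGS if not resource_tags.get(key)}

# Hypothetical resources:
tagged = {"Owner": "alice@example.com", "Project": "checkout-api"}
untagged = {"Name": "test-vm"}

print(missing_tags(tagged))    # empty set: compliant
print(missing_tags(untagged))  # non-empty: flag for follow-up
```

Running a check like this regularly (or at provisioning time) keeps untagged resources from accumulating.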
Access control
Allowing everyone access to your environment or resources is a bad practice for obvious reasons. Adding an access control list is better. The best practice, however, is to use multiple layers of protection, such as an access control list, security groups, and access control policies together.
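The layering idea can be sketched in a few lines: a request must pass every layer, not just one. The networks, ports, and roles below are hypothetical placeholders:

```python
import ipaddress

# Layer 1: network ACL (assumed allow-list of CIDR ranges)
ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]
# Layer 2: security group (assumed set of open ports)
OPEN_PORTS = {443}
# Layer 3: access policy (assumed role -> permitted actions)
POLICY = {"deploy-role": {"read", "write"}, "auditor": {"read"}}

def is_allowed(source_ip: str, port: int, role: str, action: str) -> bool:
    """A request is allowed only if it clears all three layers."""
    ip = ipaddress.ip_address(source_ip)
    in_network = any(ip in net for net in ALLOWED_NETWORKS)
    port_open = port in OPEN_PORTS
    permitted = action in POLICY.get(role, set())
    return in_network and port_open and permitted

print(is_allowed("10.1.2.3", 443, "auditor", "read"))     # True
print(is_allowed("203.0.113.9", 443, "auditor", "read"))  # False: stopped at layer 1
```

If any single layer is misconfigured, the other two still stand between an attacker and your resources.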
Encryption
Encryption is a security best practice. This will protect your data from being publicly readable at rest or in transit.
Think of encryption like this: you are delivering an important message from Barbados to Texas. To keep it confidential, you write it in a special code that only you and your friend understand. To further protect the message, you put it in a locked container, and only your friend has the key. This way, both the message and the container are protected, which is like using encryption at rest and in transit. Doing nothing is bad. Doing one or the other is better. Two levels of protection are the best practice.
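As one hedged example of enforcing both layers on AWS, the S3 bucket policy sketch below denies any request that does not use TLS (protecting data in transit) and any upload that skips server-side encryption (protecting data at rest). The bucket name is a placeholder; adapt the statements to your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    },
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
      }
    }
  ]
}
```

The first statement is the "locked container" (transport security); the second is the "special code" (encryption of the stored data itself).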
Use Cases
As mentioned earlier, there are many different use cases or scenarios. Designing takes some strategic planning, and you have to do a bit of a balancing act to find the perfect fit for your goals. At times, you have to use a less restrictive security posture for greater performance, or spend more money to achieve the desired goal. These three use case diagrams will show some common uses and structures, and highlight the best practices in these scenarios to help DevOps teams build secure apps.
Use Case 1: Basic Web App
Cloud service providers like AWS, Azure, and GCP all have an Architecture Framework that defines best practices for using their services.
For example, AWS has the AWS Well-Architected Framework which is composed of six pillars: Reliability, Cost Optimization, Performance Efficiency, Operational Excellence, Security, and Sustainability. Referencing the diagram below, we will explore how aligning your architecture with design pillars can help you build better.
Reliability
In the diagram above, you can see there are two availability zones, each with a database as well as Amazon EC2 instances (virtual machines). This creates reliability by providing backup computing power and a secondary database. Should one zone fail, the other can take over.
Operational Excellence
Here you have auto-scaling, which spins Amazon Elastic Compute Cloud (Amazon EC2) compute power up and down as needed. This helps prevent downtime, lag, and shortfalls in the capacity needed to perform your tasks.
Cost Optimization
As I previously mentioned, auto-scaling will not only scale up but also scale down as needed. This saves money by making sure there aren’t any unutilised Amazon EC2 instances still costing you money. If you aren’t using an instance, auto-scaling will terminate it and bring it back when needed, so you aren’t charged in the meantime.
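The scale-up/scale-down logic can be sketched as a toy version of target tracking; the throughput figures and the min/max bounds are assumptions for illustration:

```python
import math

def desired_capacity(load_rps: float, rps_per_instance: float,
                     min_size: int = 2, max_size: int = 10) -> int:
    """Scale out under load and back in when idle, within fixed bounds.

    min_size keeps spare capacity across availability zones for
    reliability; max_size caps the monthly bill.
    """
    needed = math.ceil(load_rps / rps_per_instance) if load_rps > 0 else 0
    return max(min_size, min(max_size, needed))

print(desired_capacity(900, 100))  # 9: scale out for peak traffic
print(desired_capacity(50, 100))   # 2: scale in, but min_size keeps a backup
```

The same rule that adds instances at peak removes the unutilised ones afterwards, which is where the cost saving comes from.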
Performance Efficiency
Amazon Elastic Load Balancing (ELB) distributes traffic evenly across the Amazon EC2 instances. Using a service like Amazon CloudFront helps decrease retrieval time for content stored in your Amazon Simple Storage Service (Amazon S3) buckets.
Security
Security is obviously a big part of using best practices. In most architectures, you have storage like the Amazon S3 bucket in this diagram. One of the most common storage misconfigurations is leaving it publicly open to everyone. This is the worst practice, though sometimes needed. So, what can you do to safeguard your storage?
One option is to have Amazon CloudFront read and write the data instead of exposing the file storage directly, so the storage can stay private. Another option is using a cybersecurity platform to scan the files being uploaded. Doing one or the other is good practice. If it’s possible to do both, that would be the best practice.
Consider these questions when building: Are you preventing unwanted access in your network security? This is when the layers of access control come into play. Are you controlling the ports where traffic is allowed to flow in and out? Another layer of protection would be using a third-party intrusion prevention or detection system (IPS/IDS) to block malicious traffic.
Do you have the protection needed for the Amazon EC2 instances in the diagram? Is access to the instances restricted to those who need it? This can be accomplished by limiting your team’s access to the specified security groups. Are you using a cybersecurity platform to provide anti-malware, an IDS/IPS firewall, log inspection, and application control?
Sustainability
The new Sustainability pillar was recently added to help users reduce their workload’s energy consumption and improve their efficiency. Consider these questions:
Do you have any Amazon EC2 instances that appear to be overutilised and need to be upgraded for better workload handling and response times? On the other hand, are there any underutilised instances that could be downsized to lower your monthly AWS bill?
Do you have a large number of Amazon EC2 security groups that may be unnecessary or obsolete? Do the Amazon EC2 instances in your AWS account have the appropriate instance types for the workloads deployed?
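A hedged sketch of how such a rightsizing review might classify instances from their utilisation metrics; the 20%/80% thresholds and the fleet data are assumptions for illustration:

```python
def rightsizing_action(avg_cpu_percent: float) -> str:
    """Classify an instance from its average CPU utilisation.

    Thresholds should be tuned to your workload's latency needs;
    20% and 80% are illustrative defaults.
    """
    if avg_cpu_percent > 80:
        return "upsize"    # overutilised: risk of lag and dropped work
    if avg_cpu_percent < 20:
        return "downsize"  # underutilised: paying for idle capacity
    return "keep"

fleet = {"web-1": 92.0, "web-2": 55.0, "batch-1": 4.0}  # hypothetical metrics
for name, cpu in fleet.items():
    print(name, rightsizing_action(cpu))
```

Downsizing the idle instance trims both the bill and the workload's energy footprint, which is the point of the Sustainability pillar.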
Use Case 2: Serverless Application
This use case discusses a serverless application, not shown in the first diagram, and how security best practices can be applied to it.
There is an OWASP Top 10 for serverless applications built with services such as AWS Lambda. The list includes vulnerabilities like SQL injection, broken authentication, and sensitive data exposure, and it helps security and development teams stay ahead of the bad guys.
DevOps teams should leverage third-party application security tools that block the vulnerabilities listed in the OWASP Top 10. It’s also best to adhere to the principle of least privilege, granting only the access and permissions that are absolutely needed and nothing more.
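One concrete least-privilege check is flagging wildcard grants in a function's policy. Below is a minimal sketch over an IAM-style policy document; the sample policy and statement names are hypothetical:

```python
def overly_broad_statements(policy: dict) -> list:
    """Return the Sids of Allow statements with wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt.get("Sid", "<unnamed>"))
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "Scoped", "Effect": "Allow",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Sid": "TooBroad", "Effect": "Allow",
         "Action": "s3:*", "Resource": "*"},
    ],
}
print(overly_broad_statements(policy))  # ['TooBroad']
```

The scoped statement passes because it names one action on one resource; the wildcard statement is exactly the kind of grant least privilege says to tighten.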
Use Case 3: Container Security
In this use case, we will focus on container security best practices. Common security problems for containers include inaccurately built images, Dockerfiles revealing sensitive data, unreliable resource usage, and outdated images.
It isn’t always easy to avoid some of these problems, like unknowingly using a carelessly built container image found on GitHub or in other open-source code. To prevent this, you can use a cybersecurity platform to scan and monitor open-source code as well as its dependencies and libraries. Some cybersecurity platforms can complete pre-runtime scans, integrate into your pipeline, and provide container security during runtime. Additional best practices include regular CI/CD testing and deployment and keeping your images small.
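To make those pre-runtime checks concrete, here is a toy Dockerfile linter covering a few of the problems above: unpinned base images, secrets baked into the image, and containers running as root. The rules and the sample Dockerfile are illustrative only, nowhere near a real scanner's coverage:

```python
def lint_dockerfile(lines: list) -> list:
    """Flag a few common container image problems (illustrative rules only)."""
    findings = []
    for n, raw in enumerate(lines, start=1):
        line = raw.strip()
        upper = line.upper()
        if upper.startswith("FROM") and (":latest" in line
                                         or ":" not in line.split()[1]):
            findings.append(f"line {n}: pin the base image to a specific tag")
        if upper.startswith(("ENV", "ARG")) and any(
                word in upper for word in ("PASSWORD", "SECRET", "TOKEN")):
            findings.append(f"line {n}: possible secret baked into the image")
    if not any(l.strip().upper().startswith("USER") for l in lines):
        findings.append("no USER instruction: container runs as root")
    return findings

dockerfile = [  # hypothetical Dockerfile
    "FROM python:latest",
    "ENV DB_PASSWORD=hunter2",
    "COPY . /app",
    'CMD ["python", "/app/main.py"]',
]
for finding in lint_dockerfile(dockerfile):
    print(finding)
```

A real platform applies far deeper checks, but even rules this simple catch images you might otherwise ship without a second look.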
Start Best Practices Left
Starting left with security best practices, before deploying anything, allows you to detect and respond to issues while they are still code. This dramatically reduces the chances of a security disaster once deployed. The best way to start left is by utilising a platform that can scan your IaC templates before they are committed, checking that they follow best practices from the AWS Well-Architected Framework: multiple availability zones, backup EC2 instances and databases, closed buckets, layered access control, and encryption of data.
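A minimal sketch of what such a pre-commit IaC check might do, scanning a CloudFormation-style template for an open or unencrypted bucket; the template and rule wording are assumptions for illustration:

```python
def scan_template(template: dict) -> list:
    """Flag public or unencrypted S3 buckets in a CloudFormation-style template."""
    findings = []
    for name, resource in template.get("Resources", {}).items():
        props = resource.get("Properties", {})
        if resource.get("Type") == "AWS::S3::Bucket":
            if props.get("AccessControl") in ("PublicRead", "PublicReadWrite"):
                findings.append(f"{name}: bucket is publicly readable")
            if "BucketEncryption" not in props:
                findings.append(f"{name}: no encryption at rest configured")
    return findings

template = {  # hypothetical template a developer is about to commit
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"AccessControl": "PublicRead"},
        }
    }
}
for finding in scan_template(template):
    print(finding)
```

Wired into the pipeline, a check like this stops the misconfiguration at commit time, while it is still just text in a template rather than a live, exposed bucket.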
Next Steps
You may have noticed we used the word “platform” throughout. Yes, there are a lot of point products that can help you stay secure, but when you’re building in the cloud, you need total visibility of your attack surface. Disconnected security tools hamper your ability to collect and correlate threat and activity data, often resulting in a lot of false positives.
A platform with broad third-party integration that deploys within your existing ecosystem is ideal. It gives you comprehensive visibility across your entire IT infrastructure, resulting in less manual correlation, fewer false positives, and more agile development.