Introducing containers into cloud infrastructure can lead to faster development cycles and more efficient use of infrastructure resources. With these kinds of competitive advantages, it’s no wonder that container orchestration platforms like Kubernetes are so popular. In fact, Gartner estimates that 50 percent of companies will use container technology by 2020, up from less than 20 percent in 2017.
While the value and popularity of containers are undeniable, container deployments have opened up a whole new set of infrastructure security concerns for Development and Operations teams. This is why more and more companies are focusing on container security to ensure that they don’t ship software with known vulnerabilities, to protect sensitive data, and to maintain compliance with industry-specific regulations such as HIPAA, PCI, or SOC 2. Resources like the Center for Internet Security (CIS) benchmarks for Kubernetes and Docker provide comprehensive, objective guidelines for organizations transitioning to containers.
In this post, we’ll walk through some of the top questions you need to ask when thinking about establishing security and maintaining regulatory compliance in a container infrastructure environment.
Question 1: Are the Right Access Controls in Place?
In theory, cybersecurity best practices around account security and the use of two-factor/multi-factor authentication do not change in containerized infrastructures. In practice, containers are co-located on the same host, so a single compromised container presents a larger attack surface and can have greater consequences.
As companies grow, there is greater liability associated with the absence of granular permissions and short-term credentials. Role-based access control (RBAC) becomes necessary for managing both employee and machine access to secrets/tokens. In general, the more a company grows, the greater this tech debt becomes, and the more difficult it is to introduce effective access controls.
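To make this concrete, Kubernetes expresses RBAC as Role and RoleBinding objects. The following is a minimal sketch (the namespace, role, and service account names are placeholders) that grants a CI service account read-only access to Secrets in a single namespace and nothing else:

```yaml
# Illustrative only: names and namespace are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: ci-secret-reader
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: staging
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping the Role to a single namespace and a short verb list is the granularity that becomes hard to retrofit later, which is exactly the tech debt described above.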
Companies starting out on their security journey can use git-secrets (https://github.com/awslabs/git-secrets) to prevent committing secrets to a public or private repository. A billing alarm is also a crude but often effective tool for detecting compromised accounts, though it does not cover everything. For companies that are further along, the use of a bastion host, AWS Systems Manager Parameter Store, and/or AWS Secrets Manager can be an effective long-term solution. However, for security-minded companies, there are few options more flexible than HashiCorp Vault. Companies that decide to use Vault must balance the operational complexity against the business benefits.
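As a sketch of what git-secrets automates, a minimal pre-commit hook might look like the following. The pattern below covers only AWS access key IDs; real tools ship many more patterns, so treat this as illustrative rather than sufficient:

```shell
#!/bin/sh
# Minimal pre-commit hook in the spirit of git-secrets: refuse any commit
# whose staged diff contains something that looks like an AWS access key ID
# (AKIA followed by 16 uppercase letters or digits).
# Install as .git/hooks/pre-commit and mark it executable.

looks_like_aws_key() {
  grep -qE 'AKIA[0-9A-Z]{16}'
}

if git diff --cached 2>/dev/null | looks_like_aws_key; then
  echo "Refusing to commit: possible AWS access key in staged changes." >&2
  exit 1
fi
```

A hook like this costs almost nothing to adopt and catches the most expensive class of accidental leak before it ever reaches a remote.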
Question 2: Are Containers Configured Properly?
Containers are still a relatively new technology, and inexperience means the risk of misconfiguration runs high.
In Linux jargon, containers are nothing more than cgroups and namespaces. In practice, they are a powerful abstraction towards immutable infrastructure. For the uninitiated, configuring a container can be a daunting task. Let’s say you want to run a containerized third-party application. Never mind how on earth you decide to get a container orchestrator like Kubernetes running. What do you do?
- Provision an AWS EC2 instance and install Docker.
- Inject an API key using environment variables, and use docker run to get up and running.
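Concretely, those two steps might look something like this (the package manager, image name, and key value are all placeholders; this is a sketch of the naive setup described above, not a hardened one):

```shell
# Illustrative only: install Docker on the EC2 instance, then run the
# third-party image with the API key injected as an environment variable.
sudo yum install -y docker
sudo systemctl start docker
docker run -d --name third-party-app \
  -e API_KEY="replace-me" \
  example/third-party-app:1.0.0
```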
The application runs happily for a few days and then misfortune strikes:
- The container exits, and you realize that it’s extremely difficult to integrate Docker with a service manager like runit or systemd.
- You realize that pulling from Docker Hub is unreliable and ponder the complexity of running your own Docker image registry.
- You realize that every container can talk to every other container by default, and you ponder the security implications.
- You find an issue and shell into a container to debug. No tools are installed because you used Alpine Linux.
All of these problems have solutions, but they take a great deal of time and resources to implement properly. If you decide to adopt containers, build an implementation and operating strategy that accounts for the resources and level of effort required to address these issues effectively.
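Some of these pain points do have first-class answers in Docker itself. A hedged docker-compose sketch (the service name, registry, and tag are placeholders) shows a supervised restart policy, an image pinned to a tag in your own registry, and a user-defined network that limits which containers can reach each other:

```yaml
# Illustrative only: service name, registry, and tag are placeholders.
version: "3.8"
services:
  app:
    image: registry.example.com/team/app:1.4.2  # pin a tag from your own registry
    restart: unless-stopped                     # let Docker supervise restarts
    networks:
      - app-net                                 # only services on app-net can reach it
networks:
  app-net:
    driver: bridge
```

None of this removes the operational work, but it shows that the fixes are configuration decisions you can make deliberately rather than surprises you discover in production.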
Question 3: Is the Proper Security Orchestration in Place?
The rule of thumb is that containers are fantastic for local development, but the road to production will be riddled with surprises. Development teams often sacrifice security to iterate more quickly. Misaligned incentives (development teams are rewarded for shipping features, while security teams are rewarded for mitigating risk) can leave projects deadlocked.
The solution is to integrate security practices earlier in the development process. This can be done with tools such as CoreOS’s Clair, an open source scanner that checks Docker images for known vulnerabilities and can be integrated into the CI/CD pipeline. Developers can write their own firewall rules with the help of Operations. Even simply documenting known vulnerabilities will earn companies an above-average mark on their security report card.
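As a sketch of what such a pipeline step might do (the report filename and severity field are assumptions about a generic scanner’s JSON output, not Clair’s exact schema), a CI job could gate the build on scan results:

```shell
#!/bin/sh
# Illustrative CI gate: fail the pipeline when an image scan report
# contains High or Critical findings. The report format is an assumption;
# adapt the pattern to your scanner's actual output.
check_report() {
  if grep -qE '"[Ss]everity": *"(High|Critical)"' "$1"; then
    echo "High-severity vulnerabilities found; failing build" >&2
    return 1
  fi
  echo "Scan clean"
}
```

In a pipeline, you would run the scanner against the freshly built image, then call check_report on its output and let a nonzero exit status fail the job before the image is ever pushed.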
Another tool that can increase visibility between operations and security is a security orchestration, automation, and response (SOAR) platform. SOARs offer detection to generate alerts, alert management to store alerts from various sources, and analytics to drive your decision making.
You may decide to combine SOAR and Security Information and Event Management (SIEM) technology to achieve this type of security orchestration. SIEM technology can centralize your log data and security events, while SOAR technology “pushes and pulls” data from different sources to alert users in real time when there is a potential threat.
(To learn how Threat Stack is using its own internally built SOAR in combination with Graylog’s log management solution, listen to this recent webinar.)
Question 4: Do the Infrastructure Solutions You’re Using Keep Your Industry’s Compliance Standards in Mind?
It’s important to keep your industry’s compliance standards in mind as you adopt new solutions like containers. For example, HIPAA regulations require that protected health information (PHI) be encrypted in transit and at rest when accessed by containers. While the US HHS recognizes no certification for HIPAA compliance, Microsoft, Google, and Amazon undergo regular independent audits to ensure that they can meet HIPAA criteria and offer capabilities such as encryption, meaning that their users who must be HIPAA compliant can rest easy. Under the shared responsibility model, certain AWS services are themselves compliant; however, this does not mean that companies relying on these services are automatically compliant. Do your research and ensure that your container solution of choice provides the functionality you need to stay compliant with your industry’s standards.
A Few Last Words . . .
Companies are eager to embrace solutions like containers because they help drive deployment velocity, rapid development, operational efficiency, and innovation. But that progress mustn’t come at the cost of security and compliance.
By developing a strategic adoption and operating plan based on key issues like those outlined above, your company can adopt this cutting-edge technology — without compromising efficient operations, strong security, or compliance.