
A Deep Dive Into Secrets Management

There’s a lot to think about when it comes to working with containers, Kubernetes, and secrets. You have to employ and communicate best practices around identity and access management in addition to choosing and implementing various tools. Whether you’re a SecOps professional at a startup, small business, or large enterprise, you need to make sure you have the right tools to keep your environments secure.

Recently, we sat down with Stenio Ferreira, Senior Solutions Engineer at HashiCorp. Armed with a degree in computer science and experience as a Java developer at a variety of companies, including IBM, Stenio moved into a consulting role advising clients who wanted to build continuous integration / continuous delivery (CI/CD) pipelines and improve their automation workflows. That’s where he was exposed to HashiCorp, his current company.

According to Stenio, a secrets management solution is a must — and there are various reasons to use one (such as centralized authentication). Stenio explained the services offered at HashiCorp, and shared his perspective on containers, Kubernetes, open source solutions, and Vault.

Q: Stenio, can you give an overview of the products that HashiCorp offers?

HashiCorp has six tools, each targeting a specific area of the software delivery lifecycle: development, testing, packaging, provisioning, security, and monitoring.

All of our tools are open source, but four also have an Enterprise version, which is where I focus the majority of my time. These four tools represent the big problems we try to help companies solve: Vault for security; Terraform for provisioning infrastructure; Consul for service discovery and configuration management; and Nomad for scheduling and orchestration.

Q: Threat Stack is a security company with a deep DevOps practice. In terms of HashiCorp, can you talk about this intersection with regard to Vault?

Vault focuses on the security side of DevOps and it works great with Kubernetes, although Kubernetes is often only one component of cloud infrastructure when we’re talking about security inside an Enterprise.

Vault is a secrets management solution: a centralized server where secrets are stored. An application deployed within Kubernetes that wants to retrieve a secret from Vault can leverage the Kubernetes API, with Vault serving as the identity broker. Once Vault’s Kubernetes authentication method (one of several authentication methods) is enabled and configured, containers deployed within a Kubernetes cluster can pass their Kubernetes-issued JWT to authenticate with Vault and retrieve secrets.
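
As a rough illustration of that flow, the sketch below builds the login call a container would make. The role name is a placeholder and the dev-server address is an assumption; the login path and the service account token location are the standard Vault and Kubernetes defaults.

```python
import json

# Standard location where Kubernetes mounts the pod's service account JWT.
SA_JWT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def build_k8s_login_request(role, jwt, vault_addr="http://127.0.0.1:8200"):
    """Build the URL and JSON body for Vault's Kubernetes auth login endpoint.

    Vault verifies the pod's JWT against the cluster's token review API and,
    if the role matches, returns a client token carrying that role's policies.
    """
    url = f"{vault_addr}/v1/auth/kubernetes/login"
    body = json.dumps({"role": role, "jwt": jwt})
    return url, body

# A pod configured with the hypothetical role "my-app" would read the JWT from
# SA_JWT_PATH and POST this body to the URL (e.g., with urllib.request).
url, body = build_k8s_login_request("my-app", "<service-account-jwt>")
```

The client token returned under `auth.client_token` in the login response is then sent in the `X-Vault-Token` header on subsequent secret reads.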

Q: Why would someone turn to a tool like Vault? What are the use cases?

Vault is used as a secrets management tool, and addresses three main use cases:

  1. The first use case is centralized secrets management. Teams want to avoid having secrets sprawled across the infrastructure, so they use Vault to keep secrets management in one location. They also want an audit log, and the ability to enforce access control list (ACL) policies on a least-privilege basis.
  2. The second use case is dynamic secrets. Many teams want to rotate credentials or generate temporary credentials, such as database or cloud IAM credentials. Once a specific task is complete, these credentials are revoked.
  3. The third use case is encryption as a service. Instead of expecting your developers to implement encryption correctly and manage the encryption key’s lifecycle, you can delegate those responsibilities to Vault, and the developers can consume this high-level API so they don’t need to have the encryption logic in their applications.
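
For the third use case, Vault’s transit secrets engine exposes encryption as a plain API call. The sketch below builds such a call; the key name is a placeholder and the dev-server address is an assumption, while the endpoint path and base64 requirement follow the transit engine’s API.

```python
import base64
import json

def build_transit_encrypt_request(key_name, plaintext,
                                  vault_addr="http://127.0.0.1:8200"):
    """Build the call to Vault's transit engine, which expects base64 plaintext.

    Vault returns a ciphertext (e.g., "vault:v1:...") and keeps the key server-
    side, so the application never handles or stores key material itself.
    """
    url = f"{vault_addr}/v1/transit/encrypt/{key_name}"
    encoded = base64.b64encode(plaintext.encode("utf-8")).decode("ascii")
    return url, json.dumps({"plaintext": encoded})

# Hypothetical key "orders-key" encrypting a sensitive value:
url, body = build_transit_encrypt_request("orders-key", "4111-1111-1111-1111")
```

Decryption is the mirror-image call against the corresponding decrypt path, which is what lets Vault handle key rotation and lifecycle centrally.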

Q: SMBs aren’t always in a position to pay for Enterprise services. We’re curious if you see folks trying to build off the open source version rather than pay for Vault Enterprise.

That’s a valid question, as many of our most important features are available for free. Our licensing model assumes that the features available in open source are enough to address the needs of individual contributors. As you move into managing a team, features like disaster recovery and multi-tenancy through namespaces become requirements, and those are available in Vault Enterprise Pro.

Finally, as you go into enterprise-level needs such as managing multiple teams and requiring regulatory compliance, we have features such as multi-datacenter deployment, HSM (Hardware Security Module) support, and Sentinel, our compliance as code tool, available as Vault Enterprise Premium. These two licenses are also differentiated by the level of customer support offered.

Q: What types of unique Public Key Infrastructure (PKI) architectures are you seeing in the field? For instance, are you seeing an increase in people using Vault but implementing a Transport Layer Security (TLS) protocol strategy for their internal system?

We see the demand from people who want to finally solve the nightmare of managing a PKI certificate lifecycle. They can delegate that responsibility to Vault.
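
To make that delegation concrete, issuing a short-lived certificate from Vault’s PKI secrets engine is a single API call. In this sketch the role name, common name, and dev-server address are placeholders, and the mount path assumes the engine is enabled at its default `pki/` location.

```python
import json

def build_pki_issue_request(role, common_name, ttl="24h",
                            vault_addr="http://127.0.0.1:8200"):
    """Build the call that asks Vault's PKI engine to issue a certificate.

    The response carries the certificate, private key, and issuing CA chain;
    because certificates can be short-lived, rotation becomes an API call
    rather than a manual renewal project.
    """
    url = f"{vault_addr}/v1/pki/issue/{role}"
    return url, json.dumps({"common_name": common_name, "ttl": ttl})

# Hypothetical role and hostname:
url, body = build_pki_issue_request("internal-services",
                                    "api.internal.example.com")
```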

In terms of expanding the use of TLS for everything, that’s when we start going to Consul. Consul recently launched a new feature called Consul Connect. In a nutshell, instead of relying on firewall rules to determine which services have access to each other by controlling ports and IP addresses, you can specify which identities will have access to one another.

Q: Are you seeing an uptick in DevOps groups integrating Vault into their CI/CD processes? Is it becoming a central component of development velocity?

Yes. Generally, this integration is part of a maturity process. When a client first implements Vault, it is often not rolled out to all teams or across all functionality. The most common starting point is that they are using Chef, Ansible, or some other configuration management tool as part of their CI/CD pipeline.

And more often than one might expect, they have secrets hard-coded in their config files, including Chef cookbooks and Ansible playbooks. They want a way of extracting those secrets and centralizing them somewhere, so that’s usually the first use case for integrating Vault into a CI/CD pipeline.
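
A sketch of that first step, assuming the KV version 2 secrets engine at its default `secret/` mount: the pipeline reads the secret at deploy time instead of keeping it in a cookbook or playbook. The secret path and key names are placeholders; the `data`-nested URL and response shape follow KV v2’s API.

```python
def kv2_read_url(path, mount="secret", vault_addr="http://127.0.0.1:8200"):
    """KV v2 inserts "data" between the mount point and the secret path."""
    return f"{vault_addr}/v1/{mount}/data/{path}"

def extract_kv2_secret(response_json):
    """KV v2 nests the key/value pairs under data.data;
    data.metadata holds versioning information."""
    return response_json["data"]["data"]

# Abbreviated shape of a KV v2 read response:
sample_response = {
    "data": {
        "data": {"db_password": "s3cr3t"},
        "metadata": {"version": 3},
    }
}
creds = extract_kv2_secret(sample_response)
```

The pipeline step authenticates first (for example, via the AppRole or Kubernetes auth method), then GETs this URL with its client token and injects the returned values into the deploy.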

Q: Do you have any tips for people who are making the move from automation tool features like Chef encrypted data bags to HashiCorp Vault?

The main thing is to keep your ultimate goal in mind. Have a clear sense of why you’re going in this direction. It’s not a good idea to do it because everybody else is doing it. Usually these projects have a tendency to take longer than expected, so be aware of your timelines and keep your eyes on your ultimate goal.

Q: So, Vault is obviously becoming a key piece of modern security infrastructure. Based on its audit logs, what are some signs that people are trying to access your secrets? For example, what do erroneous requests look like compared to break-in attempts, or are there patterns to watch for?

One thing to emphasize is that Vault is 100% self-hosted. We would never have access to any clients’ logs.

With Vault, you can enable an audit device, and Vault has two types of logs. There are the traditional operational logs, where you can set the verbosity level, which capture operational details such as internal errors. And then there’s the audit (access) log, which is usually the most critical one because it’s used to identify any intrusion attempts across the system.

The access log captures pretty much everything about the request. There’s a block about authentication: every request has to be made with a client token, and the log contains information about that particular token. For example, which ACL policies are associated with it? When was it issued? Which identity is it tied to? Then the log has a block about the request itself: Where is the request coming from? What path is it trying to reach?
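
The fields Stenio describes can be pulled straight out of each entry, which Vault writes as one JSON object per line. The sample entry below is illustrative and abbreviated, but the `auth` and `request` block names match Vault’s audit log format.

```python
import json

def summarize_audit_entry(line):
    """Reduce one audit log line to the fields useful for spotting anomalies."""
    entry = json.loads(line)
    auth = entry.get("auth") or {}
    request = entry.get("request") or {}
    return {
        "time": entry.get("time"),
        "policies": auth.get("policies", []),
        "path": request.get("path"),
        "operation": request.get("operation"),
        "remote_address": request.get("remote_address"),
    }

# Illustrative (abbreviated) audit entry:
sample = ('{"time": "2019-03-01T12:00:00Z", "type": "request",'
          ' "auth": {"policies": ["default", "app-read"]},'
          ' "request": {"path": "secret/data/app/db", "operation": "read",'
          ' "remote_address": "10.0.0.5"}}')
summary = summarize_audit_entry(sample)
```

Feeding summaries like this into a SIEM makes patterns visible: repeated permission-denied responses from one address, or reads against paths a workload never normally touches.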

Traditionally, our customers will run a log-collecting agent and ship those logs to a centralized logging solution, such as Splunk or Datadog, where they can correlate them with other logs.

Q: Containers have obviously become a big thing. Are you seeing any challenges with secrets accessed by containers that don’t apply to more traditional virtual servers?

There are different challenges. Vault makes it easier for applications running inside Kubernetes to talk to it. Beyond that, there are a few different patterns for injecting secrets or making them available to your containers:

  • One approach is for applications to rely on the REST API. Every service within Vault is available as a REST endpoint, so the application deployed within the container simply retrieves its secrets by calling an API path its ACL policies grant it access to. Vault validates the call using the client token, which the container can obtain through one of the authentication methods, such as Kubernetes or JWT authentication.
  • The other approach is to use init and sidecar containers. When you deploy a container inside Kubernetes, you can specify a set of scripts or logic that runs before the container itself. As an analogy for those not familiar with Kubernetes, it is the same workflow as deploying an EC2 instance in AWS and relying on a user-data script to initialize the instance. For Vault, an init container can perform the initial authentication, retrieve the client token, and then retrieve any secrets the application might need. You could then have a sidecar container that manages the client token and the lifecycle of any dynamic secrets, making sure they are kept in a valid state.
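
The sidecar in that second pattern is essentially a renewal loop. Below is a minimal sketch of its timing logic; renewing at roughly two-thirds of the lease TTL is a common rule of thumb (an assumption, not a Vault requirement), while the renewal paths are Vault’s standard token and lease renewal endpoints.

```python
def next_renewal_delay(lease_duration_s, fraction=2 / 3):
    """Renew well before the lease expires, so a failed renewal attempt
    can still be retried before the token or secret becomes invalid."""
    return max(1, int(lease_duration_s * fraction))

def renewal_targets(vault_addr="http://127.0.0.1:8200"):
    """Endpoints a sidecar would call: renew its own client token,
    and renew the lease on any dynamic secret it is tracking."""
    return {
        "token": f"{vault_addr}/v1/auth/token/renew-self",
        "lease": f"{vault_addr}/v1/sys/leases/renew",
    }

# A one-hour token lease would be renewed after ~40 minutes:
delay = next_renewal_delay(3600)
```

The loop sleeps for the computed delay, renews, reads the new lease duration from the response, and repeats; if renewal ultimately fails, the sidecar re-authenticates from scratch.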

Q: Are there any general trends that companies are struggling with when it comes to containers?

There are different points of view depending on the organization. Some organizations have a feeling that they should go to Kubernetes, but they are not able to clearly articulate why. They feel the market is using Kubernetes, so they should, too.

Personally, I haven’t seen a company that is truly running Kubernetes in production. This is because I deal mostly with Fortune 500 companies, which are often late adopters. I’m sure if you go to startups or more digital-native companies, they’re using Kubernetes in production.

At HashiCorp, all of our tools are technology agnostic. For example, Vault’s secrets management works whether you’re using Kubernetes, Docker Swarm, or other solutions. Even Nomad, our scheduler and orchestrator that might compete with Kubernetes, can actually work alongside it.

More About Stenio Ferreira

Stenio Ferreira is a Senior Solutions Engineer at HashiCorp, where he helps clients implement security solutions. He is a full stack developer and DevOps professional with sales, marketing, and entrepreneurial experience. Stenio’s goal is to bring clarity to complex issues to help achieve impactful results.