
Cloud Security Best Practices: Finding, Securing, & Managing Secrets, Part 2

In Part 1 of this post we explained how you can find all the secrets in your environment. In Part 2 we will discuss effective ways to store and manage secrets — to keep them from leaking to unauthorized people.


Before we dive in, however, take a look at four factors that will influence the method you finally choose:

  • Time: You don’t have weeks or months to implement a solution. As we pointed out in an earlier post, timelines that are this long are neither advisable nor necessary where security issues are concerned.
  • Cost: You will want a reasonably priced solution — especially if you are a smaller organization.
  • Complexity: You want to avoid a system that requires a lot of operational or administrative overhead.
  • Risk of Failure: You also want to avoid a system whose failure would trigger the failure of other systems (or at least you want to limit the blast radius from failure).

Right away, you can eliminate systems such as HashiCorp Vault or Keywhiz, which are purpose-built for managing and distributing secrets but are cumbersome to set up and administer. You can also eliminate general purpose service configuration systems such as ZooKeeper, Consul, or etcd. While they are easier to set up than the first two, they have less functionality and are, therefore, less secure. And if budget is a concern, each of these solutions will be more expensive than the low- to no-cost solutions discussed below.

So what’s left?

Keep reading . . .

Methods of Storing Secrets

If the solutions listed above are too expensive, too time-consuming to implement, and/or too complex to administer and operate, think about going for smaller and more immediately attainable results. Remember: Security can be implemented in small, iterative steps rather than large jumps, and it is something you can continuously improve over time.

With that in mind, the sections that follow explore three cost-effective, easy-to-implement, and operationally effective methods of storing secrets:

  • git-crypt
  • Configuration Management
  • AWS S3

Each has its own strengths and weaknesses, which may or may not be an issue in your environment.

git-crypt
git-crypt is probably the most environment-agnostic means of securing secrets. It does not drastically alter the way secrets are managed, and the number of new tools required and the amount of management overhead are low. git-crypt is designed to provide transparent file encryption and decryption within a git repository: files are encrypted when they are committed and decrypted when they are checked out. Additionally, the repository remains fully usable for people who do not have the ability to decrypt a file. This allows you to keep secrets in the repository without exposing the unencrypted secrets to everyone with access to the repository (for example, your developers — or the person who found a developer’s laptop). git-crypt is available as a package for many Linux distributions and for Mac OS via Homebrew.


You have found a secret in your code, so now you need to:

  • Initialize git-crypt for the repository
  • Place the secret into a file
  • Encrypt the file
  • Pass the secret to the app on startup
  1. Start by initializing the repository with a symmetric key. You can use GPG, which allows you to selectively share the repository secrets with multiple people. But that assumes your organization is widely using GPG. We will assume that, if your code has secrets hardcoded in it, you don’t have GPG in use widely. Or, if you do, your security priorities have been greatly misplaced.

    At this point, ask yourself whether you want a single key per environment (key for production, key for development, etc.) or a key per repository per environment (each repository has its own production key, development key, etc.). Keys per repository means, in theory, that compromising one key doesn’t reveal all the secrets. However, this creates the need for more management overhead. You need to safely store the keys for every repository, make sure they can all be distributed to the correct hosts, and repeat that every time a new service is added to the environment. So be realistic. How many secrets are you keeping safe? How many secrets overlap multiple services? The same database password might be used by multiple services. If that’s the case, then how much safer are you, really, after increasing your management overhead? Use your best judgement here after examining what you are really protecting. In the example below, you will generate a key per environment for a repository.

  2. At this point, you are going to start encrypting a secret for your development environment:

    [user@host:aws-straycat threatstack-to-s3]$ git crypt init -k dev
    Generating key…
  3. Create a .gitattributes file if one doesn’t already exist so you can designate the files that should be encrypted. For present purposes, you will create files in the pattern env..secret. Your .gitattributes file will therefore need a pattern like this: filter=git-crypt-dev diff=git-crypt-dev
  4. Create a secrets file for the development environment and see whether git-crypt sees it as a file to be encrypted:

    [user@host:aws-straycat threatstack-to-s3]$ touch
    [user@host:aws-straycat threatstack-to-s3]$ git crypt status
    not encrypted: .gitattributes
    not encrypted: .gitignore
    not encrypted:
    not encrypted: app/
    not encrypted: app/models/
    not encrypted: app/models/
    not encrypted: app/models/
    not encrypted: app/views/
    not encrypted: app/views/
    not encrypted:
    not encrypted: requirements.osx.txt
    not encrypted: requirements.txt
    not encrypted:
  5. Move a secret out of your code and into a secret file. The offending block of code looks like this:

    # Communicate with Threat Stack
    import os
    import requests
    THREATSTACK_API_KEY = '6hVZ07n9V2vv21saoJTkZNiJRDdVG0OBAqRRTm8323xswyFODhqhdiwanZVorK6jkl1aMci5'
  6. Move the value of THREATSTACK_API_KEY to an environment variable. (We will cover the downsides of this approach later.) That will involve removing the API key from the code, placing it into the secrets file you just created, and sourcing that file before starting the service:

    # Communicate with Threat Stack
    import os
    import requests

    # The API key is now read from the environment rather than hardcoded.
    THREATSTACK_API_KEY = os.environ['THREATSTACK_API_KEY']
  7. Source the secret file and start the application:

    [user@host:aws-straycat threatstack-to-s3]$ source
    [user@host:aws-straycat threatstack-to-s3]$ python
    == Running in debug mode ==
     * Running on http://localhost:8080/ (Press CTRL+C to quit)
     * Restarting with stat
    == Running in debug mode ==
     * Debugger is active!
     * Debugger pin code: 795-844-052
  8. Commit the code and push it to your central repository. If you view the secret file you just created on GitHub, you will find that it has been encrypted.

  9. Now that the application works the way you want it to, export the dev key from the repository. Make sure this file is stored in a safe place so it doesn’t get lost. If you have 1Password or LastPass, that’s a good start. You want a master copy of this key so it can be used elsewhere in the environment.

    [user@host:aws-straycat threatstack-to-s3]$ git crypt export-key -k dev ~/
  10. Now let’s show how you can share this repository and control access to the secret file. Start by cloning the repository, and then use cat to see that the secret file is encrypted:

    [user@host:aws-straycat tmp]$ git clone git@github.com:threatstack/threatstack-to-s3.git
    Cloning into 'threatstack-to-s3'...
    remote: Counting objects: 89, done.
    remote: Compressing objects: 100% (10/10), done.
    remote: Total 89 (delta 1), reused 0 (delta 0), pack-reused 79
    Receiving objects: 100% (89/89), 14.05 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (31/31), done.
    [user@host:aws-straycat tmp]$ cd threatstack-to-s3/
    [user@host:aws-straycat threatstack-to-s3(git-crypt)]$ git crypt status
    not encrypted: .gitattributes
    not encrypted: .gitignore
    not encrypted:
    not encrypted: app/
    not encrypted: app/models/
    not encrypted: app/models/
    not encrypted: app/models/
    not encrypted: app/views/
    not encrypted: app/views/
    not encrypted:
    not encrypted: requirements.osx.txt
    not encrypted: requirements.txt
    not encrypted:
    (threatstack-to-s3) [user@host:aws-straycat threatstack-to-s3]$ cat
    GITCRYPT_A(kM33'9/.~,2<CgBp;5t틀q$ZiKTi+9Xrdak#([user@host:aws-straycat threatstack-to-s3]$
  11. Retrieve the exported key for the dev environment from wherever it has been safely stored, and use that file to decrypt the secret file:

    [user@host:aws-straycat threatstack-to-s3]$ git crypt unlock ~/
    [user@host:aws-straycat threatstack-to-s3]$ cat

    The unlock command will also store the key in the repository so files can be altered and re-encrypted on a subsequent commit.

Secrets Deployment
At this point, you need to deploy the application and the secret discussed above. Specific guidelines vary because everyone’s deployment method is different, but briefly, here are a few options.

git-crypt requires a Git repository to work. That means making a crucial decision: Are you comfortable shipping your project’s Git history on deploy? If you are not, then the decryption key needs to be made available to the hosts that are part of your build and deployment pipeline (e.g., Jenkins build hosts), and you can store the unencrypted secret in the build artifact that is deployed. This in turn requires you to ensure that your artifact repository has limited access.

You should not be shocked by this or immediately dismiss this approach just because your artifact repository is an attack surface. It already was one before you started this work! When you run into situations like this, ask yourself whether this is a new attack surface or simply new to you: one that already existed and that you have only just noticed. Are you better off than you were before?

If you are okay with distributing the project’s git repository (more likely you are not, but have accepted that this is the way things are going to be), then you can distribute the decryption key to your instances using a tool like cloud-init. If you already have configuration management but chose not to use it for application secrets (e.g., your infrastructure operates as a Heroku-style PaaS, and CM manages compute but not app configuration), you might still deploy your decryption key that way. If you do decide to go this route, make sure the .git directory cannot be viewed publicly! Review your webserver’s documentation for information on how to deny access to directories, and make sure to protect against this trivial but all-too-common configuration issue.
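If your webserver is nginx, for example, a location rule along these lines (illustrative, not taken from any particular deployment) blocks requests for repository metadata:

```nginx
# Refuse any request whose path contains /.git
# (including /.git/config and /.git/HEAD).
location ~ /\.git {
    deny all;
}
```

Apache and other servers have equivalent directives; the point is simply that no path under .git should ever be served.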

What Have You Solved?
Your goal was to prevent the leaking of secrets, so let’s see what you accomplished:

  • You took a number of steps to help prevent the accidental exposure of secrets to the outside via a misconfigured GitHub repo or an accidental push to the wrong repository by a developer.
  • Potentially, you reduced the exposure of secrets to employees who don’t need access if you are using different keys for each repository and you are not distributing the keys to developers. But if that was a goal, you would also need to have access controls on your artifact repository and ensure that only authorized users can access the hosts that a service is running on. If all these controls are in place, then great. But if they are not, ask yourself whether it’s worthwhile addressing insider threats now or at a later time.
  • You also made leakage through laptop loss or theft a little more difficult in many cases. If developers aren’t given the keys, then someone who has gained access to their laptop won’t have access to the app’s secrets. (But they still have a wealth of access to your environment, so don’t rely on this approach to prevent lost or stolen company property from leaking your secrets. Instead, implement full disk encryption and ensure that laptops lock automatically.)
  • Finally, you introduced a new attack vector — application environment leakage. Many application stacks will dump the application’s environment variables on a crash, and the crash output may even be returned to the user. So know how your applications behave on crash and whether they need to be reconfigured to prevent external leakage. Secrets might also leak internally if the exception is logged and stored in your logging facility. More advanced attacks, like ImageTragick, might exploit remote code execution vulnerabilities to obtain the current running environment. A safer method than using environment variables is to use a configuration file.
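That last mitigation can be sketched in a few lines of Python: read the secret from a configuration file at startup so it never enters the process environment. The file name, section, and key below are illustrative, not part of threatstack-to-s3.

```python
import configparser
import os
import tempfile

# Stand-in for a config file dropped by your deploy tooling
# (in production it would have restrictive permissions, e.g., mode 0600).
config_text = "[threatstack]\napi_key = 6hVZexampleexample\n"

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(config_text)
    path = f.name

parser = configparser.ConfigParser()
parser.read(path)
THREATSTACK_API_KEY = parser["threatstack"]["api_key"]

# The secret never enters the process environment, so a crash handler
# that dumps os.environ cannot leak it.
assert "THREATSTACK_API_KEY" not in os.environ

os.unlink(path)
print(THREATSTACK_API_KEY)
```

The trade-off is that you now have a file on disk to protect with filesystem permissions, which is usually an easier problem than controlling what ends up in crash dumps.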

All these issues aside, you are still better off than you were before from both a security and an operations standpoint. However, you may want to go further in managing secrets better. That’s where additional methods like using configuration management or AWS S3 buckets come in.

Configuration Management

Configuration management is one of the main ways people manage secrets in their environment. The major configuration management tools like Puppet, Chef, and Ansible all have their own ways of handling secrets. If your configuration management system already manages portions of your applications, then it is an ideal starting point for storing your secrets.

So let’s take a look at Puppet, Chef, and Ansible to see what their relative merits are.

Puppet
Puppet’s Hiera tool is a hierarchical key/value store for storing infrastructure data. Its purpose is to separate the information in your environment from the logic that configures your environment. Hiera can retrieve data from a variety of backends. This lets you continue to use the same pattern for distributing secrets even as you change backends to handle increased complexity. (Example: Although we ruled out the use of Vault for the time being, you can use it as a backend for Hiera down the road, and the change will be transparent to the application.) Hiera’s standard backend is YAML formatted text files. For storing secrets in Hiera, one of the most popular tools is hiera-eyaml, which stores encrypted values in those same YAML files.

If you are using Puppet but are not already using Hiera, you should start by becoming more familiar with Hiera, and you should factor learning and implementing it into your estimate of how long it will take to secure your secrets. Implementing Hiera takes time and understanding to set up the lookup hierarchy properly; rushing it in order to secure secrets might result in tech debt down the road as you attempt to refactor your setup.

For an overview of Hiera, see Puppet’s Hiera documentation.

If you are ready to adopt Hiera, hiera-eyaml is the backend to start with; see the hiera-eyaml project documentation for details.

This gives you a tool for providing asymmetrically encrypted values using PKCS#7. After you create a public and private key pair, store the private key securely and provide the public key for adding secrets. A person only needs the Puppet repository, the public key, and hiera-eyaml installed in order to add new secrets. The README in the repository goes into depth on creating and managing secrets.
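For a sense of the format, a hypothetical Hiera data file mixing a plain value with an eyaml-encrypted one might look like this (key names are made up, and the ciphertext blob produced by the eyaml encrypt command is abbreviated):

```yaml
# common.yaml — a Hiera data file using the hiera-eyaml backend.
# Plain and encrypted values can live side by side in the same file.
ntp_server: pool.ntp.org
threatstack::api_key: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEh...]
```

Manifests look these values up the same way regardless of whether they are encrypted, which is what makes a later backend swap transparent to the application.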

Distributing the private key is the next major hurdle. You need to distribute the key to a Puppetmaster host. There are several ways of handling this, and each has its downsides:

  • Deploy via Puppet:
    • Requires one Puppetmaster to be set up properly already.
    • May make rolling the key difficult.
  • Deploy via cloud-init:
    • Requires new Puppetmasters for new keys.
    • Requires one of the following:
      • Managing the key contents in your infrastructure tooling repo (e.g., Terraform, which may not be encrypted)
      • Managing the key contents in S3
  • Deploy via S3
    • Requires bucket management to make sure only Puppetmasters have access. This probably means IAM instance profile management.
    • Requires a procedure for managing the contents of the bucket.

Ansible
Ansible has a feature called vault (not to be confused with HashiCorp Vault) that can be used to encrypt a playbook file containing variable definitions. Unlike Puppet, Ansible already has the concept of separating data from logic built into it, so little planning is needed to integrate vault properly into your Ansible roles. Rather than planning a hierarchical lookup strategy, you can get started quickly by creating a new file in a role and encrypting it with ansible-vault; see the Ansible vault documentation for details.

Unlike Puppet’s Hiera, the entire file is encrypted. For that reason, you will probably want to have a secrets.yml in addition to the main.yml under vars/ for the secret values in your Ansible role. A particular role may look like this:

├── tasks
│   └── main.yml
└── vars
    ├── main.yml
    └── secrets.yml

To use the variables defined in vars/secrets.yml, you would then include the file in tasks/main.yml at the top of the file as follows:

- include_vars: secrets.yml
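For illustration, a hypothetical vars/secrets.yml (the variable name is made up) would be created in plaintext and then encrypted in place with ansible-vault encrypt vars/secrets.yml:

```yaml
# vars/secrets.yml — shown here in plaintext for illustration only.
# After running `ansible-vault encrypt vars/secrets.yml`, the entire
# file on disk becomes ciphertext.
threatstack_api_key: "6hVZexampleexample"
```

Tasks and templates then reference threatstack_api_key like any other variable; Ansible decrypts the file at runtime when given the vault password.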

Managing access to secrets in the role is extremely simple, but this is also one of the downsides. Ansible vault uses a single password for access. That means everyone who needs access to the vault, whether it’s to manipulate values or to apply the given role, needs the vault secret. Managing access to the vault secret will take the most thought. If engineers are running playbooks manually, then anyone in engineering might need access to it. If the password is sophisticated enough not to be easily memorizable, people may resort to placing it in a plain text file and passing the --vault-password-file option to Ansible. This, of course, is an issue: the secrets and the key to decrypt them end up stored in the same place.

There are a few different use patterns for Ansible, so let’s address how it might be used as well as the challenges of vault:

  • Where is Ansible run from?
    • Developer laptops?
      • If Ansible is run from developer laptops, engineers who need to apply a role will need access to the secrets in the vault. This potentially means giving broad access to the vault password. (See the next major question about single versus multiple vault passwords.)
      • Secrets and the vault password may be stored together as people want ease of use.
    • Central location?
      • Using a central location should help prevent the proliferation of vault passwords.
      • You will still need to figure out how to maintain secrets in vaults. Perhaps this lessens the likelihood that engineers store vault passwords locally.
  • Does each role vault have its own password?
    • Depending on your Ansible usage, this may be useful. If small teams develop and manage their own roles on their own, and Ansible is not centrally managed, it may make sense for each vault to have its own password.
    • Keep in mind how much overlap of secrets there might be across services. If teams are encrypting the same set of secrets in different vaults with different passwords, you might not have gained much. But if they are different, this may be useful for limiting damage due to unauthorized access to a laptop with Ansible roles and a secret on it.
    • It will be useful to have a place to centrally manage all these vault passwords.

Chef
Chef data bags are Chef’s way of storing environment data: global variable values are defined in JSON formatted files. Unlike Puppet’s Hiera, this is a Chef feature you should already be using, so you shouldn’t have to spend time figuring out how to start using data bags and implement them properly in your Chef codebase. See the Chef data bag documentation for details.

Encrypted data bags have top level keys whose values are encrypted. Those values may have multiple sub keys with their own values. This means the entire file is not encrypted, but all the values in it are. You cannot mix encrypted and unencrypted values in a single data bag file. Access to the data bags is provided via a single symmetric decryption key. The main challenge with encrypted data bags is managing access to this secret, both for the people who need to work with encrypted data bags and for the Chef server. To deploy the secret to the Chef server, you can probably reuse the same methods proposed for the Puppet Hiera secret key:

  • Cloud-init
  • AWS S3
  • Chef itself

Managing access by users is a little more difficult: it suffers from the same key proliferation issues as Ansible. However, since Chef uses a central server model, there should be little need to distribute the key widely across an organization.
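To give a feel for what this looks like on disk, here is a rough sketch of an encrypted data bag item; the item id, key name, and ciphertext are all made up, and the exact fields vary by encrypted data bag version:

```json
{
  "id": "threatstack-to-s3",
  "api_key": {
    "encrypted_data": "2wCintFpd1ZtruncatedciphertextQk5q==",
    "iv": "gdsk38FkNtruncated==",
    "version": 1,
    "cipher": "aes-256-cbc"
  }
}
```

Note that the id stays in the clear (Chef needs it to locate the item), while every top-level value is replaced by a ciphertext structure.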

Secrets Deployment

Deploying secrets should be little different from the git-crypt approach. Most likely your configuration management tool will drop a configuration file or an environment variables file that will be used by the application.


What have you solved by using configuration management?

  • You have reduced the likelihood of leaking secrets to the outside. Rather than every repository being a potential point of leakage, you have consolidated your secrets into a single repository that you can better watch. You might, for example, place tighter access restrictions on this repository. It is easier to accurately manage access controls on a single repository than on all of your repositories.
  • With Puppet and Chef, though not necessarily Ansible, you have also reduced the threat of leaking secrets through the loss or theft of a laptop.
  • Additionally, you have centralized your passwords better. Rather than having them distributed across repositories, you have centralized them in a single repository. This makes management and rolling secrets easier.
  • You have also removed secrets from engineering laptops.
  • Finally, depending on how you decide to have the application read the passwords, you may be exposed to the same application attacks mentioned with git-crypt.

AWS S3
Another method for managing secrets involves placing them in AWS S3 buckets: secrets live in an S3 bucket and are retrieved during deploy or application initialization. This is a very popular method in organizations that want to move secrets out of a code repository but do not manage secrets via configuration management.

Its key advantages are that:

  • Secrets are not stored in a code repository.
  • Secrets can be centralized to a single location.

Its disadvantages can include:

  • Increased management overhead.
  • Having to use an additional system for a service to operate. Fortunately AWS S3 is quite stable. If S3 breaks, there is a good enough chance that other things will also be broken, and your breakage will be forgotten about.
  • Without using AWS KMS, secrets will be stored in plain text. And without rigorous S3 management, you may find forgotten credentials exposed. Tools like Terraform for managing AWS resources are still new and are not widely adopted. Much AWS configuration is still done by hand.

There are many ways to manage AWS resources, and in this particular case we will manage an AWS S3 object. If you are using Terraform, or are already using (or interested in using) your configuration management system’s AWS resources, those are great routes to go. In this discussion, however, we will use a command line tool called sneaker to illustrate the work necessary to use this method of secrets storage.

Sneaker Setup

sneaker is written in Go, which is packaged for most operating systems today. A quick run-through of setting up Go on Mac OS is provided below.

  1. To download and install Go:

    [user@host:aws-straycat sneaker]$ brew install go
    ==> Downloading
    ######################################################################## 100.0%
    ==> Pouring go-1.7.5.sierra.bottle.tar.gz
    ==> Caveats
    As of go 1.2, a valid GOPATH is required to use the `go get` command:
    You may wish to add the GOROOT-based install location to your PATH:
      export PATH=$PATH:/usr/local/opt/go/libexec/bin
    ==> Summary
    🍺  /usr/local/Cellar/go/1.7.5: 6,440 files, 250.8M
    [user@host:aws-straycat sneaker(master)]$ which go
  2. Once Go is installed, set up Go in your environment so you can fetch and install Go binaries:

    [user@host:aws-straycat ~]$ cd Source/
    [user@host:aws-straycat Source]$ pwd
    [user@host:aws-straycat Source]$ mkdir go
    [user@host:aws-straycat Source]$ cd go
    [user@host:aws-straycat go]$ export GOPATH="$(pwd)"
    [user@host:aws-straycat go]$ export PATH="${GOPATH}/bin:${PATH}"
  3. Now proceed to fetching sneaker and installing it:

    [user@host:aws-straycat go]$ go get -d -u
    [user@host:aws-straycat go]$ cd src/
    [user@host:aws-straycat sneaker]$ make install
    go get -u
    go install
    touch cmd/sneaker/version.go
    /Users/tmclaughlin/Source/go/bin/govendor sync
    /Users/tmclaughlin/Source/go/bin/govendor install  -ldflags "-X "main.version='76cfcf0'" -X "main.goVersion='go version go1.7.4 darwin/amd64'" -X "main.buildTime='2017-02-01T01:07:37Z'"" +local
    [user@host:aws-straycat sneaker]$ sneaker version
    version: '76cfcf0'
    goversion: 'go version go1.7.4 darwin/amd64'
    buildtime: '2017-02-01T01:07:37Z'

    Sneaker is now installed and ready to be used.

KMS Key Creation
  1. Create a key using the awscli. sneaker will use the KeyId to encrypt the secret objects stored in S3.
  2. Once you have created the key, export it in your environment.
  3. In addition, store it in a safe place so it can be referenced again at a later date, for instance, to update the secret.

You will also need the ARN of the key:

[user@host:aws-straycat go]$ aws kms create-key --description threatstack-to-s3-dev
{
    "KeyMetadata": {
        "KeyId": "0f579331-24ae-4141-89e3-202668e7a7dd",
        "Description": "threatstack-to-s3-dev",
        "Enabled": true,
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "Enabled",
        "CreationDate": 1485914174.265,
        "Arn": "arn:aws:kms:us-east-1:513551718909:key/0f579331-24ae-4141-89e3-202668e7a7dd",
        "AWSAccountId": "591511214639"
    }
}
[user@host:aws-straycat go]$ export SNEAKER_MASTER_KEY="0f579331-24ae-4141-89e3-202668e7a7dd"

S3 Bucket Creation

  1. Create an S3 bucket where the secret will be held. In the current example, a single bucket should be sufficient for basic needs.
  2. After you create the bucket, set the ACL to private. Since we are taking a layered approach to securing secrets, you want to restrict access to the bucket and objects through bucket policy and require the decryption key in order to see the data.
  3. Finally, export the bucket you created as an environmental variable that sneaker will use:

    [user@host:aws-straycat go]$ aws s3 mb s3://
    make_bucket: s3://
    [user@host:aws-straycat go]$ aws s3api put-bucket-acl --bucket --acl private
    [user@host:aws-straycat go]$ export SNEAKER_S3_PATH="s3://"

Deployment: Putting a Secret in S3

Put the Threat Stack API key from threatstack-to-s3 into a file and store it in S3:

[user@host:aws-straycat go]$ echo "THREATSTACK_API_KEY='6hVZ07n9V2vv21saoJTkZNiJRDdVG0OBAqRRTm8323xswyFODhqhdiwanZVorK6jkl1aMci5'" >
[user@host:aws-straycat go]$ sneaker upload dev/threatstack-to-s3/
2017/01/31 21:21:43 uploading
[user@host:aws-straycat go]$ sneaker ls
key                               modified          size  etag
dev/threatstack-to-s3/  2017-02-01T02:21  70    8d81ac64a54d14e02b3ff6a0f4867ec2

Giving Access to an EC2 Instance

To give your application access to the secret in S3, you will have to make sure that the sneaker binary is distributed to all your hosts. It’s Go, so if you don’t have a packaging infrastructure, you can build the single binary once and install it on your hosts in whatever way works for you. The bigger question is how to provide a host with access to the secret in S3. For that you will use an AWS IAM role attached to the hosts that run threatstack-to-s3.

To do this, create a role and attach the following policy document:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "Stmt1485916844000",
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Sid": "Stmt1485916844001",
            "Effect": "Allow",
            "Action": [
            "Resource": [

This policy document grants a host the ability to fetch the secret file and use the KMS key to decrypt it. Note: Only hosts that are explicitly allowed can retrieve the secret file:

[user@host bin]$ AWS_REGION=us-east-1 ./sneaker download dev/threatstack-to-s3/ ./
2017/02/01 03:12:47 downloading ./
[user@host bin]$ cat ./

One of the advantages of the new bucket with private ACL is that the host only has access to the secret files it is explicitly given access to in its own role. Keep that in mind as a good reason to not reuse an existing bucket. You don’t want to expose all your secrets to every host:

[user@host bin]$ AWS_REGION=us-east-1 ./sneaker download prod/threatstack-to-s3/ ./
2017/02/01 03:25:24 downloading ./
2017/02/01 03:25:24 AccessDenied: Access Denied
        status code: 403, request id: FC9CF20F3B825DF3

At this point, you can source and start the threatstack-to-s3 service.
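If you prefer not to source the downloaded file into the shell environment, the application can parse the KEY='value' file itself at startup. A minimal sketch, assuming the simple quoting used in the echo command above:

```python
import tempfile

def load_env_file(path):
    """Parse simple KEY='value' lines, like the secrets file uploaded earlier."""
    secrets = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            secrets[key.strip()] = value.strip().strip("'\"")
    return secrets

# Throwaway file shaped like the secret written with echo above.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("THREATSTACK_API_KEY='6hVZexampleexample'\n")
    path = f.name

secrets = load_env_file(path)
print(secrets["THREATSTACK_API_KEY"])
```

This keeps the secret out of the process environment entirely, which matters for the environment-leakage caveat discussed in the git-crypt section.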

What Have You Solved?
  • You have removed the secrets from your application repos, which is good.
  • You also have multiple layers of security for your secrets: Someone needs both AWS IAM credentials and the KMS key in order to read the secrets. At this point, you should determine who should have the ability to add new secrets or rotate existing secrets. If you expect teams to own that management, then there is a good chance the added layer of security is still vulnerable to a developer’s laptop being compromised because there is a high likelihood that their AWS credentials will be on it and possibly the KeyId too. In fact, if you are using AWS management tools like Terraform or CloudFormation, the KeyId may be in the repository you use to control your AWS environment.
  • Additionally, depending on how you decide to have the application read the passwords, you may be exposed to the same application attacks that were mentioned in relation to git-crypt.


At this point your secrets management is not perfect, but you have made significant strides in a short period of time. In addition to securing your secrets more strongly than before, you have made secrets management easier from an operations standpoint: you have improved your ability to find secrets when needed, and now that you can find them, you can better plan for rolling them in your infrastructure on a regular basis. In fact, changing these previously exposed secrets is exactly what you should do now.

If you want to learn more about cloud security, consider reading our guide to the most essential cloud security blogs.