Security has so many subtopics that it’s sometimes difficult to define what the field looks like as a whole. It means something vastly different to a Security Engineer, a CISO, and a Developer. Realistically, at most companies, security means preventing leaks of customer data and exposure of secrets. Usually this manifests as “let’s make sure only the logged-in user can view this information” or “make sure the password is stored securely.” These are important, but they don’t cover enough.
Software isn’t built in a vacuum. You have dependencies — hundreds, if not more — each of which can, and increasingly does, carry a vulnerability. And that’s before you consider system-level kernel vulnerabilities or CPU/hardware-level attacks like Spectre, Meltdown, or Rowhammer.
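At its core, auditing a dependency list against an advisory feed is a version comparison. Here’s a rough sketch of that comparison in plain shell; the package names, versions, and “fixed in” entries are made up for illustration, and in practice you’d pull advisories from a feed like OSV or your distro’s security tracker and installed versions from your package manager:

```shell
#!/bin/sh
# Hypothetical advisory list: package name + first fixed version.
# Real data would come from a vulnerability feed, not a hard-coded string.
advisories="openssl 1.0.1g
bash 4.3.30"

# Installed versions, also hard-coded here; normally you'd query the
# package manager, e.g. `dpkg-query -W -f='${Package} ${Version}\n'`.
installed="openssl 1.0.1f
bash 4.3.30"

report=$(echo "$installed" | while read pkg ver; do
    # Look up the first fixed version for this package, if any.
    fixed=$(echo "$advisories" | awk -v p="$pkg" '$1 == p {print $2}')
    [ -n "$fixed" ] || continue
    # sort -V (GNU version sort) puts the lower version first; if the
    # installed version sorts before the fixed one, it is vulnerable.
    lowest=$(printf '%s\n%s\n' "$ver" "$fixed" | sort -V | head -n1)
    if [ "$lowest" = "$ver" ] && [ "$ver" != "$fixed" ]; then
        echo "VULNERABLE: $pkg $ver (fixed in $fixed)"
    fi
done)
echo "$report"
```

Real tools like `pip-audit` or `npm audit` do exactly this at scale, with proper version-range semantics instead of a single cutoff.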
You’ll never be able to eliminate risks like these entirely, but they can serve as a useful forcing function for your processes.
Say you’re a developer working on a project — new features, small bug fixes, maybe some performance and reliability improvements — delivering more value to your customers. And then a CVE gets posted: maybe something minor, like a small buffer overflow in a library you don’t use, or maybe something on the scale of Heartbleed that absolutely requires your attention.
The fix for issues like Heartbleed in core libraries can be as straightforward as it is annoying. When Heartbleed struck, I had to update and reboot several hundred servers, most of which hadn’t been rebooted in years. We had limited automation for reboots because we had considered them unnecessary. Security often gets addressed only after the fact, and with a lot of pain.
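A minimal sketch of what that rolling patch-and-reboot could look like, assuming a Debian-style fleet reachable over SSH. The hostnames and package names are hypothetical, and `DRY_RUN` defaults to printing the plan rather than executing it:

```shell
#!/bin/sh
# Rolling patch-and-reboot sketch. Hostnames are invented; a real run
# would read them from your inventory. DRY_RUN=1 (the default) only
# prints the commands so you can review the plan first.
HOSTS="web-01 web-02 db-01"
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for host in $HOSTS; do
    # Patch the vulnerable library, then reboot and wait before moving
    # on, so only one host is out of rotation at a time.
    run ssh "$host" "apt-get update && apt-get install -y openssl"
    run ssh "$host" reboot
    run sleep 120   # crude wait; a real pipeline would poll a health check
done
```

Even this toy version makes the point: if the loop and the health checks already exist and are trusted, a fleet-wide reboot is a routine operation instead of an emergency.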
But here’s where things like containers or VMs — when used correctly — can help your development velocity as well as your application security. The ephemerality popular in modern deployment schemes is something you can and should run with. Redeploying entire stacks — from the kernel in a VM to your application software, and even your application running in canary mode against new versions of the libraries you depend on — is an increasingly trivial task.
You can build pipelines to perpetually update, test, and validate new kernels, firmware, and core libraries. Had we decided, before Heartbleed, that we wanted to run more-current kernels (say, updating every four to six months), we would already have had robust process and tooling in place, and the reboots wouldn’t have caused any business disturbance at all.
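One small piece of such a pipeline is the staleness check that decides when to trigger a rebuild-and-test cycle. A sketch, with the build date, current date, and threshold all hard-coded for illustration (a real pipeline would read the image’s build timestamp from your image store, and the dates here are chosen purely for flavor):

```shell
#!/bin/sh
# Staleness check: if the base image is older than the policy allows,
# kick off a rebuild, test, and rollout. All values below are invented.
MAX_AGE_DAYS=90               # example threshold; pick your own cadence
image_built="2014-01-01"
today="2014-04-08"            # Heartbleed disclosure day

# -u keeps both timestamps in UTC so DST shifts can't skew the math
# (date -d is GNU date).
built_s=$(date -u -d "$image_built" +%s)
today_s=$(date -u -d "$today" +%s)
age_days=$(( (today_s - built_s) / 86400 ))

if [ "$age_days" -gt "$MAX_AGE_DAYS" ]; then
    echo "image is $age_days days old: rebuild, test, and roll out"
else
    echo "image is $age_days days old: within policy"
fi
```

Run from cron or CI, a check like this keeps “we should really rebuild” from silently becoming “we haven’t rebuilt in years.”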
It’s easy to start using — and come to depend on — configuration automation tools like Chef, but it’s just as easy to overlook the point where configuration ends and deployment orchestration begins. Orchestration systems like Puppet with MCollective provide a better framework for tasks like restarting services across your infrastructure or rebooting hosts with sophisticated targeting. These capabilities lay the groundwork for better managing your machine images and establishing immutable infrastructure: an “always deploy new” approach to orchestration.
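That targeting amounts to filtering your inventory by facts about each node, which systems like MCollective do natively via discovery. A toy sketch of the same idea, with an invented three-column inventory of host, role, and environment:

```shell
#!/bin/sh
# Fact-based targeting sketch. The inventory format and hostnames are
# made up; real orchestration discovers these facts from the nodes.
inventory="web-01 webserver production
web-02 webserver production
db-01  database  production
web-03 webserver staging"

role="webserver"
env="production"

# Select only hosts whose role and environment match.
targets=$(echo "$inventory" | awk -v r="$role" -v e="$env" \
    '$2 == r && $3 == e {print $1}')

for host in $targets; do
    # A real system would dispatch the action here, e.g.:
    #   ssh "$host" 'systemctl restart nginx'
    echo "restart nginx on $host"
done
```

The value of a dedicated orchestration layer is that the facts stay current automatically and the dispatch is parallel and authenticated, rather than a hand-maintained list and a serial SSH loop.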
There will still be attack vectors here — your Docker hosts themselves, for example — but you can at least address those separately, since they likely carry a different risk/reward profile than your typical application deployment. Meanwhile, you reap the benefits of container and VM platforms by constantly updating your software, libraries, and kernels, staying as far on top of an ever-growing stream of vulnerabilities as possible.