Most enterprises do not build software or operate infrastructure the same way Netflix does. But there is a lot enterprises can learn from the Silicon Valley world and adopt as policy to improve their security posture. Forward-thinking CIOs should work with their organization’s security function to adopt technology and practices that empower defense. Here are some examples:
This is part of a series we’re calling ‘Securing Modern Infrastructure’, where we explore the implications of modern development and operations pipelines from a security perspective.
Recently, a security firm reported what it claimed was a flaw with major impact on organizations running Linux. (And since giving bugs code names is all the rage these days, they pre-seeded the market with a timely one: “grinch”.)
Linux software bugs have been huge this year, leaving administrators scrambling to patch their systems against Shellshock, Heartbleed, POODLE, and others. With claims that this vulnerability could have an impact similar to Shellshock, I really wanted to dig into what the “grinch” bug means in order to separate the fact from the FUD.
The internet is yet again feeling the aftereffects of another “net shattering” vulnerability: a bug in the shell ‘/bin/bash’ that affects a wide range of Linux distributions and is trivial to exploit. The vulnerability exposes a weakness in how bash imports function definitions from environment variables, allowing attackers to execute code appended to those definitions, and in certain cases enabling unauthenticated remote code execution.
Possible vectors for attack include:

- Web servers that invoke CGI scripts, since request headers are passed to handlers as environment variables
- OpenSSH servers that restrict users via ForceCommand, since the client’s original command is still placed in an environment variable
- DHCP clients that pass server-supplied values to shell scripts
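Because the bug lives in how bash parses imported environment variables, it can be probed locally without any network exposure. Here is a minimal sketch (a harmless probe, not an exploit) that checks whether the local ‘/bin/bash’ executes code trailing a function definition in an environment variable; the variable name and marker string are arbitrary choices for illustration:

```python
import subprocess

def bash_shellshock_vulnerable():
    """Probe /bin/bash for the environment-variable code execution bug.

    A vulnerable bash, on startup, imports the function definition in
    the variable and also executes the code appended after it, so the
    marker string appears in stdout before our actual command's output.
    """
    env = {"x": "() { :;}; echo VULNERABLE"}
    try:
        out = subprocess.run(
            ["/bin/bash", "-c", "echo probe"],
            env=env, capture_output=True, text=True, timeout=5,
        ).stdout
    except FileNotFoundError:
        return None  # no /bin/bash on this host
    return "VULNERABLE" in out

print("bash executes env-injected code:", bash_shellshock_vulnerable())
```

On a patched system the marker never appears, so the probe prints `False`; a `True` result means the shell executed the injected `echo` and the host should be patched immediately.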
One of the things we like at Threat Stack is magic. But since magic isn’t real, we have to come up with the next best thing, so we’ve hired one of the libevent maintainers, Mark Ellzey Thomas (we like to call him our ‘mad kernel scientist’), to make our agent the best in its class.
Many of the more savvy operations and security people who use our service are blown away by the types of information we can collect, correlate, and analyze from Linux servers. They say something to the effect of, “I’ve tried to do this with (Red Hat) auditd, with little to no success… how do you guys do it?”
By now, many of the talks of Black Hat 2014 have been published online. Dan Geer, who delivered this year’s opening keynote in Vegas, has once again lit up the security community with his controversial vision of security and privacy in the internet of the near future.
In one proposal, he outlines a policy under which organizations are mandated to report security breaches (within a certain scope). Here, he draws an analogy to the CDC: medical providers are currently required to report any observed instances of ‘certain communicable diseases’ to the Centers for Disease Control and Prevention, in order to mitigate the public health risk of a widespread pandemic. Dan posits: should we have similar mandates for reporting security breach events?
To anyone who has been the victim of identity theft or privacy violations due to mismanaged security practices, the answer to this question is a resounding “of course”. After all, you don’t want to find out on a mortgage application that someone has hijacked your credit because crucial PII was leaked by an insecure web app you used 5 years ago. Or that you can’t board a plane because at some point your stolen PII was used to create an account that laundered money to a terrorist group you’ve never heard of.
The argument for mandated security breach reporting is not new, though it remains controversial.
If we look at the state of breach reporting today, there are already mandates in the United States that require exactly that for certain classes of data. PCI-compliant entities are required to report credit card breaches to their financial institutions (and with this come serious consequences, e.g. heavy fines per card stolen), and many states (California leading the charge) also require reporting directly to the owners of the stolen data. HIPAA mandates similar disclosures. And the US isn’t the only country enacting this kind of legislation.
Can we presume a similar trend for other classes of data going forward?
Naturally, the devil is in the details. Proponents argue that mandated breach reporting is a compelling impetus for organizations to be better stewards of data that is increasingly replicated across cloud services worldwide. Yet I would argue that there are several philosophical, organizational (by organizational, I mean the organizing bodies who control or “own” the internet infrastructure), and technical challenges that make this difficult to execute at best, and infeasible at worst, except in the most regulated of industries.
If we presume such legislation is inevitable, what kinds of challenges do businesses and other organizations face for compliance? And what kind of technological solutions can we expect to evolve as an industry to make this easier? How does this affect the widespread explosion of SaaS businesses (which have been enabled by the explosion of IaaS) in today’s “cloud”-enabled world?
Let’s start with some of the technical challenges, which are:
a) How do I know if I’m breached?
The fact is that much of the time, organizations have no idea they’ve been compromised. According to the Verizon DBIR (referenced by Geer in his talk), 70-80% of all breaches are discovered by unrelated third parties.
This is an interesting challenge when it comes to penning legislation: it makes little sense to mandate breach reporting without a specified time window (otherwise anyone could exploit a loophole and postpone notification for years). Yet many SaaS businesses operating today, which in turn are enabling rapid innovation for other businesses, do not have the in-house expertise to know if they are breached, let alone the capability to respond within a certain time window. Which brings us to the other problem:
b) How do I know the scope of the data compromised?
Technically, it’s often quite difficult to reconstruct the path of an attack. As Dan pointed out, the security industry is becoming increasingly specialized, and not everyone can afford to have a security forensics expert on hand or pay for breach notification services. Hopefully, security industry innovation in audit logging for systems, APIs, and applications will put this kind of data within reach of non-specialized systems operators, but in the meantime: is a single entry in an application log enough to assume the best case, or the worst possible scenario?
c) Who is responsible?
Even if you limit the scope of a breach to certain classes of data, data lives online in complex systems. There are many attack vectors, and the scope of the entities responsible for breach reporting remains fuzzy.
It’s obvious to almost anyone that the people who handle our credit cards should be required to notify us if that data is stolen, but what about the Facebooks of the world? Think about the proliferation of shared authentication: if I use my Facebook credentials to log into my medical records site, is Facebook now mandated to report breaches to the users of its auth services within a certain time frame, so that the medical records site can respond accordingly? And that’s before you legislate widespread PII breach reporting: the personal information embedded in private social networking sites could allow an attacker to do real damage (e.g. impersonate me well enough to get access to my Amazon account).
If I’m an Infrastructure as a Service provider, and I discover a widespread attack against some hosted file store I provide, how am I to know whether some of the data being stored falls under the breach reporting requirements?
Take it one level down: if I’m an internet service provider, and a compromised router allows traffic hijacking that in turn lets an attacker compromise a medical records site, am I responsible for reporting? As a common carrier, should I even care about the content that I serve?
The philosophical argument
Finally: the internet’s power lies in its roots as an open communications platform. If some kid creates a web app that calculates my personality type based on a quiz asking for my birthday, full name, and blood type, do we enforce breach reporting? Certainly a government can regulate that person’s ability to establish a business and make money within its jurisdiction, but that raises new concerns.
Will we start to see micro-internets, whose boundaries are dictated by the government entities that control the underlying infrastructure and who has access to what? It’s not hard to see that mandated breach reporting would impose real costs on innovation. I can envision an internet where the reporting burden is so heavy that creating a new social media app in the basement with your friends becomes an impossibility. And therein lies the rub.
Will internet startups choose to launch their businesses in the darker, freer parts of the internet whose infrastructure is controlled by friendlier governments (a la what’s happened to online gambling)? Geer alludes to this: the compartmentalization of internet security policy given the power of state actors, and the tension between what we value as freedom on the internet and the role of cybersecurity. Already, the Chinese internet is a very different place from the one you and I know.
In such a world, how can technology help? Can the security industry provide software that mitigates the costs of breach reporting requirements with better-automated ways of detecting breaches? Will such technology be usable and affordable by the mass of startups out there today building real businesses and driving technology forward?
What do you think? Are Dan’s ideas about mandated breach reporting farfetched, or do they represent a world we could soon live in?
At Threat Stack, we’re constantly exploring ways to advance cloud server forensics. We’re especially attentive to this area of cloud security because it’s becoming more critical as the cloud’s attack surface grows.
Forensic logs can lay out the scope of an attack that’s occurred on your servers, but getting to the bottom of what’s been done is usually much easier said than done. In fact, you can easily find yourself paying up to $600/hr for a security consultant to do this exact work if you don’t have the right tools in the first place. But what does it mean to have the right tools?
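To make “the right tools” concrete, here is a minimal, hypothetical sketch of the kind of log mining a forensic investigation often starts with: pulling executed commands out of a standard Linux auditd log (/var/log/audit/audit.log). The regexes and sample records below are illustrative assumptions, not Threat Stack’s implementation:

```python
import re

# EXECVE records in auditd logs carry the argv of each executed
# command; SYSCALL and other record types are skipped here.
EXECVE_RE = re.compile(r'type=EXECVE msg=audit\(([\d.]+):(\d+)\):(.*)')
ARG_RE = re.compile(r'a\d+="([^"]*)"')  # a0="...", a1="...", etc.

def execve_commands(log_lines):
    """Yield (timestamp, command) for each EXECVE record in the log."""
    for line in log_lines:
        m = EXECVE_RE.match(line)
        if not m:
            continue
        ts = float(m.group(1))
        args = ARG_RE.findall(m.group(3))
        yield ts, " ".join(args)

# Hypothetical sample records for illustration:
sample = [
    'type=EXECVE msg=audit(1418849000.123:456): argc=2 a0="cat" a1="/etc/shadow"',
    'type=SYSCALL msg=audit(1418849000.123:456): arch=c000003e syscall=59',
]
print(list(execve_commands(sample)))
# -> [(1418849000.123, 'cat /etc/shadow')]
```

Even a toy extractor like this shows why timeline reconstruction is hard: the command stream alone tells you what ran, but correlating it with users, sessions, and network activity (the part consultants bill for) requires joining many record types together.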
Do existing methods work?
Too many times we hear and read that the cloud is insecure or, worse, that the cloud is already secure because IaaS providers have security groups and protection capabilities. These beliefs are all too common and far too wrong. By using outsourced cloud infrastructure, you are only outsourcing your infrastructure, not your security. Security is always your responsibility.