Dan Geer’s Mandated Breach Reporting Vision Of The Cyber-Future: Can The Security Industry Help?

Note: We’re taking a break this week from our weekly SecDevOps series to offer some key takeaways from Black Hat which our co-founder, Jen Andre, attended.

By now, many of the talks of Black Hat 2014 have been published online. Dan Geer, who delivered this year’s opening keynote in Vegas, has once again lit up the security community with his controversial vision of security and privacy in the internet of the near future.

In one proposal, he outlines a policy under which organizations are mandated to report security breaches (within a certain scope). Here, he draws an analogy to the CDC. Currently, medical providers are required to report any observed instances of ‘certain communicable diseases’ to the Centers for Disease Control and Prevention, in order to mitigate the public health risk of a widespread pandemic. Dan posits: should we have similar mandates for reporting security breach events?

To anyone who has been the victim of identity theft or privacy violations due to mismanaged security practices, the answer to this question is a resounding “of course”. After all, you don’t want to find out on a mortgage application that someone has hijacked your credit score because crucial PII was leaked by an insecure web app you used five years ago. Or that you can’t board a plane because at some point your stolen PII was used to create an account to launder money for a terrorist group you’ve never heard of.

The argument for mandated security breach reporting is not new, though it remains controversial.

If we look at the state of breach reporting today, there are already mandates in the United States that do exactly that for certain classes of data. PCI-compliant entities are required to report credit card breaches to their financial institutions (and with this come serious consequences, e.g. heavy fines per card stolen), and many states (California leading the charge) also require reporting directly to the owners of the stolen data. HIPAA mandates similar disclosures. And the US isn’t the only country where this kind of legislation is being enacted.

Can we presume a similar trend for other classes of data going forward?   

Naturally, the devil is in the details. Proponents argue that mandated breach reporting is a compelling impetus for organizations to be better stewards of data that is increasingly replicated across cloud services worldwide. Yet I would argue that several philosophical, organizational (by organizational, I mean the organizing bodies who control or “own” internet infrastructure), and technical challenges make this difficult to execute at best, and infeasible at worst, except in the most regulated industries.

If we presume such legislation is inevitable, what kinds of challenges do businesses and other organizations face for compliance? And what kind of technological solutions can we expect to evolve as an industry to make this easier? How does this affect the widespread explosion of SaaS businesses (which have been enabled by the explosion of IaaS) in today’s “cloud”-enabled world?

Let’s start with some of the technical challenges:

a) How do I know if I’m breached?

The fact is that much of the time, organizations have no idea they’ve been compromised. According to the Verizon DBIR (referenced by Geer in his talk), 70-80% of all breaches are discovered by unrelated third parties.

This is an interesting challenge when it comes to penning legislation: it makes little sense to mandate breach reporting without a specified time window (otherwise anyone could exploit the loophole and postpone notification for years). Yet many SaaS businesses operating today, which in turn enable rapid innovation for other businesses, lack the in-house expertise to know if they are breached, let alone the capability to respond within a certain time window. Which brings us to the other problem:

b) How do I know the scope of the data compromised?

Technically, it’s often quite difficult to reconstruct the path of an attack. As Dan pointed out, the security industry is becoming increasingly specialized, and not everyone can afford to keep a security forensics expert on hand or pay for breach notification services. Hopefully, security industry innovation in audit logging for systems, APIs, and applications will put this kind of data within reach of non-specialized systems operators, but in the meantime: is a single entry in an application log enough to assume the best case, or the worst possible scenario?
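To make the audit-logging point concrete, here is a minimal sketch of what structured audit logging might look like: every access to a sensitive record is written as a machine-parseable JSON event, so a later investigation can reconstruct which records a given actor (or a compromised credential) touched. The field names and schema here are illustrative assumptions, not any standard.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")

def audit_event(actor, action, resource, outcome):
    """Emit one structured audit record; returns the dict for inspection."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # authenticated user or API key id
        "action": action,      # e.g. "read", "update", "export"
        "resource": resource,  # record identifier, never the data itself
        "outcome": outcome,    # "success" or "denied"
    }
    # Log the event as a single JSON line so it can be grepped,
    # indexed, and replayed during a forensic investigation.
    audit_log.info(json.dumps(event))
    return event

# Example: record that user "u123" read a (hypothetical) patient record.
evt = audit_event("u123", "read", "patient/rec-42", "success")
```

With events like these collected centrally, "what was the scope of the compromise?" becomes a query over the actor and resource fields rather than guesswork from a single application log line.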

c) Who is responsible?

Even if you limit the scope of a breach to certain classes of data, data lives online in complex systems. There are many attack vectors, and the scope of the entities responsible for breach reporting remains fuzzy.

It’s obvious to almost anyone that the people who handle our credit cards should be required to notify us if that data is stolen, but what about the Facebooks of the world? Think about the proliferation of shared authentication: if I use my Facebook credentials to log into my medical records site, is Facebook now mandated to report breaches within a certain time frame to the users of its auth services, so that the medical records site can respond accordingly? And that’s before you legislate widespread PII breach reporting: private social networking sites hold personal information that could allow an attacker to do real damage (e.g. impersonate me well enough to get access to my Amazon account).
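If such provider-to-relying-party notification were ever mandated, the relying site would need machinery to act on it. The sketch below imagines how the medical records site might consume a breach notice from its identity provider and revoke local sessions. The payload format, the provider name, and the session-store shape are all hypothetical; no real provider publishes this exact interface.

```python
import json

# Hypothetical in-memory session store, keyed by "provider:account_id".
active_sessions = {
    "fb:1001": {"session": "abc"},
    "fb:1002": {"session": "def"},
    "fb:1003": {"session": "ghi"},
}

def handle_breach_notice(raw_payload, sessions):
    """Revoke local sessions for externally-authenticated accounts
    named in an identity provider's (assumed) breach notification."""
    notice = json.loads(raw_payload)
    revoked = []
    for account in notice.get("compromised_accounts", []):
        key = f"{notice['provider']}:{account}"
        # Drop the session if we have one for this account.
        if sessions.pop(key, None) is not None:
            revoked.append(key)
    return revoked

# The provider reports two compromised accounts; their sessions are dropped.
payload = json.dumps({
    "provider": "fb",
    "compromised_accounts": ["1001", "1003"],
})
revoked = handle_breach_notice(payload, active_sessions)
# revoked → ["fb:1001", "fb:1003"]; only "fb:1002" remains logged in.
```

Even this toy version shows the coordination burden: the mandate on one party (the identity provider) only matters if every downstream relying party builds and operates something like this.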

If I’m an Infrastructure as a Service provider, and I discover a widespread attack against a hosted file store I provide, how am I to know whether some of the data being stored falls under the breach reporting requirements?

Take it one level down: if I’m an internet service provider, and a compromised router allows traffic hijacking that in turn compromises a medical records site, am I responsible for reporting? As a common carrier, should I even care about the content that I serve?

The philosophical argument

Finally: the internet’s power lies in its roots as an open communications platform. If some kid creates a web app that calculates my personality type based on a quiz that asks for my birthday, full name, and blood type, do we enforce breach reporting? Certainly we can regulate that person’s ability to establish a business and make money within the purview of the government that holds jurisdiction, but that raises new concerns.

Will we start to see micro-internets, whose boundaries are dictated by the government entities that control the underlying infrastructure and who has access to what? It’s not hard to see that mandated breach reporting would impose real costs on innovation. I can envision an internet where the reporting onus is so burdensome that creating a new social media app in the basement with your friends is an impossibility. And therein lies the rub.

Will internet startups choose to launch their businesses in the darker, freer parts of the internet whose infrastructure is controlled by friendlier governments (a la what’s happened to online gambling)? Geer alludes to this: the compartmentalization of internet security policy given the power of state actors, and the tension between what we value as our freedom on the internet and the role of cyber security. Already, the Chinese internet is a very different place from the one you and I know.

In such a world, how can technology help? Can the security industry provide software that mitigates the costs of breach reporting requirements with better-automated ways of detecting breaches? Will such technology be usable and affordable by the mass of startups out there today building real businesses and driving technology forward?

What do you think? Are Dan’s ideas about mandated breach reporting farfetched, or do they represent a world we could soon live in?