
Updating Shannon’s Maxim

May 28, 2010

On May 5th of this year, I gave a presentation at the 2010 IEEE Key Management Summit in Lake Tahoe, and the slides and videos are being posted online.  My presentation can be viewed in the High-Def (400M) version or the more usable (200M) one.  This conference was all about key management and crypto.

I talked about a flaw in cryptographic design principles.

Existing Guidance

Auguste Kerckhoffs wrote six principles in 1883 for designing military cryptosystems; his second principle is the most enduring:

It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience.

This basically says that secrets should be kept in the right place, not designed into the system, because the enemy will eventually figure the system out.  Claude Shannon greatly simplified this when he said “The enemy knows the system” (around the 1940s, best I can tell).  It really gets to the point that adversaries are motivated to dig in and uncover how things work.  We shouldn’t invent our own algorithms, and we should assume everyone is smarter than we are.  This is great advice, but we, as an industry, routinely fail to follow this simple design principle.  I give some examples in my talk of projects I’ve worked on that don’t heed this guidance.

But I’ve also found products that met Shannon’s maxim and yet still had security problems.  They assumed everyone was smarter, not just adversaries, which leads to the incorrect assumption that administrators and users share the same passion and motivation toward the solution as adversaries do.

Updating Shannon

I wanted to keep Shannon’s maxim because we still need to learn that lesson: we need to understand how to handle secrets and where to put our trust.  But we also need to account for the motivation of administrators, operators, and users, which generally is not security.  To that end, I created an updated maxim:

The enemy knows the system, and the allies do not.

Repeat it to yourself.  It’s short enough to memorize and mobile enough to carry around and whip out on a moment’s notice.  It acknowledges that the people configuring the system care a lot more about making it operational than they do about making it secure.  They are tasked with delivering on some other primary task: enabling email, setting up a service for business clients.  Security concerns are secondary and often aren’t discovered until much later, and even then it was probably the security team’s fault.

As we design our solutions, cryptographic or not, we need to account for this motivation: we need to build security options that align with the operational options.  We need to understand that people aren’t motivated to evaluate all the options and pick the best; they are motivated to pick the first option that works and move on.  That realization means that if we list rot13 as a viable encryption algorithm, someone, somewhere will select it and operate that way.
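
To make that concrete, here’s a minimal sketch of what aligning the easy path with the secure path can look like in code.  The names (APPROVED, DEFAULT, select_cipher) are mine, not from the talk: the default is a vetted algorithm, weak legacy options like rot13 simply aren’t on the menu, and anything off-list fails loudly.

```python
# Hypothetical sketch: the easiest choice is the vetted one, and weak
# legacy algorithms are not offered at all.

APPROVED = ["aes-256-gcm", "chacha20-poly1305"]  # vetted algorithms only
DEFAULT = APPROVED[0]                            # the "first thing that works"

def select_cipher(name=None):
    """Return a vetted cipher name; rot13 is simply not on the menu."""
    if name is None:
        return DEFAULT  # operators who don't care still land on the safe pick
    if name not in APPROVED:
        # Fail loudly instead of quietly accepting a weak legacy choice.
        raise ValueError("cipher %r is not an approved option" % name)
    return name

print(select_cipher())               # -> aes-256-gcm
try:
    select_cipher("rot13")
except ValueError as err:
    print(err)                       # cipher 'rot13' is not an approved option
```

The design choice is the point: the operator picking the first option that works gets the secure one, because it is the only kind offered.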

Sometimes this means taking away options.  What would happen if every browser just failed on an invalid certificate?  What if we didn’t assume the user could read X.509 and simply failed… how quickly would providers figure out how to create and maintain valid certificates?  How much would companies invest in maintaining certificates if the service itself depended on it?  Now, I’m not suggesting this is a solution to the PKI/X.509/SSL problem, just that there are opportunities to align security goals with operational goals, and we should seek those out, even create those instances.
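
Here’s a small sketch of that “just fail” behavior using Python’s standard library (an anachronism for a 2010 talk, and the badssl.com test host is my choice, not from the presentation): certificate verification is on by default, and an invalid certificate is a hard error with no warning dialog to click through.

```python
# Sketch: a TLS client that simply refuses invalid certificates.
# expired.badssl.com is a public test host with an expired certificate.

import socket
import ssl

ctx = ssl.create_default_context()   # verification on; no opt-out offered here

def fetch_tls_version(host):
    with socket.create_connection((host, 443), timeout=10) as sock:
        # The handshake verifies the chain and hostname; an invalid
        # certificate raises, with no "proceed anyway" path.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(fetch_tls_version("example.com"))        # valid cert: succeeds
try:
    fetch_tls_version("expired.badssl.com")    # invalid cert: hard failure
except ssl.SSLCertVerificationError as err:
    print("refused:", err)
```

If every client behaved this way, keeping certificates valid would stop being a security chore and become an operational requirement, which is exactly the alignment the maxim is after.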