
Archive for May, 2010

Updating Shannon’s Maxim

May 28, 2010

On May 5th of this year, I gave a presentation at the 2010 IEEE Key Management Summit in Lake Tahoe, and the slides and videos are being posted online. My presentation can be viewed in the high-def (400M) version or the more usable (200M) version. The conference was all about key management and crypto stuff.

I talked about a flaw in cryptographic design principles.

Existing Guidance

Kerckhoffs wrote several principles in 1883 on designing cryptosystems for military applications; his second principle is the most enduring:

It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience.

This basically says that secrets should be kept in the right place rather than designed into the system, because the enemy will eventually figure the system out. Claude Shannon greatly simplified it when he said “the enemy knows the system” (around the 1940s, best I can tell). It really gets to the point that adversaries are motivated to dig in and uncover how things work. We shouldn’t invent our own algorithms, and we should assume everyone is smarter than we are. This is great advice, but we, as an industry, routinely fail to follow this simple design principle. I give some examples in my talk of projects I’ve worked on that don’t heed this guidance.

But I’ve also found products that met Shannon’s maxim and yet still had security problems. They assumed everyone was smarter, not just adversaries, which results in the incorrect assumption that administrators and users share the same passion and motivation toward the solution that adversaries have.

Updating Shannon

I wanted to keep Shannon’s maxim because we still need to learn that lesson: we need to understand how to handle secrets and where to put our trust. But we also need to account for the motivation of administrators, operators and users, which generally is not security. To that end, I created an updated maxim:

The enemy knows the system, and the allies do not.

Repeat it to yourself. It’s short enough to memorize and mobile enough to carry around and whip out on a moment’s notice. It acknowledges that the people configuring the system care a lot more about making it operational than they do about making it secure. They are tasked with delivering on some other primary task: enabling email, setting up a service for business clients. Security concerns are secondary and often aren’t discovered until much later, and even then it was probably the security team’s fault.

As we design our solutions, cryptographic or not, we need to account for this motivation; we need to build security options that align with the operational options. We need to understand that people aren’t motivated to evaluate all the options and pick the best one; they are motivated to pick the first option that works and move on. That realization means that if we list rot13 as a viable encryption algorithm, someone, somewhere will select it and operate that way.
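To make that concrete, here is a minimal sketch of a configuration layer that simply never offers the weak option, so the first option that works is also an acceptable one. The select_cipher helper and the approved list are hypothetical, invented for this example rather than taken from my talk or any standard.

```python
# A configuration layer that only ever offers vetted options. The names and
# the approved list below are illustrative assumptions, not a recommendation.

APPROVED_CIPHERS = {"AES-256-GCM", "ChaCha20-Poly1305"}

def select_cipher(requested=None):
    """Return an approved cipher; fail closed on anything else."""
    if requested is None:
        return "AES-256-GCM"  # the safe default is also the path of least resistance
    if requested not in APPROVED_CIPHERS:
        raise ValueError(f"{requested!r} is not an approved cipher")
    return requested

print(select_cipher())  # AES-256-GCM
try:
    select_cipher("rot13")
except ValueError as err:
    print(err)  # rot13 is simply not on the menu
```

The design choice is that the easy path and the secure path are the same path.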

Sometimes this means taking away options. What would happen if every browser just failed for an invalid certificate? What if we didn’t assume the user could read X.509 and simply failed… how quickly would providers figure out how to create and maintain valid certificates? How much would companies invest in maintaining certificates if the service itself depended on it? Now, I’m not suggesting this is a solution to the PKI/X.509/SSL problem, just that there are opportunities to align security goals with operational goals, and we should seek those out, even create those instances.
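As a tiny illustration of the “just fail” idea, here is a hedged sketch using only Python’s standard library; the fetch_banner helper is made up for this example, but the verification behavior is the library’s stock behavior.

```python
# "Just fail" certificate handling: the default ssl context requires a valid,
# hostname-matching certificate, and this code offers no way to click through.
import socket
import ssl

def fetch_banner(host, port=443):
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# An expired or mismatched certificate raises ssl.SSLCertVerificationError here;
# there is no "proceed anyway" branch, so the only fix is to fix the certificate.
print(fetch_banner("example.com"))
```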


Four Myths of Risk

May 25, 2010

I love getting feedback, especially constructive feedback.  Take this feedback from a friend who read my blog post “Grand Unifying Definition of Risk”,

“Way to go Jacobs… what a colossal waste of time.  Way to blog about something nobody gives a crap about.”

I, of course, had some follow-up questions, and a healthy discussion ensued. I learned that my friend, who is by all accounts reasonably intelligent, saw no connection between that post and reality. What did I miss? What could I have done differently, and where did his reality diverge from my post? Here is a series of myths that I uncovered in and since that conversation.

Myth 1: Risk belongs with a risk management group

Fact is, most everyone working in I.T. makes risk-based security decisions on every project, every day. It’s just that they don’t think of these things as “risk” decisions; they think of them as getting stuff done. Decisions are part of our daily experience, but nobody gives much thought to the intuitive risk analysis that goes into each decision, and, perhaps worse, nobody thinks about how they might improve on the analysis or those decisions.

The first step in addressing any problem is realizing that a problem exists.  It’s more complicated in infosec because we all realize a problem exists, but we make the mistake of thinking hardware or the Next Best Thing will solve it rather than looking towards the people with fingers on the keyboards and what their decisions mean.

Myth 2: Spherical Cows are useless

When I pointed out the story of the spherical cow in this conversation, it struck home. According to my myopic friend, I was talking about theoretical blatherings that didn’t have any impact on reality. I don’t disagree, but the important distinction is “not yet.”

I see this lack of reality when reading about decision theories that say things like “this method assumes that perfect data exists.” But there is value in understanding how things work in situations with fewer variables before more complexity is introduced. Spherical cows are great theories for working out multiple ways (not) to solve a problem; just don’t assume the farmer will care. In other words, just because my reality has wacky theories doesn’t mean everyone else’s does too.

Myth 3: Can’t teach an old dog new tricks

Yeah, that’s a myth and you know who you are.

Myth 4: People Don’t Care

During the conversation, as I mentioned some new program or another, he said “you can’t implement a program to make people care.” Brilliant. Spot on and brilliant. Except people do care, a whole lot, just not always about the things we’d like them to care about. People care about keeping their job, perhaps having good coffee in the break room, or going up north for the weekend. The trick to instigating positive change is aligning what people already care about with the positive change we are seeking. In other words, we shouldn’t just be figuring out how to write secure code; we should figure out what our developers care about and how that can be related to secure coding practices.

Anyone who remembers what infosec was like in the ’90s knows that security back then was even more embarrassing than it is now. Recommending an internet-facing firewall and having users change their default passwords were staples of the few security assessments performed back then. How did we go from there to huge stinkin’ security budgets and controls? Regulations, or more specifically, enforcement of regulations, forced the alignment of (checklist) security with things people already cared about: not getting fired, getting paid and still making it to lunch.

I can honestly say that my theories of risk and security do not mean squat except to maybe a handful of people who largely want to assume with me that we exist in a vacuum without the influence of gravity. Once we get comfortable with the theory, then we can begin figuring out how to deal with the reality of people not giving a crap about it.

Categories: Risk

Name Change: Behavioral Security

May 23, 2010

I’m changing the title of this blog from something I picked because I had a blank field in front of me to “Behavioral Security”. Partly because it’s less to type, but also because I think it is much more to the point of my thinking: information security is mostly about human behavior. In order to make improvements, we need to account for the humans that think, create, install, use, analyze and break the technology and processes protecting our goody baskets.

Behavioral Security uses social, cognitive and emotional factors in understanding the decisions of individuals and institutions in the management of information.

I largely took that from the Wikipedia definition for “behavioral economics” and I think it needs some tuning, but yeah, that’s the theory I’m sticking with for now.

The Grand Unifying Definition of Risk

May 20, 2010

My plan is to walk through my current thinking on the complex field of risk management.  No way is this going to be a short post.

Ya know how some presentations begin with “Webster’s defines <topic> as …”? Way back in time, when I started my frustrating journey with the concepts in risk management, I did the same thing with “risk” for my own benefit. Try it sometime: go out and look for a definition of risk. FAIR defines it as “probable frequency and probable magnitude…”; NIST SP 800-30 defines it as “a function” (of likelihood and impact); NIST IR 7298 defines it as a “level of impact…”; ISO 27005 refers to it as a “potential”; and a Microsoft Risk Management Guide defines risk as “the probability of a vulnerability being exploited”. One of the better ones I’ve seen recently comes from ISO 31000, which defines risk as the “effect of uncertainty on objectives”. Me likey that one (it is the impetus for this post).

But what the hackslap are we trying to measure here?  Are we trying to measure an effect? A probability/magnitude/consequence?  Short answer is a very firm and definite maybe.

Finally, after years of sleepless nights trying to slay this beast, I think I’ve come far enough to put a stake in the ground. My screw-you-world-I-will-do-it-myself attitude comes through for me with this, my grand unifying definition of risk:

Risk is the uncertainty within a decision.

Where’s the likelihood, you ask? I’ll get to that, but what I like about this definition is that it’s high-level enough to share with friends. Keeping the function or formula out of the definition (which is what 95% of definitions build in) makes it portable. This type of definition can be passed between practitioners of OCTAVE, NIST, FAIR and others. There are two parts to my definition: the uncertainty and the decision. The term “uncertainty” doesn’t sit entirely well with me, so depending on the audience I’ll throw in “probability and uncertainty” or just leave it at “probability”.

On Uncertainty

What I mean by uncertainty is the combination of our own confidence, ability and limitations in estimating the probability of a series of complex and interdependent events. Quite simply, the first part of my formula includes people and mathematical models, with the people representing our own limitations, biases, ingenuity, irrationalities, fears and adaptability, and the math representing the risk models and formulas most people consider risk analysis. Risk models are just a component of the first part of my definition. They are a strong contributing component, but really they are here to support the second part: the decision.

Factoring in the people means that information risk requires an understanding of behavioral economics, psychology and game theory (to name a few), because we’re not going to understand the efficacy of our own assessments, nor are we going to effectively address risk, if we don’t account for the human element. While most assessments focus on the technology, meaningful change in that technology can only be influenced by people and through people: the people who thought it up, created it, installed it, tested it and eventually use it and break it. We need to account for the human element; otherwise we’re destined for mediocrity.
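As a toy illustration of “people and math” (every number here is made up, and the calibration factor is purely an assumption for the sketch), consider a crude loss model whose interval gets widened to account for analyst overconfidence:

```python
# The math: a crude Monte Carlo loss model. The people: a calibration factor
# standing in for our tendency to give intervals that are too narrow.
# Every number in this sketch is invented for illustration.
import random

def loss_interval(p_event=0.3, low=10_000, high=250_000, runs=10_000):
    losses = sorted(
        random.uniform(low, high) if random.random() < p_event else 0.0
        for _ in range(runs)
    )
    return losses[int(runs * 0.05)], losses[int(runs * 0.95)]  # 90% interval

lo, hi = loss_interval()
overconfidence = 1.5  # hypothetical: people undershoot how uncertain they are
mid, half = (lo + hi) / 2, (hi - lo) / 2 * overconfidence
print(f"model interval:      {lo:12,.0f} - {hi:12,.0f}")
print(f"calibrated interval: {max(0.0, mid - half):12,.0f} - {mid + half:12,.0f}")
```

The point of the sketch is not the arithmetic; it is that the model’s output and our confidence in it are two different inputs to the same estimate.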

On Decisions

The other important consideration is the context of risk, and I haven’t come across an instance of risk analysis that wasn’t performed to assist in some type of decision process. That means we get to drag all of the decision sciences into this funball of risk. To simplify what I mean: we need to understand the influence of how we frame our problems, we need to gather the right kinds of data (largely from the uncertainty portion), and we need to identify options. From there we need to come to some kind of conclusion, execute on it and finally (perhaps most importantly) get feedback. We need a way to measure the influence our decisions had so that we may learn from our decisions and improve on them over time. Feedback is not (just) part of the risk model; it’s part of the entire decision process.

And now I’ll make up a formula, just to mess with people:

Risk = Decision(People(Math)). Or wait, how about People(Decision(Math))? The concept is that the risk models are an input into the decision process. I think it goes without saying that the models should never become the decision. And we cannot forget that every part of this process may be greatly influenced by the human element: the inputs into the models, the models themselves, and the execution of and feedback on the decisions.
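Here is one toy reading of that formula as literal function composition. The function names, the bias factor and the budget are all placeholders I invented for illustration; no real model is implied.

```python
# Risk = Decision(People(Math)), read as function composition. Everything here
# is a placeholder invented for illustration; no real model or budget implied.

def math_model(scenario):
    # the risk model: pretend it produces an annualized loss estimate
    return {"annual_loss": 120_000}

def people(model_output, bias=1.4):
    # biases, limitations and irrationalities distort how the number is read
    return {"perceived_loss": model_output["annual_loss"] * bias}

def decision(perception, mitigation_cost=100_000):
    # the part the whole exercise exists to support
    return "mitigate" if perception["perceived_loss"] > mitigation_cost else "accept"

choice = decision(people(math_model("stolen laptop")))
print(choice)  # feedback would mean later measuring what this choice actually cost us
```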

On Dasher, on Prancer

There is a huge risk of paralysis within my definition of risk. There are many other seemingly disjointed fields of study intertwined here, each at its own level of development and maturity. I think we’re on the right path: we’re trying things, we’re out there talking about risk, plugging in risk models, creating and adapting methodologies, making decisions and every once in a while stumbling on some feedback. I’m not suggesting that we stop and re-evaluate the whole thing. On the contrary, we should continue full steam ahead, but with constant re-assessment and questioning. I’m not entirely optimistic that we’ll ever get to some grand unified theory of risk in my lifetime, nor am I optimistic that any one definition will necessarily stick, but that doesn’t mean I’m not going to try.

One last point: I don’t want to say that risk is infosec-centric. Far from it. Information security risk is but one tiny offshoot of risk, and we have much to learn from (and contribute to) other areas. We need to be on a constant lookout for giants so that we can stand on their shoulders.

Categories: Risk

Biases and Fallacies: Infosec Style

May 18, 2010

I’m starting a blog, and rather than give you some long, drawn-out intro into who I am and why I think I have something to say, I’ll just jump right in after this short introduction: I enjoy information security, I enjoy cryptography and I enjoy discussions on risk.

On to my point here.

One of the biggest problems in security is the human element. Wait, let me rephrase that: one of the biggest areas where I think security could be improved is accounting for the human element. Humans are full of logical fallacies and cognitive biases. They are horrible at collecting, storing and retrieving information (especially sensitive information), and they stink as cryptographic processors. Yet they are essential components and have one great quality: they can be predictable.

One of the things we can do as owners/operators of one of these machines is to understand how we work (and, conversely, how we don’t). To that end, I’ve read (and re-read) various articles, sites and books about our biases and fallacies. There are plenty of good sources out there, but the only way I’ve found to overcome biases and fallacies is to learn about them and try to spot them when we encounter them, which most everyone does daily. Please feel free to take some time, read through the lists and try to keep an eye out for them in your daily routine.

To kick things off, the infosec industry has some easy-to-spot biases. We have the exposure effect of best practices, and the zero-risk bias is far too prevalent, especially in cryptography. We have enormous problems with poorly anchored decisions (focusing on the latest media buzzwords), and we are drowning in the Von Restorff effect on sensational security breaches. But that’s not what I’m really going to talk about. As I was reading the Wikipedia entry on the List of Cognitive Biases, I thought some of them sounded completely made up. That got me thinking… Hey! I could make some up too. We must have our own biases, errors and fallacies, right? I mean, we have enough false consensus that we can think we’re unique and have our own unique biases and fallacies, right?

After one relatively quiet evening full of self-entertainment, I have come up with my first draft of security biases and fallacies (I will spare Wikipedia my edits). Some of these are not exclusive to infosec, but they made the list anyway. In no particular order…

Defcon Error: Thinking that there are only two modes that computers operate in: Broken-with-a-known-exploit and Broken-without-a-known-exploit.

Moscone Effect: an extension of the Defcon Error, but additionally thinking there is a product/service to help with that problem.

PEBCAK-Attribution Error: thinking that security would be easy if it weren’t for those darn users.

Pavlov’s Certificate Error: Thinking that it is somehow beneficial for users to acknowledge nothing but false-positives.

Reverse-Hawthorne Effect: The inability to instigate change in spite of demonstrating how broken everything is, also known as the Metasploit fallacy.

Underconfidence effect: The inability to instigate change in spite of rating the problem a “high”, also referred to as risk management.

The Me-Too error: Reciting the mantra “compliance isn’t security”, but then resorting to sensational stories, unfounded opinions and/or gut feel.

CISSP Error: ‘nuf said

The Cricket-Sound Error: thinking that other I.T. professionals value a secure system over an operational system.

The Not-My-Problem Problem: Thinking that administrators will weigh all the configuration options carefully before selecting the best one (rather than the first one that doesn’t fail).

Policy Fallacy:  a logical fallacy that states: Companies produce policies, employees are supposed to read policies, therefore employees understand company policies.

The Mighty Pen Fallacy: an extension of the Policy Fallacy, thinking that just by writing something into a policy (or regulation), that dog will poop gold.

Labeled-Threat Bias: Thinking that a pre-existing threat must be solved through capital expenditure once it has a label, also known as the “APT bias”, more common among sales, marketing, some media and uninitiated executives.

Illusory Tweeting: thinking the same media used to instantaneously report political events and international crises around the world is also well suited for pictures of your cat.

Security-thru-Google Fallacy: I don’t have one for this but the name is here for when I think of it.

Encryption Fallacy:  if encrypting the data once is good, then encrypting it more than once must be better.

Tempest Bias: the opposite of a zero-risk error: the incorrect thinking that rot13 doesn’t have a place in corporate communications.

I know there are more…  Happy Birthday blog o’ mine.