
Archive for August, 2010

Why Risk = Threat and Vulnerability and Impact

August 23, 2010

Jeff Lowder wrote up a thought-provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense,” and I wanted to get my provoked thoughts into print (and hopefully out of my head).  I’m not going to disagree with Jeff for the most part.  I’ve had many a forehead-palming moment seeing literal interpretations of that statement.

Threats, Vulnerabilities, Impact

As most everyone in ISRA is prone to do, I want to redefine/change those terms first and then make a point.  I’d much rather focus on the point than the terms themselves, so bear with me.  When I redefine those terms, I don’t think I’ll be saying anything different from Jeff, but I will be making them clearer in my own head as I talk about them.

In order for a risk to be realized, a force (threat) must overcome resistance (vulnerability), causing impact (bad things).

We are familiar with measuring forces and resistances (resistance is just a force in the opposite direction), which is why we see another abused formula: Risk = Likelihood * Impact.  Because threat and vulnerability are both forces, they may be easily combined into this new “likelihood” (or whatever term represents that concept).  And now here is the point:

For a statement of risk to have meaning, the measurements of threat, resistance and impact cannot be combined or simplified.

I’ll use acceleration as an example: acceleration is the change in something’s speed and direction over time.  Those are distinct variables used to convey what acceleration is.  We cannot multiply speed and direction.  We cannot derive some mathematical function that simplifies speed and direction into a single number.  It is quite simply stated as distinct variables, and meaning is derived from the combination of them.  The same is true with a measurement of risk: we cannot combine the threat and the resistance to it and still maintain our meaning.

For example, a skilled attacker applying force to a system with considerable resistance is not at all the same thing as my 8-year-old running Metasploit against an unpatched system.  Yet if we attempt to combine the components, these two scenarios may end up with the same “likelihood,” even though they are very clearly different risks with different methods of reducing each one.
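As a rough sketch of that problem (the scores and the 1-to-10 scale below are made up purely for illustration), multiplying the components into one number erases exactly the difference that matters:

```python
# Made-up scores on an arbitrary 1-10 scale, purely for illustration.
scenario_a = {"threat": 9, "vulnerability": 2}  # skilled attacker, well-defended system
scenario_b = {"threat": 2, "vulnerability": 9}  # 8-year-old with Metasploit, unpatched system

def collapsed_likelihood(scenario):
    # The naive combination: multiply the components into a single "likelihood".
    return scenario["threat"] * scenario["vulnerability"]

print(collapsed_likelihood(scenario_a))  # 18
print(collapsed_likelihood(scenario_b))  # 18 -- identical, yet the risks and their treatments differ
```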

On Risk Models

Since any one system has multiple risks, saying that risk components cannot be combined or simplified is problematic.  Most decision makers I’ve known really likey-the-dashboard.  We want to be able to combine statements of risk with one another to create a consumable and meaningful input into the decision process.  Enter the risk model.  Since the relationships between risks and the components that make up a risk are complex, we want to do our best to estimate or simulate that combination.  If we could account for every variable and interaction we’d do so, but we can’t.  So we model reality, seek out feedback and see how it worked, then (in theory) we learn, adapt and try modeling again. 

We cannot simplify a statement of risk into a single number, but we can state the components of risk as the probability that a force will overcome resistance, with a probable impact.

We want to be aware of how the components of risk do or don’t interact and account for that in our risk models.  That’s where the secret sauce is.
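A minimal sketch of what that can look like (the distributions and parameters below are invented and not calibrated to anything real): keep force, resistance and impact separate, and only combine them by simulation.

```python
import random

def simulate(trials=100_000):
    """Monte Carlo sketch: does a force overcome resistance, and with what probable impact?"""
    events = 0
    total_loss = 0.0
    for _ in range(trials):
        force = random.gauss(50, 15)       # threat: capability/effort applied (made-up units)
        resistance = random.gauss(60, 10)  # resistance: how strongly the control pushes back
        if force > resistance:             # the force overcomes the resistance
            total_loss += random.lognormvariate(10, 1)  # probable impact (made-up dollars)
            events += 1
    return events / trials, total_loss / trials

p_overcome, avg_loss = simulate()
print(f"P(force overcomes resistance): {p_overcome:.3f}")
print(f"Average loss per trial: ${avg_loss:,.0f}")
```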

Categories: Risk

Acts of Joe Algorithm

August 5, 2010

Every once in a while I am blessed with enough time to catch up on the writings of Gunnar Peterson, and his post “Acts of God Algorithm” points out some problems being felt in the insurance space.  He ends with this little nugget:

we have a similar situation in infosec where models are predicated on inside the firewall and outside the firewall, however that model diverged from reality about 10 years ago.

One of the contributing factors to this problem is the reliance on controls-based rather than scenario-based assessments.  We get stuck thinking that if we just look at “common” controls or follow someone else’s “best practices,” we should be good.  The fundamental flaw in that is thinking that the rules of the game are constant, and the rules are anything but constant.

The Constants

There are two constants in infosec: 1) people will always try to do things they shouldn’t, and 2) everything else changes.  It’s one thing to work in a field like accounting where the rules of math don’t change: 2 + 2 always equals 4.  Engineers can learn the basic laws of physics and apply the same formulas they learned in school 30 years later.  Those rules are relatively static.  Sure, those fields have advancements; they learn how to do something better or more efficiently, but the foundations they build on won’t change.  We aren’t so lucky in infosec.

Controls Based Assessments

Let’s walk through a typical “risk assessment” at a very high level.  This is the basic process as sold/promoted by various organizations and overpaid consultants:

  1. Start with a list of controls (ISO, COBIT, etc.) to check
  2. Walk down the list of controls
  3. When controls are insufficient, prioritize/rank/rate this “risk”

Without getting into all the problems and failures in this process, I just want to focus on the first step: start with a list of controls.  In other words, assume that the rules are static.  If we can figure out what stopped a person from doing something wrong in one instance, it must be good for all the others.  If the rules were static, we could apply things like “strong” password rules and continue to think that our network has a perimeter.  In this way, people incorrectly assume that whatever prevented people from doing bad things yesterday will perform the same way today.
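To caricature that first step (the control IDs, names and findings below are invented, not from any real checklist), the whole assessment reduces to a loop over a fixed list:

```python
# A caricature of the controls-based process: the list is fixed up front,
# and nothing in the loop ever questions the list itself.
control_checklist = [
    {"id": "CTRL-01", "name": "Password complexity rules", "sufficient": True},
    {"id": "CTRL-02", "name": "Network perimeter firewall", "sufficient": False},
    {"id": "CTRL-03", "name": "Patch management", "sufficient": False},
]

# Step 2: walk the list. Step 3: whatever is "insufficient" becomes a ranked "risk".
findings = [c for c in control_checklist if not c["sufficient"]]
for rank, finding in enumerate(sorted(findings, key=lambda c: c["id"]), start=1):
    print(f"{rank}. Risk: {finding['name']} ({finding['id']}) is insufficient")

# Note what never happens here: nothing asks whether the checklist itself
# still matches how attackers actually operate today.
```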

Relying on controls for our security approach is becoming a bigger and bigger problem.  It’s getting worse because the industry is being driven more and more by compliance, and compliance is all about assessing controls.  But think about this: how do we know when to update those controls?  How do we know that web application firewalls may be appropriate?  How are new controls ever introduced?  I’m pretty sure it’s not from just looking at the list of controls.

Infosec is a chess game without structure: it’s about making decisions against a rational opponent who will adapt their actions, even change the rules, based on the decisions we just made.  If we are to analyze and understand the security risks we’re facing, we have to play the game out several moves ahead.  We have to think in terms of “what if?” with a little “and then?” tossed in.  We can’t sit back and ask everyone to take off their shoes before allowing them to pass.

Don’t get me wrong, I don’t want to say controls-based assessments are the root of all our problems.  I think of controls-based assessments like incremental system backups: they’re great for a while because they’re efficient in both time and resources, but they get out of date, and nothing beats having a full backup every once in a while.  Taking a scenario-based approach to security enables an organization to reset its footing on level ground, which then allows the incremental, controls-based checks to continue adding value.  The controls have to be fresh; otherwise we diverge from reality and get stuck playing today’s game by yesterday’s rules.

Categories: General Security, Risk