
Why Risk = Threat and Vulnerability and Impact

August 23, 2010

Jeff Lowder wrote up a thought-provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense,” and I wanted to get my provoked thoughts into print (and hopefully out of my head). I’m not going to disagree with Jeff for the most part. I’ve had many a forehead-palming moment seeing literal interpretations of that statement.

Threats, Vulnerabilities, Impact

As most everyone in ISRA is prone to do, I want to redefine/change those terms first and then make a point. I’d much rather focus on the point than on the terms themselves, but bear with me. When I redefine/change those terms, I don’t think I’ll be saying anything different from Jeff, but I will be making them clearer in my own head as I talk about them.

In order for a risk to be realized, a force (threat) must overcome resistance (vulnerability), causing impact (bad things).

We are familiar with measuring forces and resistances (resistance is a force in the opposite direction), which is why we see another abused formula: Risk = Likelihood * Impact. Because threat and vulnerability are both forces, they may easily be combined into this new “likelihood” (or whatever term represents that concept). And now here is the point:

For a statement of risk to have meaning, the measurements of threat, resistance, and impact cannot be combined or simplified.

I’ll use acceleration as an example: acceleration is measured as the change in the speed and direction of something over time. Three distinct variables are used to convey what acceleration is. We cannot multiply speed by direction. We cannot derive some mathematical function that simplifies speed and direction into a single number. It is quite simply stated as distinct variables, and meaning is derived from the combination of them. The same is true with a measurement of risk: we cannot combine the threat and the resistance to it and still maintain our meaning.

For example, a skilled attacker applying force to a system with considerable resistance is not at all the same thing as my 8-year-old running Metasploit against an unpatched system. Yet if we attempt to combine these two scenarios, we may end up with the same “likelihood,” even though they very clearly are different components of a risk with different methods of reducing each risk.
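To make that loss of meaning concrete, here is a minimal sketch in Python. The 1-to-10 scales and the combine function are hypothetical placeholders of my own, not anything from Jeff’s post or any real scoring scheme:

```python
# Hypothetical 1-to-10 scales, invented purely for illustration.
def combine(threat, resistance):
    """Naively collapse threat and resistance into a single 'likelihood'."""
    return threat / resistance

# Skilled attacker (high force) against a hardened system (high resistance).
skilled_vs_hardened = combine(threat=9, resistance=9)

# An 8-year-old running Metasploit (low force) against an unpatched box
# (low resistance).
kid_vs_unpatched = combine(threat=1, resistance=1)

# Both collapse to the same number (1.0), even though the scenarios call
# for entirely different mitigations. The distinction is lost in the combining.
print(skilled_vs_hardened == kid_vs_unpatched)  # True
```

Whatever combining function we pick, two very different scenarios can land on the same score, and the information we need to choose a mitigation for each one is gone.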

On Risk Models

Since any one system has multiple risks, saying that risk components cannot be combined or simplified is problematic. Most decision makers I’ve known really likey-the-dashboard. We want to be able to combine statements of risk with one another to create a consumable and meaningful input into the decision process. Enter the risk model. Since the relationships between risks, and between the components that make up a risk, are complex, we want to do our best to estimate or simulate that combination. If we could account for every variable and interaction we’d do so, but we can’t. So we model reality, seek out feedback to see how it worked, then (in theory) we learn, adapt, and try modeling again.

We cannot simplify a statement of risk into a single number, but we can state the components of a risk as the probability that a force will overcome resistance, with a probable impact.
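As a rough sketch of how a model can do that, the toy Monte Carlo simulation below keeps threat, resistance, and impact as distinct variables and combines them only through their interaction. Every distribution and parameter in it is an assumption invented for illustration, not a value from any real assessment:

```python
import random

random.seed(1)
TRIALS = 100_000

def one_trial():
    # Each component keeps its own distribution; nothing is pre-multiplied.
    # (All parameters below are made up purely for illustration.)
    force = random.gauss(mu=5.0, sigma=2.0)        # threat: applied force
    resistance = random.gauss(mu=6.0, sigma=1.5)   # vulnerability: resisting force
    if force > resistance:                         # force overcomes resistance
        return random.lognormvariate(10.0, 1.0)    # impact: loss in dollars
    return 0.0                                     # no loss this trial

losses = [one_trial() for _ in range(TRIALS)]
overcome = sum(1 for loss in losses if loss > 0)

print(f"P(force overcomes resistance): {overcome / TRIALS:.1%}")
print(f"Mean loss across trials: ${sum(losses) / TRIALS:,.0f}")
```

The printed numbers are beside the point; what matters is that the threat, resistance, and impact measurements stay distinct all the way into the model, so nothing forces us to multiply them together up front.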

We want to be aware of how the components of risk do or don’t interact and account for that in our risk models.  That’s where the secret sauce is.

Categories: Risk
  1. alex
    August 25, 2010 at 9:58 pm

    jay,

    even if we had multiplication of T, V, and I measurements that were in a proper scale, aren’t we still trying to make point predictions for outcomes of a complex system?

  2. August 26, 2010 at 9:00 am

    Alex – yes and no. As I tried to break out in the SOIRA mailing list, I see using risk *models* as a way to simplify and combine statements of risks in order to compare one complex system to another. Or perhaps one complex subsystem to another. The purpose being comparison for allocation of resources. That’s a little more squishy than what I would consider a point prediction. But each individual statement of risk (risk vector) would be a single point of prediction about a slice of a complex system.
    However, since catastrophic failure in complex systems requires multiple failures, we use modeling to combine our point predictions and merely approximate the failures of complex systems… with varying degrees of error and effectiveness.

  3. August 26, 2010 at 9:10 am

    So, taking Chris Hayes’ preso as an example: it doesn’t matter if the ALE statements he’s using are/were correct; the fact that he’s identifying patterns based on these (false) posterior values is worth the effort?

  4. August 26, 2010 at 11:16 am

    I may be missing your point, ‘specially on what “correct” is. The detail/accuracy of the data should support the type of decision being made. If the only decision support is for macro comparison, then consistency may be enough (even incorrect consistency). But if the macro decision is also to identify the quantity of resources to apply, then accuracy becomes more important. If the decision is at the micro level, we may not want to combine/simplify/model the TVI values, and the ALE statements being correct may be more critical.

  5. August 26, 2010 at 12:04 pm

    By correct, I mean that, according to our current abilities, we can actually mathematically accomplish what we say we are accomplishing (regardless of detail/accuracy). Like multiplication of ordinal values: we (infosec) do that all the time (OWASP RRM, CVSS, etc.), though the results are meaningless not from an informative standpoint, but rather from a “the universe demands that you cannot do that” standpoint.

    I was looking for your “incorrect consistency” statement. I’m wondering if some amount of Garbage In is OK and there is still meaning as an outcome. It’s not necessarily the “R” in the equation that is interesting, but the T, V, and I as they present patterns based on the meta-data we use to derive them (if any exists).

  6. August 26, 2010 at 12:20 pm

    Alex:

    It’s not necessarily the “R” in the equation that is interesting, but the T, V, and I as they present patterns based on the meta-data we use to derive them (if any exists).

    Amen to that.
    I always go back to the decision being supported. If we’re trying to decide whether or not to wear a warm coat outside, we can tolerate some “garbage in” data. But if we’re trying to decide whether or not to let the beer sit in the unheated garage, tolerance for “garbage in” data and how we process it becomes a much bigger factor.
    http://wiki.answers.com/Q/What_is_the_freezing_point_of_beer
