There are about as many definitions of risk as there are people you can ask, and I’ve spent far too much energy pursuing this elusive definition. But I think I can say I’ve reached a good place. After all my reading, pontification, and discussion, I feel that I am ready to answer the deceptively simple question “how do you define risk?” with this very simple answer:
I don’t know.
Oh, I can toss things out there like “the probable frequency and probable magnitude of future loss” from the FAIR methodology. I could also wax philosophical about how I *mostly* agree with Douglas Hubbard’s well-developed definition of “A state of uncertainty where some of the possibilities involve a loss” (note: I *mostly* agree just to pretend that I know something Mr. Hubbard doesn’t).
But if I don’t know, how can I say that I’ve reached a good place pursuing a risk definition? Because I have accepted the ambiguity, and I’ve realized that terminology and definitions exist simply to help communicate concepts or ideas. That’s where we should be spending our efforts: behind the definitions. In that light, I have come to believe that definitions don’t have to be 100% right; they simply have to be helpful. Take the definition of risk from ISO 31000: “the effect of uncertainty on objectives”. That sounds cool, even after thinking about it for a while, but when it comes to being helpful? Nope, not even close. I may have an objective of defining risk, and I’m immersed in uncertainty, but I wouldn’t call the effect of that uncertainty “risk”. If anything, that definition leaves me more confused than when I started.
There’s some good news though: problems in defining central terms aren’t unique to risk. Take this from Melanie Mitchell:
In 2004 I organized a panel discussion on complexity at the Santa Fe Institute’s annual Complex Systems Summer School. It was a special year: 2004 marked the twentieth anniversary of the founding of the institute. The panel consisted of some of the most prominent members of the SFI faculty…all well-known scientists in fields such as physics, computer science, biology, economics and decision theory. The students at the school…were given the opportunity to ask any question of the panel. The first question was, “How do you define complexity?” Everyone on the panel laughed, because the question was at once so straightforward, so expected, and yet so difficult to answer.
She goes on in her book to say “Isaac Newton did not have a good definition of force” and “geneticists still do not agree on precisely what the term gene refers to at the molecular level.”
I take comfort in these stories: we are not unique, we are not alone.
As we move forward in the pursuit of information risk, let’s stay focused on where the real work should be done: measuring and communicating risk. Let’s put a little less effort into defining it just yet. Don’t get me wrong, definitions are helpful, but let’s not get all wrapped up in the precision of words when we’re still struggling with the concepts they are describing.
Jeff Lowder wrote up a thought-provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense”, and I wanted to get my provoked thoughts into print (and hopefully out of my head). I’m not going to disagree with Jeff for the most part. I’ve had many a forehead-palming moment seeing literal interpretations of that statement.
Threats, Vulnerabilities, Impact
As most everyone in ISRA is prone to do, I want to redefine/change those terms first off and then make a point. I’d much rather focus on the point than the terms themselves, but bear with me. When I redefine/change those terms, I don’t think I’ll be saying anything different from Jeff, but I will be making them clearer in my own head as I talk about them.
In order for a risk to be realized, a force (threat) must overcome a resistance (vulnerability), causing impact (bad things).
We are familiar with measuring forces and resistances (resistance is a force in the opposite direction), which is why we see another abused formula: Risk = Likelihood * Impact. Because threat and vulnerability are both forces, they seem easy to combine into this new “likelihood” (or insert whatever term represents that concept). And now here is the point:
For a statement of risk to have meaning the measurement of threat, resistance and impact cannot be combined nor simplified.
I’ll use acceleration as an example. Acceleration is measured as the change in an object’s speed and direction over time. Three distinct variables are used to convey what acceleration is. We cannot multiply speed by direction. We cannot derive some mathematical function that simplifies speed and direction into a single number. It is quite simply stated as distinct variables, and meaning is derived from the combination of them. The same is true with a measurement of risk: we cannot combine the threat and the resistance to it and still maintain our meaning.
For example, a skilled attacker applying force to a system with considerable resistance is not at all the same thing as my 8-year-old running metasploit against an unpatched system. Yet if we attempt to combine these two scenarios, we may end up with the same “likelihood”, even though they are very clearly different components of a risk, with different methods of reducing each.
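To make that loss of meaning concrete, here’s a toy sketch in Python. The numbers and the combining function are entirely made up for illustration; the point is only that two very different scenarios can collapse to an identical “likelihood” once threat and resistance are mashed into one scalar:

```python
# Toy illustration (all numbers invented): two very different scenarios
# that become indistinguishable once threat and resistance are combined
# into a single "likelihood" scalar.

def combined_likelihood(threat_strength, resistance_strength):
    """A naive combination: the share of total force belonging to the threat."""
    return threat_strength / (threat_strength + resistance_strength)

# Scenario A: skilled attacker (strong force) vs. hardened system (strong resistance)
skilled_vs_hardened = combined_likelihood(threat_strength=9.0, resistance_strength=9.0)

# Scenario B: a kid running metasploit (weak force) vs. unpatched box (weak resistance)
kid_vs_unpatched = combined_likelihood(threat_strength=1.0, resistance_strength=1.0)

print(skilled_vs_hardened)  # 0.5
print(kid_vs_unpatched)     # 0.5 -- identical, yet the ways you'd reduce
                            # each risk are nothing alike
```

Once the components are collapsed, there is no way to recover whether the fix is “hire better defenders” or “patch the box” — the information is simply gone.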
On Risk Models
Since any one system has multiple risks, saying that risk components cannot be combined or simplified is problematic. Most decision makers I’ve known really likey-the-dashboard. We want to be able to combine statements of risk with one another to create a consumable and meaningful input into the decision process. Enter the risk model. Since the relationships between risks and the components that make up a risk are complex, we want to do our best to estimate or simulate that combination. If we could account for every variable and interaction we’d do so, but we can’t. So we model reality, seek out feedback to see how well it worked, then (in theory) we learn, adapt, and try modeling again.
We cannot simplify a statement of risk into a single number, but we can state the components of a risk as the probability that a force will overcome a resistance, along with a probable impact.
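One way to do that while still feeding the dashboard is Monte Carlo simulation: keep threat, resistance, and impact as separate inputs, and combine them only by simulating many possible years. This is a minimal sketch; every distribution and parameter below is an assumption invented for illustration, not calibrated data:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss(trials=100_000):
    """Monte Carlo sketch: threat, resistance, and impact stay distinct,
    and are combined only through simulation, never multiplied away.
    All distributions and parameters are illustrative assumptions."""
    losses = []
    for _ in range(trials):
        attempts = random.randint(0, 10)            # threat event frequency
        year_loss = 0.0
        for _ in range(attempts):
            force = random.uniform(0, 10)           # attacker capability
            resistance = random.uniform(0, 10)      # control strength
            if force > resistance:                  # force overcomes resistance
                year_loss += random.lognormvariate(8, 1)  # impact of one event
        losses.append(year_loss)
    return losses

losses = simulate_annual_loss()
expected_annual_loss = sum(losses) / len(losses)
p_any_loss = sum(1 for loss in losses if loss > 0) / len(losses)
print(f"expected annual loss: {expected_annual_loss:,.0f}")
print(f"probability of any loss in a year: {p_any_loss:.1%}")
```

Because the components stay distinct inside the model, you can ask “what if resistance improves?” or “what if threat frequency doubles?” and re-run — questions a pre-collapsed Likelihood × Impact score can’t answer.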
We want to be aware of how the components of risk do or don’t interact and account for that in our risk models. That’s where the secret sauce is.