My plan is to walk through my current thinking on the complex field of risk management. No way is this going to be a short post.
Ya know how some presentations begin with “Webster’s defines <topic> as …”? Way back, when I started my frustrating journey through the concepts of risk management, I did the same thing with “risk” for my own benefit. Try it sometime: go out and look for a definition of risk.

- FAIR defines it as the “probable frequency and probable magnitude…”
- NIST SP 800-30 defines it as “a function” (of likelihood and impact)
- NIST IR 7298 defines it as a “level of impact…”
- ISO 27005 refers to it as a “potential”
- a Microsoft Risk Management Guide defines risk as “the probability of a vulnerability being exploited”

One of the better ones I’ve seen recently comes from ISO 31000, which defines risk as the “effect of uncertainty on objectives”; me likey that one (it is the impetus for this writing).
But what the hackslap are we trying to measure here? Are we trying to measure an effect? A probability, a magnitude, a consequence? The short answer is a very firm and definite maybe.
Finally, after years of sleepless nights trying to slay this beast I think I’ve come far enough to put a stake in the ground. My screw-you-world-I-will-do-it-myself attitude comes through for me with this, my grand unifying definition of risk:
Risk is the uncertainty within a decision.
Where’s the likelihood, you ask? I’ll get to that, but what I like about this is that it’s high-level enough to share with friends. Keeping the function or formula out of the definition (which is how 95% of definitions are built) makes it portable: this kind of definition can be passed between practitioners of OCTAVE, NIST, FAIR and others. There are two parts to my definition: the uncertainty and the decision. The term “uncertainty” doesn’t sit entirely well with me, so depending on the audience I’ll say “probability and uncertainty” or just “probability”.
What I mean by uncertainty is the combination of our own confidence, ability and limitations in estimating the probability of a series of complex and interdependent events. Quite simply, the first part of my definition includes people and mathematical models: the people representing our limitations, biases, ingenuity, irrationalities, fears and adaptability, and the math representing the risk models and formulas most people think of as risk analysis. Risk models are just one component of the first part of my definition. They are a strong contributing component, but really they are here to support the second part: the decision.
Factoring in the people means that information risk requires an understanding of behavioral economics, psychology and game theory (to name a few). We’re not going to understand the efficacy of our own assessments, nor effectively address risk, if we don’t account for the human element. While most assessments focus on the technology, meaningful change in that technology can only be made by people and through people: the people who thought it up, created it, installed it, tested it, and eventually use it and break it. Ignore the human element and we’re destined for mediocrity.
The other important consideration is the context of risk: I haven’t come across an instance of risk analysis that wasn’t performed to assist some type of decision process. That means we get to drag all of the decision sciences into this funball of risk. To simplify what I mean: we need to understand how the framing of a problem influences us, we need to gather the right kinds of data (largely from the uncertainty portion), and we need to identify options. From there we need to come to some kind of conclusion, execute on it, and finally (perhaps most importantly) we need feedback. We need a way to measure the influence our decisions had, so that we can learn from them and improve over time. Feedback is not (just) part of the risk model; it’s part of the entire decision process.
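The decision process above can be sketched as a loop. This is purely illustrative — the function names, toy numbers and options below are my own invention, not part of any standard or methodology:

```python
# Illustrative sketch of the decision loop: frame -> gather data ->
# identify options -> decide/execute -> feedback. All names and
# numbers are hypothetical, chosen only to show the shape of the loop.

def frame_problem(objective):
    """Step 1: how we frame the problem shapes everything downstream."""
    return {"objective": objective}

def gather_data(decision):
    """Step 2: collect estimates -- this is where the uncertainty lives."""
    decision["estimates"] = {"likelihood": 0.3, "impact": 100_000}  # toy numbers
    return decision

def identify_options(decision):
    """Step 3: lay out the choices the analysis is meant to inform."""
    decision["options"] = ["accept", "mitigate", "transfer", "avoid"]
    return decision

def decide_and_execute(decision):
    """Step 4: conclude and act -- driven by the model *and* the people."""
    decision["chosen"] = "mitigate"
    return decision

def measure_feedback(decision):
    """Step 5: the part we most often skip -- did the decision work?"""
    decision["feedback"] = "compare outcomes against expectations"
    return decision

decision = frame_problem("protect customer data")
for step in (gather_data, identify_options, decide_and_execute, measure_feedback):
    decision = step(decision)
```

The point of the shape, not the specifics: feedback is a step in the loop itself, not a bolt-on to the risk model.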
And now I’ll make up a formula, just to mess with people:
Risk = Decision(People(Math)). Or wait, how about People(Decision(Math))? The concept is that the risk models are an input into the decision process; it should go without saying that the models must never become the decision. And we cannot forget that every part of this process may be greatly influenced by the human element: the inputs into the models, the models themselves, and the execution of and feedback on the decisions.
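To make the nesting concrete, here is a toy rendering of People(Decision(Math)). Every name, threshold and number is hypothetical; the only thing it is meant to show is the structure — the model’s output is an input to the decision, and the human element wraps the whole thing:

```python
# Toy rendering of People(Decision(Math)). Names and numbers are
# illustrative only; the structure is the point.

def math(inputs):
    """The risk model: turns estimates into a number."""
    return inputs["likelihood"] * inputs["impact"]

def decision(model_output):
    """The model informs, but never becomes, the decision."""
    return "mitigate" if model_output > 10_000 else "accept"

def people(choice):
    """The human element: biases, fears and adaptability can bend
    every stage -- in practice it is rarely this transparent."""
    return choice

inputs = {"likelihood": 0.3, "impact": 100_000}  # toy estimates
risk_informed_choice = people(decision(math(inputs)))
```

Note that swapping the nesting to Decision(People(Math)) just moves where the human element sits; either way, the model is buried innermost and never stands alone as the answer.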
On Dasher, on Prancer
There is a huge risk of paralysis within my definition of risk. Many other seemingly disjointed fields of study are intertwined here, each at its own level of development and maturity. I think we’re on the right path: we’re trying things, we’re out there talking about risk, plugging in risk models, creating and adapting methodologies, making decisions, and every once in a while we stumble onto some feedback. I’m not suggesting that we stop and re-evaluate the whole thing. On the contrary, we should continue full steam ahead, but with constant re-assessment and questioning. I’m not entirely optimistic that we’ll ever get to some grand unified theory of risk in my lifetime, nor that any one definition will necessarily stick, but that doesn’t mean I’m not going to try.
One last point: I don’t want to suggest that risk is infosec-centric. Far from it. Information security risk is but one tiny offshoot of risk, and we have much to learn from (and contribute to) other areas. We need to be on constant lookout for giants so that we can stand on their shoulders.