
The Grand Unifying Definition of Risk

May 20, 2010

My plan is to walk through my current thinking on the complex field of risk management.  No way is this going to be a short post.

Ya know how some presentations begin with “Webster’s defines <topic> as …”?  Way back in time, when I started my frustrating journey with the concepts in risk management, I did the same thing with “risk” for my own benefit.  Try it sometime: go out and look for a definition of risk.  FAIR defines it as “probable frequency and probable magnitude…”, while NIST SP800-30 defines it as “a function” (of likelihood and impact), NIST IR 7298 defines it as a “level of impact…”, ISO 27005 refers to it as a “potential” and a Microsoft Risk Management Guide defines risk as “the probability of a vulnerability being exploited”.  One of the better ones I’ve seen recently comes from ISO 31000, which defines risk as the “effect of uncertainty on objectives”; me likey that one (it is the impetus for this writing).

But what the hackslap are we trying to measure here?  Are we trying to measure an effect? A probability/magnitude/consequence?  Short answer is a very firm and definite maybe.

Finally, after years of sleepless nights trying to slay this beast I think I’ve come far enough to put a stake in the ground.  My screw-you-world-I-will-do-it-myself attitude comes through for me with this, my grand unifying definition of risk:

Risk is the uncertainty within a decision.

Where’s the likelihood, you ask?  I’ll get to that, but what I like about this is that it’s high level enough to share with friends.  Keeping the function or formula out of the definition (unlike 95% of the definitions out there) makes it portable.  This type of definition can be passed between practitioners of Octave, NIST, FAIR and others.  There are two parts to my definition: the uncertainty and the decision.  The term “uncertainty” doesn’t sit entirely well with me, so depending on the audience I’ll throw in “probability and uncertainty” or just leave it at “probability”.

On Uncertainty

What I mean by uncertainty is the combination of our own confidence/ability/limitations on our estimation of the probability of a series of complex and interdependent events.  Quite simply, the first part of my formula includes people and mathematical models, with the people representing our own limitations, biases, ingeniousness, irrationalities, fears and adaptability, and the math representing the risk models and formulas most people consider risk analysis.  Risk models are just a component of the first part of my definition – they are a strong contributing component, but really they are in here to support the second part, the decision.

Factoring in the people means that information risk requires an understanding of behavioral economics, psychology, and game theory (to name a few), because we’re not going to understand the efficacy of our own assessments, nor are we going to effectively address risk, if we don’t account for the human element.  While most assessments focus on the technology, meaningful change in that technology can only be influenced by people and through people – the people that thought it up, created it, installed it, tested it and eventually use it and break it.  We need to account for the human element; otherwise we’re destined for mediocrity.

On Decisions

The other important consideration is the context of risk, and I haven’t come across an instance of risk analysis that wasn’t performed to assist in some type of decision process.  That means that we get to drag all of the decision sciences into this funball of risk.  To simplify what I mean: we need to understand the influence of how we frame our problems, we need to gather the right kinds of data (largely from the uncertainty portion) and identify options.  From there we need to come to some kind of conclusion, execute on it and finally (perhaps most importantly) we need feedback.  We need a way to measure the influence our decisions had so that we may learn from our decisions and improve on them over time.  Feedback is not (just) part of the risk model, it’s part of the entire decision process.

And now I’ll make up a formula, just to mess with people:

Risk = Decision(People(Math)) or wait, how about People(Decision(Math)).  The concept is that the risk models are an input into the Decision process.  I think it goes without saying that the models should never become the decision.  And we cannot forget that every part of this process may be greatly influenced by the human element, from the inputs into the models, to the models themselves, to the execution of and feedback on the decisions.
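
To make the composition a little more concrete, here is a toy sketch in Python.  Everything in it is invented for illustration (the function names, the numbers, the threshold); it isn’t FAIR, NIST or anybody’s real model, just the shape of the idea: the math produces an estimate, the people widen it with their own confidence and biases, and the decision consumes that estimate without ever becoming it.

# Toy sketch of Risk = Decision(People(Math)); all names and numbers are made up.

def math(scenario):
    # The risk model: turn a scenario into an estimated annual loss (toy formula).
    return scenario["frequency"] * scenario["magnitude"]

def people(model_output, confidence):
    # The human element: our confidence, biases and limitations turn a point
    # estimate into a range of "what we'd bet on" versus "what we can't rule out".
    return (model_output * confidence, model_output / confidence)

def decision(estimate_range, tolerance):
    # The decision process: the model output is an input, never the answer.
    low, high = estimate_range
    if high > tolerance:
        return "mitigate, then come back for feedback"
    return "accept, but keep measuring"

scenario = {"frequency": 0.5, "magnitude": 200000}   # toy inputs
print(decision(people(math(scenario), confidence=0.6), tolerance=150000))

Run it and you get a call to action rather than a number, which is the whole point: the number only matters in the context of the decision it feeds.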

On Dasher, on Prancer

There is a huge risk within my definition of risk for paralysis.  There are many other seemingly disjointed fields of study that are intertwined here, each at its own level of development and maturity.  I think we’re on the right path: we’re trying things, we’re out there talking about risk and plugging in risk models, creating and adapting methodologies, we’re making decisions and every once in a while we stumble on some feedback.  I’m not suggesting that we stop and re-evaluate the whole thing.  On the contrary, we should continue full steam ahead, but with constant re-assessment and questioning.  I’m not entirely optimistic that we’ll ever get to some grand unified theory of risk in my lifetime, nor am I optimistic that any one definition will necessarily stick, but that doesn’t mean I’m not going to try.

One last point: I don’t want to say that risk is an infosec-only concern.  Far from it.  Information security risk is but one tiny offshoot of risk and we have much to learn from (and contribute to) other areas.  We need to be on a constant lookout for giants so that we can stand on their shoulders.

Categories: Risk
  1. Jack
    May 22, 2010 at 8:58 pm

    Hi Jay! Great site!

    You may want to check into what has already been done to slay this beast:

    http://www.riskmanagementinsight.com/media/documents/FAIR_Introduction.pdf

  2. May 23, 2010 at 9:38 am

    @Jack
I have a fundamental problem with FAIR, which is that I can’t implement it. There is enough material available to make me think it’s good, but not enough material available to actually know it is. The introduction doc you linked explains FAIR but I can’t turn around and implement FAIR — or even attempt to do so. That makes it a good theory and nothing more to me at this point.
    Plus, this post is trying to explain that managing risk is more than any one analysis method, that analysis is simply one part of making decisions. There are other fields of research that help our learning and growth towards making better decisions involving our information risks.

  3. Jack
    May 23, 2010 at 3:29 pm

    Hi Jay!

It’s good you are familiar with FAIR. You don’t have to implement it: that’s already been done. I use the FAIRlite spreadsheet analysis tool all the time. But, even if you didn’t want to acquire it from Jack Jones, you could still program your own Monte Carlo functions in Excel utilizing the taxonomy in the whitepaper. Qualitative methods, I imagine, seem more “implementable” because there’s nothing to do: just throw your biases and feelings into the analysis and blamo! you’re done. Quantitative takes some more effort.

FAIR has also benefited from a lot of decision analysis theory in the past few years. Most notable is the use of Applied Information Economics (AIE) from Douglas Hubbard. Advanced FAIR analysis does variability analysis and decision-making analysis as well.

In any case, your point about other fields is well taken and on point: InfoSec risk isn’t unique. This has all been done before. My point is nothing more than that: somebody has done a lot of this legwork already in FAIR.

  4. May 23, 2010 at 9:16 pm

    Jack… Awesome, Awesome, Awesome.
    So why is it that I have to start a blog to find this out? Why is there nobody in my immediate circle of infosec/risk wonks talking about this? (rhetorical questions)

I am more excited than ever to be in this field and I’ve got two questions:
    — How can a slacker like me learn as much as I can and adopt this methodology in daily activities?
    — How do I take this back to the other people I know and work with?

    Again, these are largely rhetorical questions and I’m already pursuing the first. The second is much more about corporate culture and maturity levels than a logical sell of calibrated estimators.

    I love this quote by the way:
“Qualitative methods, I imagine, seem more “implementable” because there’s nothing to do: just throw your biases and feelings into the analysis and blamo! you’re done.”

    That’s gotta be the best use of “blamo” I’ve seen in a while…

  5. Jack
    May 23, 2010 at 9:45 pm

    I’m glad you enjoyed my quote 🙂 And your circle is now larger since we’ve done the social media linkup things 🙂

    Your second question is difficult. It takes a lot of patience and constant effort to push back against “best practices” and the “this is always how we’ve done its.”

In the past four weeks I’ve had to rebuff a “risk assessment” that was really a vulnerability assessment (they didn’t talk with anybody or look at anything, just a big list of “possibles”), debunk a policy that used Attack Trees as a risk assessment methodology, suggest that an InfoSec group break out of IT and talk to the operational risk group, and go to the mat on why ordinal scales were a bad tool for risk. The room always goes quiet when I’m done, for what I can only imagine is an “Emperor Has No Clothes” moment.

    I heard a great quote that I thought I’d share: All measurements have error, but your qualitative measure of my quantitative one has even more error.

This was Douglas Hubbard in a webinar I watched. His books “How to Measure Anything” and “The Failure of Risk Management” are great places to start.

  6. May 25, 2010 at 10:43 am

For the majority of systems, risk is the permanent impairment of capital. Uncertainty does not have much to do with this, and most people in risk management confuse and conflate the two. You cannot really manage uncertainty, but you can manage risk. As usual, the best thinking on this comes from outside of infosec:

    http://1raindrop.typepad.com/1_raindrop/2007/11/dhandho-infosec.html

    and

    http://www.simoleonsense.com/miguel-barbosa-interviews-james-montier-part-1-value-investing-tools-techniques-for-intelligent-investing/

James Montier: Sure. Modern risk management is a farce; it is pseudoscience of the worst kind. The idea that the risk of an investment, or indeed, a portfolio of investments can be reduced to a single number is utter madness. In essence, the problem with risk management is that it assumes that volatility equals risk. Nothing could be further from the truth. Volatility creates opportunity. For instance, was the stock market more risky in 2007 or 2009? According to views of risk managers, 2007 was the less risky year: it had low volatility, which they happily fed into their risk models and concluded (falsely) that the world was a safe place to take risk. In contrast, these very same risk managers were saying that the world was exceptionally risky in 2009, and that one should be cutting back on risk. This is, of course, the complete opposite to what one should have been doing. In 2007, the evidence of a housing/credit bubble was plain to see; this suggested risk, valuations were high, it was time to scale back exposure. In 2009, bargains abounded; this was the perfect time to take ‘risk’ on, not to run away. Risk managers are the sorts of fellows that lend out umbrellas on fine days, and ask for them back when it starts to rain.

  7. May 25, 2010 at 9:21 pm

Gunnar — You’re mostly right; shame on me for trying to redefine a common-use term like “uncertainty”. My intention was to select a word that represented “the combination of our own confidence/ability/limitations on our estimation of the probability of a series of complex and interdependent events” and we could tack on “that results in a permanent impairment of capital” (which I think is a fancy way to say “loss”).
    The main point I’m trying to make is that risk is more a decision process than just a set of models and graphs.
Question, though: if we look at a group of people who appear to be more risk-evolved than others, and have some science/skill around managing risk (claiming more than intuition-based risk decisions), then they should have a measurable property: the skills to manage risk should be transferable. That is, any above-average financial analyst/investor/firm should spawn above-average apprentices consistently. Do we see that? Do the rock stars of finance spawn better than average protégés? (I actually don’t know the answer, just thinking that would be something to warrant belief)

  8. May 26, 2010 at 7:28 am

Jay – one other mistake people frequently make is that they use the word risk without a qualifier.

Here is Marty Whitman’s “Distress Investing” on why no one should ever use the word risk without an adjective:

‘Risk is not a meaningful concept unless modified by an adjective. There exist market risk, investment risk, Chapter 11 reorganization risk, credit risk, failure to match maturities risk, hurricane risk, terrorism risk, and so forth; but it is not really useful to look at general risk. When risk is discussed in conventional academic finance, the subject is almost always market risk (i.e. fluctuations in market prices). Beta, alpha, and the capital asset pricing model (CAPM) are based on market prices. We ignore market risk and focus on investment risk, especially in distress investing (i.e. the probabilities of something going wrong with the company and/or the securities issued by the company).
    For us there is no risk-reward ratio. A risk-reward ratio exists where price is in equilibrium. In that instance, risk and reward for securities are measured by two variables:

    1. Quality of the issuer.
    2. Terms of the issue.

    The higher the quality and the more senior the terms, the less the risk and the smaller the potential for gain. Introducing price turns the risk-reward ratio on its head. The lower the price, the less the risk of loss and the greater the prospect for gain.’

So in infosec we have project risk, financial risk, availability risk, throughput risk, reputation risk, and so on. One of my goals for 2010 is to exorcise the use of the word risk (without an appropriate qualifier) from the infosec profession.

  9. May 26, 2010 at 8:30 am

    Gunnar – Great point. I’ll do what I can to help you to reach your goal for 2010 and minimize posts like “four myths of risk” and in the future say “four myths of infosec risk” or some other appropriate qualifier. Unfortunately, I have to start out the next paragraph with “But…”
But there are some universal attributes of risk that apply to all the qualifiers. I think they are all within the context of a decision process: whether a person is considering how to invest millions, have one more piece of cake or enable plaintext FTP across the internet — they are all seeking ways to maximize reward and minimize loss. However, the benefit of talking about unqualified risk mostly stops there; to your point, in order to talk about risk in any meaningful way there does need to be a qualifier to create the context.
