Posts Tagged ‘Risk’

Risk Analysis is a Voyage

July 15, 2011
From the Society of Information Risk Analysts mailing list, I was introduced to the OWASP Risk Rating Methodology.  If you haven’t read about this methodology, I highly encourage you to do so.  There is a lot of material there to talk and think about.

To be completely honest, my first reaction is “what the fudge-cake is this crud?”  It symbolizes most every challenge I think we face with information security risk analysis methods.  However, my pragmatic side steps in and tries to answer a simple question, “Is it helpful?”  Because the one thing I know for certain is the value of risk analysis is relative and on a continuum ranging from really harmful to really helpful.  Compared to unaided opinion, this method may provide a better result and should be leveraged.  Compared to anything else from current (non-infosec) literature and experts, this method is sucking on crayons in the corner.  But the truth is, I don’t know if this method is helpful or not.  Even if I did have an answer I’d probably be wrong since its value is relative to the other tools and resources available in any specific situation.

But here’s another reason I struggle: risk analysis isn’t easy.  I’ve been researching risk analysis methods for years now and I feel like I’m just beginning to scratch the surface – the more I learn, the more I learn I don’t know. It seems that trying to make a “one-size-fits-all” approach always falls short of expectations; perhaps this point is better made by David Vose:

I’ve done my best to reverse the tendency to be formulaic.  My argument is that in 19 years we have never done the same risk analysis twice: every one has its individual peculiarities.  Yet the tendency seems to be the reverse: I trained over a hundred consultants in one of the big four management consultancy firms in business risk modeling techniques, and they decided that, to ensure that they could maintain consistency, they would keep it simple and essentially fill in a template of three-point estimates with some correlation.  I can see their point – if every risk analyst developed a fancy and highly individual model it would be impossible to ensure any quality standard.  The problem is, of course, that the standard they will maintain is very low.  Risk analysis should not be a packaged commodity but a voyage of reasoned thinking leading to the best possible decision at the time.

– David Vose, “Risk Analysis: A Quantitative Guide”

So here’s the question I’m thinking about: without requiring every developer or infosec practitioner to become an expert in analytic techniques, how can we raise the quality of risk-informed decisions?

Let’s think of the OWASP Risk Rating Methodology as a model, because, well, it is a model.  Next, let’s consider the famous George Box quote, “All models are wrong, but some models are useful.”  All models have to simplify reality at some level (thus never perfectly represent reality) so I don’t want to simply tear apart this risk analysis model because I can point out how it’s wrong.  Anyone with a background in statistics or analytics can point out the flaws.  What I want to understand is how useful the model is, and perhaps in doing that, we can start to determine a path to make this type of formulaic risk analysis more useful.
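For anyone who hasn’t clicked through, here’s a rough sketch of the arithmetic the OWASP method describes, as I read it: score a handful of 0–9 factors, average them into a likelihood and an impact rating, bucket each, and look up an overall severity.  The factor values, thresholds and table below are illustrative, not authoritative.

```python
# Rough sketch of the OWASP Risk Rating arithmetic as I read it: average a set of
# 0-9 factor scores for likelihood and for impact, bucket each average, then look
# up an overall severity.  Factor values and thresholds here are illustrative only.

def bucket(score):
    if score < 3:
        return "LOW"
    return "MEDIUM" if score < 6 else "HIGH"

def owasp_style_rating(likelihood_factors, impact_factors):
    likelihood = bucket(sum(likelihood_factors) / len(likelihood_factors))
    impact = bucket(sum(impact_factors) / len(impact_factors))
    severity = {
        ("LOW", "LOW"): "Note",    ("LOW", "MEDIUM"): "Low",       ("LOW", "HIGH"): "Medium",
        ("MEDIUM", "LOW"): "Low",  ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
        ("HIGH", "LOW"): "Medium", ("HIGH", "MEDIUM"): "High",     ("HIGH", "HIGH"): "Critical",
    }
    return likelihood, impact, severity[(likelihood, impact)]

# Made-up factor scores: eight likelihood factors, four impact factors.
print(owasp_style_rating([8, 4, 7, 6, 3, 9, 6, 2], [5, 3, 2, 4]))
```

Whether averaging ordinal scores like that is helpful or harmful is exactly the question I can’t yet answer.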

Risk analysis is a voyage; let’s get going.

Categories: Decisions, Risk

7 Steps to Risk Management Payout

March 27, 2011

I was thinking about the plethora of absolutely crappy risk management methods out there and the commonalities they all end up sharing.  I thought I’d help anyone wanting to either a) develop their own internal methodology or b) get paid for telling others how to do risk management.  For them, I have created the following approach which enables people to avoid actually learning about information risk, uncertainty, decision science, cognitive biases or anything else usually important in creating and performing risk analysis.
The beauty of this approach is that it’s foundational. When people realize that it’s not actually helpful, it’s possible to build a new-looking process by mixing up the terms, categories and breakdowns.  While avoiding learning, people can stay in their comfort zone and run the same approach over and over, in new ways each time.  Everyone will be glad for the improvements until those don’t work out, and then the re-inventing can occur all over again following this approach.

Here we go, the 7 steps to a risk management payout:

Step 1: Identify something to assess. 

Be it an asset, system, process or application.  This is a good area to allow permutations in future generations of this method by creating taxonomies and overly-simplified relationships between these items.

Step 2: Take a reductionist approach

Reduce the item under assessment into an incomplete list of controls from an external source.  Ignore the concept of strong emergence because it’s both too hard to explain and too hard for most anyone else to understand, let alone think is real.  Note: the list of controls must be from an external source because they’re boring as all get-out to create from scratch, and it gives the auditor/assessor an area to tweak in future iterations as well.  Plus, if this is ever challenged, it’s always possible to blame the external list of controls as being deficient.

Step 3: Audit Assess

Get a list of findings from the list of controls, but call them “risk items”.  In future iterations it’s possible to change up that term or even to create something called a “balanced scorecard” – it doesn’t matter what that is, just make something up that looks different than previous iterations and go on.  Now it’s time for the real secret sauce.

Step 4: Categorize and Score (analyze)

Identify a list of categories on which to assess the findings and score each finding based on the category, either High/Medium/Low or 1-5 or something else completely irrelevant.  I suggest the following two top-level categories as a base because it seems to capture what everyone is thinking anyway:

  1. A score based on the worst possible case that may occur; label this “impact” or “consequence” or something.  If it’s possible to bankrupt the entire company, rate it high, and rate it higher if it’s possible to create a really sensational chain of events that leads up to the worst-case scenario.  It helps if people can picture it.  Keep in mind that it’s not helpful to get caught up in probability or frequency – people will think they are being tricked with pseudo-science.
  2. A score based on media coverage; label this “likelihood” or “threat”.  The more breaches in the media that can be named, the higher the score.  In this category, it helps to tie the particular finding to the breach, even if it’s entirely speculative.

Step 5: Fake the science

Multiply, add or create a lookup table.  If a table is used, be sure to make it in color, with scary stuff being red, and remember there is no green color in risk.  If arithmetic is used, future variations could include weights or further breaking down the impact/likelihood categories. Note: Don’t get tangled up with proper math at this point; just keep making stuff up, it’s gotten us this far.
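And for the truly committed, here is a minimal sketch of what step 5 usually boils down to; every number, threshold and color below is made up, which is rather the point.

```python
# A sketch of the "science" in step 5: multiply two made-up ordinal scores, or look
# the pair up in a colorful table.  None of the numbers mean anything, by design.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def multiply_the_scores(impact, likelihood):
    return LEVELS[impact] * LEVELS[likelihood]           # 1..9, looks quantitative

def lookup_the_color(impact, likelihood):
    table = {
        ("Low", "Low"): "yellow",    ("Low", "Medium"): "orange",  ("Low", "High"): "red",
        ("Medium", "Low"): "orange", ("Medium", "Medium"): "red",  ("Medium", "High"): "red",
        ("High", "Low"): "red",      ("High", "Medium"): "red",    ("High", "High"): "really red",
    }
    return table[(impact, likelihood)]                    # note: no green anywhere in risk

print(multiply_the_scores("High", "Medium"), lookup_the_color("High", "Medium"))
```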

Step 6: Think Dashboard

Create categories from the output scores.  It’s not important that they be accurate.  Just make sure the categories are described with a lot of words.  The more words that can be tossed at this section, the less likely people will be to read the whole thing, making them less likely to challenge it.  Remember not to confuse decision makers with too many data points.  After all, they got to where they are because they’re all idiots, right?

Step 7: Go back and add credibility

One last step.  Go back and put acronyms into the risk management process being created.  It’s helpful to know what these acronyms mean, but don’t worry about what they represent; nobody else really knows either, so nobody will challenge it.  On the off chance someone does know these, just say it was more inspirational than prescriptive.  By combining two or more of these, the process won’t have to look like any of them.  Here are a couple of good things to cite as feeding this process:

  • ISO 31000, because nobody can argue with international standards
  • COBIT or anything even loosely tied to ISACA – they’re all certified, and no, it doesn’t matter that COBIT is more of a governance framework
  • AS/NZS 4360:2004, just know it’s from Australia/New Zealand
  • NIST SP 800-30 and 800-39, use them interchangeably
  • And finally, FAIR, because all the cool kids talk about it and it’s got street cred

And there ya have it, 7 steps to a successful Risk Management Methodology.  Let me know how these work out and what else can be modified so that all future promising young risk analysis upstarts can create a risk analysis approach without being confused by having to learn new things.  The real beauty here is that people can do this simple approach with whatever irrelevant background they happen to have.  Happy risking!

Categories: Humor, Risk

Why Risk = Threat and Vulnerability and Impact

August 23, 2010

Jeff Lowder wrote up a thought-provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense”, and I wanted to get my provoked thoughts into print (and hopefully out of my head).  I’m not going to disagree with Jeff for the most part.  I’ve had many a forehead-palming moment seeing literal interpretations of that statement.

Threats, Vulnerabilities, Impact

As most everyone in ISRA is prone to do, I want to redefine/change those terms first off and then make a point.  I’d much rather focus on the point than the terms themselves, but bear with me.  When I redefine/change those terms, I don’t think I’ll be saying anything different from Jeff but I will be making them clearer in my own head as I talk about them. 

In order for a risk to be realized, a force (threat) must overcome resistance (vulnerability), causing impact (bad things).

We are familiar with measuring forces and resistances (resistance is a force in the opposite direction), which is why we see another abused formula: Risk = Likelihood * Impact.  Threat and vulnerability are both forces, so they may be easily combined into this new “likelihood” (or insert whatever term represents that concept).  And now here is the point:

For a statement of risk to have meaning, the measurements of threat, resistance and impact cannot be combined or simplified.

I’ll use acceleration as an example: acceleration is measured as the change in the speed and direction of something over time.  There are three distinct variables that are used to convey what acceleration is.  We cannot multiply speed and direction.  We cannot derive some mathematical function to simplify speed and direction into a single number.  It quite simply is stated as distinct variables, and meaning is derived from the combination of them.  The same is true with a measurement of risk: we cannot combine the threat and the resistance to it and still maintain our meaning.

For example, a skilled attacker applying a force to a system with considerable resistance is completely not the same thing as my 8-year-old running Metasploit against an unpatched system.  Yet if we attempt to combine these two scenarios we may end up with the same “likelihood”, even though they are very clearly different components of a risk with different methods of reducing each risk.

On Risk Models

Since any one system has multiple risks, saying that risk components cannot be combined or simplified is problematic.  Most decision makers I’ve known really likey-the-dashboard.  We want to be able to combine statements of risk with one another to create a consumable and meaningful input into the decision process.  Enter the risk model.  Since the relationships between risks and the components that make up a risk are complex, we want to do our best to estimate or simulate that combination.  If we could account for every variable and interaction we’d do so, but we can’t.  So we model reality, seek out feedback and see how it worked, then (in theory) we learn, adapt and try modeling again. 

We cannot simplify a statement of risk into a single number, but we can state the components of risk as a probability that a force will overcome resistance, with a probable impact.

We want to be aware of how the components of risk do or don’t interact and account for that in our risk models.  That’s where the secret sauce is.
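To make that concrete, here’s a toy sketch of a risk statement that keeps the components distinct; the field names and the 0–1 scale are my own illustration, not anything standardized.

```python
# A toy sketch of a risk statement that keeps its components distinct instead of
# collapsing them into one score.  Field names and scales are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskStatement:
    threat: str                    # the force, e.g. a skilled attacker or a kid with Metasploit
    p_overcome_resistance: float   # probability the force overcomes the resistance (vulnerability)
    probable_impact: str           # what probably happens if it does

skilled = RiskStatement("skilled attacker vs. hardened system", 0.3, "loss of customer records")
kiddo = RiskStatement("kid with Metasploit vs. unpatched box", 0.9, "single host compromised")

# Collapsing these into one "likelihood x impact" number would hide the fact that
# the two scenarios call for entirely different treatments.
for r in (skilled, kiddo):
    print(r)
```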

Categories: Risk

I Know Weather, And You Sir Are No Weather

June 10, 2010

Stop for a moment and think about how much trust you have in a weather forecaster.

Our estimation of weather forecasting is influenced by a memory bias called the Von Restorff effect, which simply states that people remember things that stick out.  When a weather forecast is incorrect we may be unprepared or otherwise negatively impacted by the event, thus making it more memorable… the wedding that is rained out, the day at the park that was rescheduled and turns out to be nice, etc.  When the forecast is correct, it is a non-event.  Nobody takes notice when the expected occurs.

We have this in information security as well.  We remember the events that stick out and forget the expected, which skews our memory of events.  But that’s not what I want to talk about.  I want to talk about the weather.  Even though we incorrectly measure the performance of forecasters, predicting the weather, even with all available technology, is very, very difficult.  Because let’s face it, weather forecasters are often quite brilliant; they are usually highly educated and skilled (with a few exceptions among TV personalities).  Nobody else in that position would fare any better predicting the weekend weather.  Let’s see how this ties to measurements in our information systems.

Butterflies

Edward Lorenz was re-running a weather modeling program in 1961 with data points from a printout.  But rather than keying in the full “.506127”, he rounded and keyed in “.506”.  The output from this iteration produced wildly different results from his previous runs.  This one event started the wheels in motion for an awesome Ashton Kutcher film years later.  Lorenz had this takeaway about the effect:

If, then, there is any error whatever in observing the present state – and in any real system such errors seem inevitable – an acceptable prediction of the instantaneous state in the distant future may well be impossible.

I want to combine that with a definition for measurement from Douglas Hubbard:

Measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.

What’s this have to do with infosec?

Hubbard is saying that measurement is a set of observations, and Lorenz is saying that any error in observation (measurement) renders an acceptable prediction improbable.  The critical thing to note here is that Lorenz is talking about weather, which is a chaotic system. Information systems are not chaotic systems (though some wiring closets imitate one well).  The point is that we should, in theory, be able to see a benefit (a reduction in uncertainty) even with error in our measurements.  In other words, because we are not in a chaotic system, we can deal with imperfect data.
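Here is a toy way to see the difference, using the logistic map as a stand-in (not Lorenz’s actual weather model): the same rounding error that wrecks a chaotic prediction barely matters in a non-chaotic one.

```python
# Toy illustration of the Lorenz point, using the logistic map rather than his
# weather model: in the chaotic regime a rounding error in the starting measurement
# blows up, while in a non-chaotic regime the same error fades away.

def iterate(x, r, steps=40):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

full, rounded = 0.506127, 0.506   # Lorenz's famous rounding, reused as a starting point

# r = 4.0 is chaotic: the tiny rounding error grows until the two runs are unrelated.
print("chaotic:    ", iterate(full, 4.0), "vs", iterate(rounded, 4.0))

# r = 2.5 is not chaotic: both starting points settle onto the same fixed point (0.6).
print("non-chaotic:", iterate(full, 2.5), "vs", iterate(rounded, 2.5))
```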

I have a picture in my head to describe where information systems are on the continuum from simple to chaotic.  I see a ball in the center labeled “simple”, and flat rings circling it and expanding out from it.  The first ring is labeled “hard”, then “complex”, and finally “chaotic” as the outer ring.  Weather systems would be in the outer ring.  For the I.T. folks, I would put an arrow pointing to the last bit of complexity right before chaos and write “you are here” on it.  Complexity is the “edge of chaos”, and infosec is based entirely on complex systems.  Take this statement from “How Complex Systems Fail” by Richard Cook:

Catastrophe is always just around the corner.
Complex systems possess potential for catastrophic failure. … The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

Go through this exercise with me: think of the most important info system at your company (probably the email service).  Think of all the layers of availability-protection piled on it to keep it operational.  Clustering, backups, hot-swap whateverables.  Would any of the technical staff in charge of it – the subject matter experts – ever be surprised if it stopped functioning?  It shouldn’t stop, and in most cases probably won’t, but who would be surprised if it did?  I think everyone in this industry knows that the phone can ring at any moment with a message of catastrophe.  As Cook stated, this attribute is a “hallmark of complex systems”.

Where now?

The first step in solving any problem is to realize there is a problem.

Just as it’s good to become aware of the cognitive biases we have towards high profile events (like bad weather forecasts), we also have biases and framing errors when it comes to complex technical systems.  For me, simply realizing this and writing it down is a step up from where I was yesterday.  I am beginning to grasp that there are no easy decisions – and just how much mass and momentum that statement has.

Categories: General Security

Collective Results

June 4, 2010

I can’t speak for other participants in the Society of Information Risk Analysts (SOIRA), but I’m rather excited with anticipation.  Because of that, I wanted to jot down some ideas rolling around my head of things I want to work on.  These are no more than paragraphs on the screen at this point (though some are blog-posts-never-posted).  Each of these is intended to help the boots on the ground.  Meaning these are not theoretical exercises or just research projects; I want to be able to test some of these things out, preferably collectively, see what works, and move to put things into practice.

How do we improve quick security decisions?

I run into multiple situations daily where someone asks me in a meeting, or worse, standing in the elevator: Can we enable feature X?  Is it okay if I modify Y?  I know the practice is to do Z, but what about Z-1?  I want to handle these questions and approach them methodically, in such a way that practice may improve our quick decisions.  What kind of thought exercises can help with quick security decisions?  More specifically: what can we adjust or become aware of in our existing heuristics, biases and frames to make better decisions within a 15-second window?  There is a lot of material in the decision sciences to help with this question, and hopefully something concrete can come out of it.

Here’s an example of the problem: while jotting this thought down yesterday, I was asked if a time stamp could be removed from an unprotected message.  After quickly considering the threat, weakness and impact of the question, I answered quickly and decisively with “dunno, wasn’t paying attention.”

How can we discuss weighted qualitative assessments?

I used to think that I was a unique fish battling the problem of seat-of-the-pants risk assessments that are nothing more than a weighted audit.  Jack Freund just wrote up a blog post on this qualitative battle.  I want to figure out why people think these “assessments” are valuable and work out methods to approach these discussions, because logical arguments seem to fall on deaf ears.  I’d like to figure out who’s looked at it before (like Hubbard) and what can be done to shift these perspectives in reality; primarily, my goal is just to reduce my own irritation with these.  Honestly, I want to tackle this because I’m intrigued.  The people who are doing this method and recommending these approaches are not stupid.  They see a value in their methods that I’m missing; I want to understand what that is (and why alternatives are not attractive), and then figure out where to go from there.

What are the prerequisites for an information risk management program?

I’ve seen attempts at a risk management program fail because of immaturity of the environment.  It’s hard to track security risks when people can’t identify even the hardware in their data centers, let alone the information on it.  This makes me wonder: what should be done before an infosec risk management program is even considered?  What sort of things should people look for before they take a job in infosec risk?  Things off the top of my head are some level of maturity around asset tracking, change control and governance.

How can I establish and communicate risk tolerance?

Many methodologies talk about the risk tolerance or risk appetite of organizations and how understanding that is critical.  I think I’m more likely to win the Powerball jackpot than to grok the risk tolerance in my organization.  I’d like to experiment with how this may be done.  My lofty-and-probably-absurd dream here is to create something like a questionnaire or survey that ballparks an organization’s business/infosec risk threshold.  Off the top of my head, I see this being a series of hypothetical situations with a finite list of decisions to select from.  It should be possible, at least in theory, to begin correlating answers to risk threshold.  Kind of a CVSS for risk tolerances – I like this idea.
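To make the dream slightly less absurd-sounding, here’s a toy sketch of what I mean; the scenarios, options and scoring are entirely made up.

```python
# Toy sketch of a risk-tolerance questionnaire: each hypothetical scenario offers a
# finite set of decisions, each decision carries a made-up tolerance score (0 = very
# averse, 10 = very tolerant), and the ballpark threshold is just the average.

SCENARIOS = [
    ("A critical patch breaks a revenue app; roll back and stay exposed for a week?",
     {"roll back": 8, "stay patched, eat the outage": 3, "escalate and wait": 5}),
    ("Vendor wants production data for troubleshooting.",
     {"send it as-is": 9, "send a masked extract": 4, "refuse": 1}),
]

def ballpark_tolerance(answers):
    scores = [options[answers[i]] for i, (_, options) in enumerate(SCENARIOS)]
    return sum(scores) / len(scores)   # crude average; the correlation work comes later

print(ballpark_tolerance(["stay patched, eat the outage", "send a masked extract"]))  # -> 3.5
```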

Terminology

With a nod to Chris Hayes and his risk vernacular (that may be a great starting point), I’d like to create a reference for terminologies and try to make it broadly inclusive in order to learn from others.  For instance, I see an entry for “risk” having probably 15 different definitions, along with the sources.  This may be extremely helpful for the next task, but it has overall value, too.  Having terms broken down like that will help get a feel for the overall state of infosec risk analysis and help bridge some of the gaps in terminology.  The point is not to define the terms as I want them to be, but to figure out the range of definitions other people think they are.  This is simply a data-gathering task; no interpretation or logic necessarily required.

Methodologies: Quick Hit List

It’s been a few years, but at one point I went through every risk methodology I could get my hands on.  By the end, they blurred together and I saw an underlying pattern: each and every one of those methods addressed almost the exact same things, just with different priorities, emphasis and, well, words.  I’d like to go back down that journey again and track the similarities and differences, call out their emphases and attempt to align and track terminologies (see previous entry).  I may try to tackle this one last; it may be a bit tedious.

“To begin with, let’s assume I’m an idiot…”

I’d like some help for people new to the field and to fill in many gaps I have in my own knowledge.  Perhaps this is an “Idiot’s Guide to Infosec Risk” but without any trademark infringement.  What kind of statistical analysis or decision analysis tools should I consider?  What are the books/membership/training/certification that will help me?  Are there courses I could look at taking?  This was mentioned by John on the SOIRA concall earlier, about where to even begin looking.  And maybe this is multiple outputs and not a single product.  I don’t think the target audience for this should be college interns, but seasoned security professionals whose finger looks like a raisin from licking it in the wind too much.

That’s it.  I know there are more, but that’s what I was able to think of during the day today.  And the last three don’t exactly target the boots on the ground; that’d be more of a longer-term goal.  If you can think of other big questions that need answering, please check into the Society of Information Risk Analysts.

The Grand Unifying Definition of Risk

May 20, 2010

My plan is to walk through my current thinking on the complex field of risk management.  No way is this going to be a short post.

Ya know how some presentations begin with “Webster’s defines <topic> as …”?  Way back in time, when I started my frustrating journey with the concepts in risk management, I did the same thing with “risk” for my own benefit.  Try it sometime: go out and look for a definition of risk.  FAIR defines it as “probable frequency and probable magnitude…”, while NIST SP 800-30 defines it as “a function” (of likelihood and impact), NIST IR 7298 defines it as a “level of impact…”, ISO 27005 refers to it as a “potential”, and a Microsoft Risk Management Guide defines risk as “the probability of a vulnerability being exploited”.  One of the better ones I’ve seen recently comes from ISO 31000, which defines risk as the “effect of uncertainty on objectives” – me likey that one (it is the impetus for this writing).

But what the hackslap are we trying to measure here?  Are we trying to measure an effect? A probability/magnitude/consequence?  Short answer is a very firm and definite maybe.

Finally, after years of sleepless nights trying to slay this beast I think I’ve come far enough to put a stake in the ground.  My screw-you-world-I-will-do-it-myself attitude comes through for me with this, my grand unifying definition of risk:

Risk is the uncertainty within a decision.

Where’s the likelihood, you ask?  I’ll get to that, but what I like about this is that it’s high-level enough to share with friends.  Keeping the function or formula out of the definition (which is what 95% of definitions include) makes it portable.  This type of definition can be passed between practitioners of OCTAVE, NIST, FAIR and others.  There are two parts to my definition: the uncertainty and the decision.  The term “uncertainty” doesn’t sit entirely well with me, so depending on the audience I’ll throw in “probability and uncertainty” or just leave it with “probability”.

On Uncertainty

What I mean by uncertainty is the combination of our own confidence/ability/limitations in estimating the probability of a series of complex and interdependent events.  Quite simply, the first part of my formula includes people and mathematical models, with the people representing our own limitations, biases, ingeniousness, irrationalities, fears and adaptability, and the math representing the risk models and formulas most people consider risk analysis.  Risk models are just a component of the first part of my definition – they are a strong contributing component, but really they are in here to support the second part, the decision.

Factoring in the people means that information risk requires an understanding of behavioral economics, psychology and game theory (to name a few), because we’re not going to understand the efficacy of our own assessments, nor are we going to effectively address risk, if we don’t account for the human element.  While most assessments focus on the technology, meaningful change in that technology can only be influenced by people and through people – the people that thought it up, created it, installed it, tested it and eventually use it and break it.  We need to account for the human element; otherwise we’re destined for mediocrity.

On Decisions

The other important consideration is the context of risk, and I haven’t come across an instance of risk analysis that wasn’t performed to assist in some type of decision process.  That means that we get to drag all of the decision sciences into this funball of risk.  To simplify what I mean: we need to understand the influence of how we frame our problems, we need to gather the right kinds of data (largely from the uncertainty portion), and we need to identify options.  From there we need to come to some kind of conclusion, execute on it, and finally (perhaps most importantly) we need feedback.  We need a way to measure the influence our decisions had so that we may learn from our decisions and improve on them over time.  Feedback is not (just) part of the risk model; it’s part of the entire decision process.

And now I’ll make up a formula, so it messes with people:

Risk = Decision(People(Math)), or wait, how about People(Decision(Math))?  The concept is that the risk models are an input into the decision process.  I think it goes without saying that the models should never become the decision.  And we cannot forget that every part of this process may be greatly influenced by the human element, from the inputs into the models, to the models themselves, to the execution of and feedback on the decisions.
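And if it helps to see the made-up formula as made-up code, here’s a playful sketch; none of the names, numbers or thresholds mean anything.

```python
# A playful sketch of Decision(People(Math)): the math (risk model) is only an input
# to the decision, and the people bend every stage of it.  Nothing here is a real
# methodology; the function names just mirror the made-up formula above.

def math(evidence):
    # whatever risk model you like; here, a made-up probability from made-up inputs
    return {"p_bad_year": 0.2, "probable_impact": "regulatory fine"}

def people(model_output):
    # biases, fears, incentives and ingenuity reshape whatever the model said
    model_output["p_bad_year"] *= 1.5   # e.g. recent headlines inflate the estimate
    return model_output

def decision(informed_view):
    # the model never *is* the decision; it only informs one, and feedback comes later
    return "buy the control" if informed_view["p_bad_year"] > 0.25 else "accept the risk"

print(decision(people(math({"asset": "billing system"}))))
```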

On Dasher, on Prancer

There is a huge risk for paralysis within my definition of risk.  There are many other seemingly disjointed fields of study that are intertwined here, each at their own level of development and maturity.  I think we’re on the right path: we’re trying things, we’re out there talking about risk and plugging in risk models, creating and adapting methodologies, we’re making decisions and every once in a while we stumble on some feedback.  I’m not suggesting that we stop and re-evaluate the whole thing.  On the contrary, we should continue full steam ahead, but with constant re-assessment and questioning.  I’m not entirely optimistic that we’ll ever get to some grand unified theory of risk in my lifetime, nor am I optimistic that any one definition will necessarily stick, but that doesn’t mean I’m not going to try.

One last point: I don’t want to say that risk is infosec-centric.  Far from it.  Information security risk is but one tiny offshoot of risk, and we have much to learn from (and contribute to) other areas.  We need to be on a constant lookout for giants so that we can stand on their shoulders.

Categories: Risk