Archive for the ‘Decisions’ Category

A Call to Arms: It is Time to Learn Like Experts

November 23, 2011

I had an article with the same name as this blog post published in the November issue of the ISSA Journal. I've got permission to post it to a personal webpage, so it is now available here.

The article begins with a quote:

When we take action on the basis of an [untested] belief, we destroy the chance to discover whether that belief is appropriate. – Robin M. Hogarth

That quote is from his book, “Educating Intuition,” and it really caught the essence of what I see as the struggles in information security. We are making security decisions based on what we believe, then moving on to the Next Big Thing without seeking adequate feedback. This article is an attempt to say that whatever you think of the “quant” side of information security, it needs to be compared to what we have without quants – which is an intuitive approach. What I found in preparing the article is that the environment we work in is not conducive to developing a trustworthy intuition on its own. As a result, we have justification in challenging unaided opinion when it comes to risk-based decisions, and we should be building feedback loops into our environment.

Have a read.  And by all means, feedback is not only sought, it is required.

Categories: Decisions, Psychology, Risk

Risk Analysis is a Voyage

July 15, 2011
From the Society of Information Risk Analysts mailing list, I was introduced to the OWASP Risk Rating Methodology. If you haven't read about this methodology, I highly encourage that you do. There is a lot of material there to talk and think about.

To be completely honest, my first reaction is “what the fudge-cake is this crud?”  It symbolizes most every challenge I think we face with information security risk analysis methods.  However, my pragmatic side steps in and tries to answer a simple question, “Is it helpful?”  Because the one thing I know for certain is the value of risk analysis is relative and on a continuum ranging from really harmful to really helpful.  Compared to unaided opinion, this method may provide a better result and should be leveraged.  Compared to anything else from current (non-infosec) literature and experts, this method is sucking on crayons in the corner.  But the truth is, I don’t know if this method is helpful or not.  Even if I did have an answer I’d probably be wrong since its value is relative to the other tools and resources available in any specific situation.

But here's another reason I struggle: risk analysis isn't easy. I've been researching risk analysis methods for years now and I feel like I'm just beginning to scratch the surface – the more I learn, the more I learn I don't know. It seems that trying to make a “one-size-fits-all” approach always falls short of expectations; perhaps this point is better made by David Vose:

I’ve done my best to reverse the tendency to be formulaic.  My argument is that in 19 years we have never done the same risk analysis twice: every one has its individual peculiarities.  Yet the tendency seems to be the reverse: I trained over a hundred consultants in one of the big four management consultancy firms in business risk modeling techniques, and they decided that, to ensure that they could maintain consistency, they would keep it simple and essentially fill in a template of three-point estimates with some correlation.  I can see their point – if every risk analyst developed a fancy and highly individual model it would be impossible to ensure any quality standard.  The problem is, of course, that the standard they will maintain is very low.  Risk analysis should not be a packaged commodity but a voyage of reasoned thinking leading to the best possible decision at the time.

– David Vose, “Risk Analysis: A Quantitative Guide”
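For readers who haven't seen one, the “template of three-point estimates” Vose describes is roughly the sketch below. The values are invented, and the PERT mean is one standard rule of thumb for collapsing the three points to a single number:

```python
# What Vose means by "a template of three-point estimates": each risk
# gets a minimum / most-likely / maximum guess, collapsed to one number.
# Values are invented; the PERT mean is a common rule of thumb.
def pert_mean(minimum, mode, maximum):
    return (minimum + 4 * mode + maximum) / 6

loss_estimate = pert_mean(10_000, 50_000, 400_000)  # dollars, invented
print(f"Single-number loss estimate: ${loss_estimate:,.0f}")
# Fast and consistent across analysts - and exactly the "packaged
# commodity" flattening of the problem that Vose is warning about.
```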

So here's the question I'm thinking about: without requiring every developer or infosec practitioner to become an expert in analytic techniques, how can we raise the quality of risk-informed decisions?

Let’s think of the OWASP Risk Rating Methodology as a model, because, well, it is a model.  Next, let’s consider the famous George Box quote, “All models are wrong, but some models are useful.”  All models have to simplify reality at some level (thus never perfectly represent reality) so I don’t want to simply tear apart this risk analysis model because I can point out how it’s wrong.  Anyone with a background in statistics or analytics can point out the flaws.  What I want to understand is how useful the model is, and perhaps in doing that, we can start to determine a path to make this type of formulaic risk analysis more useful.
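To make “formulaic” concrete, here is a minimal sketch of the scoring arithmetic as I read the OWASP write-up: factors scored 0-9, likelihood and impact as plain averages, severity from low/medium/high cut-offs. The factor names track the methodology, but the sample scores are invented:

```python
# Sketch of the OWASP Risk Rating arithmetic as I read the write-up.
# Sample scores are invented for illustration.

def average(scores):
    return sum(scores.values()) / len(scores)

def level(score):
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

likelihood_factors = {   # threat agent and vulnerability factors, 0-9
    "skill_level": 6, "motive": 4, "opportunity": 7, "size": 5,
    "ease_of_discovery": 3, "ease_of_exploit": 5,
    "awareness": 4, "intrusion_detection": 8,
}
impact_factors = {       # technical and business impact factors, 0-9
    "loss_of_confidentiality": 7, "loss_of_integrity": 4,
    "loss_of_availability": 2, "loss_of_accountability": 5,
    "financial_damage": 3, "reputation_damage": 4,
    "non_compliance": 2, "privacy_violation": 6,
}

likelihood = average(likelihood_factors)
impact = average(impact_factors)
print(f"likelihood={likelihood:.2f} ({level(likelihood)}), "
      f"impact={impact:.2f} ({level(impact)})")
# Severity is then read off a 3x3 lookup of the two levels,
# e.g. MEDIUM likelihood x MEDIUM impact -> MEDIUM risk.
```

Averaging ordinal 0-9 scores is exactly the kind of thing anyone with a statistics background will flag – the “wrong” part of Box's quote – but the sketch also shows the appeal: anyone can fill in the template.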

Risk Analysis is a voyage, let’s get going.

Categories: Decisions, Risk

Yeah But… So What?

November 8, 2010

I've found several strange by-products as I've been evolving my risk analysis dogma. I've found that I've been challenging the traditional security dogma a whole lot more by asking “yeah but… so what?” I think this shift in my approach is best summed up by the first slide Jack Jones presented in FAIR training: “management doesn't care about security, they care about risk.” Meaning that talking in terms of vulnerabilities found, or what-if cases of just bad security, is largely irrelevant. Whether we realize it or not, decision makers must translate that security message into a risk message because that's what they care about. And that's where the disconnect occurs – the security geeks are flailing around about bad security and the decision makers are not seeing the correlation to risk.

I feel quite fortunate that I have a guy in my leadership chain who provides instantaneous feedback on which side I'm speaking from. His feedback is through subtle body language. If I slip into talking about bad security, he'll lean back or check papers in front of him, perhaps look around. He'll pretty much do anything except look like he cares. But if I start talking in terms of probabilities, loss amounts or tangible business loss scenarios, his eyes are front and center. It's a nice feedback mechanism.

Even though the catchy phrase came from FAIR, it's not an exclusively FAIR approach (though it lends itself beautifully to it); this is a universal perspective we need to adopt. Even if the assessment is putting likelihood and impact on a high/medium/low scale, if the loss is not a tangible loss it's probably projecting FUD. Let me walk through an example:

The “What-If” Stolen Laptop

Here's the scenario: a single-task tablet PC in a public (controlled) area. Not very specific, but this is how it was presented to me. The person presenting this to me was biased towards saying “no” to this new business project based on security. So the case for “no” was laid out: it was in a region with higher-than-average theft rates; if stolen, a skilled attacker could bypass multiple layers of controls and gain privileged information, possibly leading to a leap-frog attack back into our own network.

My first approach was to point out that the probability of these independent events all occurring is multiplicative, but that punch failed to land. So I went with it and said, “let's assume all that lines up… So What?”
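For what it's worth, the punch that failed to land looks like this on paper; every probability below is invented purely for illustration:

```python
# Joint probability of the stolen-laptop attack chain.
# All numbers are made up for illustration; the point is only that
# independent conditions multiply, so long chains get unlikely fast.
chain = {
    "laptop is stolen this year":             0.10,
    "thief is a skilled attacker":            0.05,
    "attacker bypasses the layered controls": 0.20,
    "foothold leads back into our network":   0.10,
}

joint = 1.0
for event, p in chain.items():
    joint *= p
    print(f"{event}: {p:.2f}  (cumulative: {joint:.6f})")

print(f"\nJoint probability of the full chain: {joint:.6f}")
# About 0.0001 here: individually plausible steps, jointly a rare event.
```

Of course, that argument only lands with someone who already thinks in probabilities, which is exactly why I fell back on “so what?” instead.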

“so they could get into critical system X”

“Okay, but so what?”

“so they could access confidential data”

“okay, but so what?” 

You can see the pattern here and where I was heading. After quite a few rounds I had my traditional security thinker shifting his focus from security impact to business impact: costs of customer notifications, credit monitoring, etc. Using a whiteboard and jotting down some wild guesses, we tossed out a range of really bloated, bad-case dollar figures to try and convert the event to a comparable unit. It was fairly obvious that even if there was a loss event, our bad-case figures weren't scary enough to run chicken-little style through the halls. But the shift we made here was to talk about this “bad” thing in terms of business risk and not bad security.

What if we stopped before putting dollar figures on it? Let's take credit monitoring. If we presented that we'd have to offer credit monitoring for some quantity of customers, that still requires translation into risk. How much can we get a bulk purchase of credit monitoring for? What is the adoption rate by customers of the offer? Answering these questions not only gives the decision makers a better understanding of security risks but also gives the security practitioner an understanding of business.
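Had we kept going, the whiteboard math might have looked something like this sketch. Every input is a guess of the kind we were scribbling, not a real figure:

```python
# Rough translation of "offer credit monitoring" into a dollar range.
# All inputs are whiteboard-style guesses, not real figures.
customers_affected    = 50_000
notification_cost     = (1.00, 3.00)    # per customer, low/high guess
monitoring_bulk_price = (10.00, 30.00)  # per enrolled customer per year
adoption_rate         = (0.05, 0.25)    # fraction who accept the offer

low = customers_affected * (notification_cost[0]
      + monitoring_bulk_price[0] * adoption_rate[0])
high = customers_affected * (notification_cost[1]
       + monitoring_bulk_price[1] * adoption_rate[1])

print(f"Estimated cost range: ${low:,.0f} to ${high:,.0f}")
# Even the bloated high end is a number the business can weigh,
# which is the whole point of asking "so what?" until dollars appear.
```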

Knife to a Gunfight

I think this is the type of thing that drives me crazy about discussing security with some “traditional” pentesters and uninitiated auditors. The word “fail” is tossed around way too easily. Even though it's fun to slap “FAIL” on things, there is no fail, only more or less probable loss – and weak or missing controls does not a loss event make. The point is this: we cannot bring a knife to a gunfight. Wait, let me restate that: we can't bring security to a risk discussion. We have to start asking ourselves “so what,” determining what the real loss events are and, more importantly, what they mean to the business.

Categories: Decisions, Risk

Improvements Lie Between Theory and Reality

October 1, 2010

Every once in a while I come across something someone has written that really pokes my brain.  When that happens I become obsessed and I allow myself to be consumed by whatever Google and Wikipedia dish out, which ultimately will lead to whatever articles or books I can get my hands on.   The latest poking-prose is from Alex Hutton over on the Verizon Business Security Blog in a piece titled “Evidence Based Risk Management & Applied Behavioral Analysis.”  At first, I wanted to rehash what I picked up from his post, but I think I’ll talk about where I ended up with it.

To set some perspective, I want to point out that people follow some repeatable process in their decisions.  However, those decisions are often not logical or rational.  In reality there is a varying gap between what science or logic would tell us to do and what we, as heuristic beings, actually do.  Behavioral Economics, as Alex mentioned, is a field focused on observing how we make choices within an economics frame, and attempting to map out the rationale in our choices.  Most of the advances in marketing are based on this fundamental approach – figure out what sells and use it to sell.   I think accounting for human behavior is so completely under-developed in security that I’ve named this blog after it.

But just focusing on behaviors is not enough; we need context, we need a measuring stick to compare them against. We need to know where the ideal state lies so we know how we are diverging from it. I found a quote that introduces some new terms and summarizes what I took away from Alex's post. It's from Stephen J. Hoch and Howard C. Kunreuther of the Wharton School, published in “Wharton on Making Decisions.” Within decision science (and I suspect most other sciences) there are three levels at which to focus the work to be done, described like this:

The approach to decision making we are taking can be viewed at three different levels – what should be done based on rational theories of choice (normative models), what is actually done by individuals and groups in practice (descriptive behavior), and how we can improve decision making based on our understanding about differences between normative models and descriptive behavior (prescriptive recommendations).

From the view at my cheap seat, we stink at all three of these in infosec. Our goal is prescriptive recommendations: we want to be able to spend just enough on security and in the right priority. Yet our established normative models and our ability to describe behavior are lacking. We are stuck with this “do all of these controls” advice, without reason, without priority and without context. It just doesn't get applied well in practice. So let's go back and look at our models (our theory). In order to develop better models, we need research and the feedback provided by evidence-based risk management to develop what we should be doing in a perfect world (normative models). Then we need behavioral analysis to look at what we do in reality that works or doesn't work (descriptive behavior). Because we will find that how we react to and mitigate infosec risks diverges from a logical approach – if we are able to define what a logical approach is supposed to look like in the first place.

Once we start to refine our normative models and understand the descriptive behavior, then and only then will we be able to provide prescriptive and useful recommendations.

Supporting the Decision Process

July 1, 2010

Jack Freund inspired this post with his “Executives are Not Stupid” post. I have climbed out of that geek-trap of thinking that decision makers are idiots. I've learned that, to Jack's point, they generally are not idiots and, on the contrary, are usually more skilled and successful at decision making than the average person. But the science of making decisions is not restricted to a solo effort. I want to break down the decision process and point out where different roles may fit in. If someone in a technical role feels that decisions are poor, they should be aware of where they fit into the decision process and what influence they had (or didn't have) on the decision. Once they realize that there is a process, however informal, they may begin to influence change by establishing themselves in the appropriate step, or even challenging decision makers on their process and helping them improve. Whether we know it or not, the goal of both business leaders and security wonks is to make good decisions.

Step 1: Establish a Frame

The first step is a critical step, and often skipped over because people aren't even aware that every decision has a frame. This initial step is to identify and communicate the context of the decision among those involved. Are we talking about DLP because data is leaking like a sieve? Is there a regulatory concern? Or is DLP “necessary” to give someone the illusion of progress? All those warrant DLP, but depending on the context we may have three completely different decisions.

Here's an example: I was recently brought into a project to encrypt data in a database (which had me questioning the frame right away). I derailed the project completely by asking what problem they were trying to solve. Turns out the project to encrypt the data was initiated because the data was not encrypted now (seriously). The initial frame made the mistake of treating encryption as a binary option: encrypted or not. By going back and establishing the context, we were able to better define the goal of the project and evaluate many other options that were not limited to encryption. The initial frame limited our options in the decision, and from a technical role I was able to influence the decision process by modifying the frame.

Step 2: Gather Data and Options

This is squarely where risk analysis lies. By discovering probabilities of events and related losses we can drive out a few options with associated costs and benefits. Other aspects of security programs have input in this step, such as metrics, breach reports, vulnerability scans, audit findings, asset valuations and even expert opinion and intuition. It is the job of security professionals to provide the options they see and the data points that support those options. Not doing so can starve the decision maker of relevant information and produce decisions with even more uncertainty than they already have. It is the job of the decision maker to seek out expertise and to ask for data points thought to be relevant, to be sure they have all options on the table. One interesting thing decision makers can do is start asking for levels of confidence in data points; having that little extra question in there provides some really interesting context for the data being gathered. (Forethought here as well: attempt to prepare for step 4 as options are discussed.)
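As a rough illustration of what “options with associated costs and benefits” plus the confidence question could look like written down, here is a sketch; the structure and every number in it are invented:

```python
# A sketch of capturing options with costs, benefits and the analyst's
# stated confidence. Structure and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    annual_cost: float    # cost to implement and operate
    loss_reduced: float   # estimated annual loss avoided
    confidence: str       # how much the analyst trusts the estimate

options = [
    Option("Do nothing",        0,        0,       "high"),
    Option("Deploy DLP",        250_000,  150_000, "low"),
    Option("Targeted training", 40_000,   60_000,  "medium"),
]

for o in options:
    net = o.loss_reduced - o.annual_cost
    print(f"{o.name}: net benefit ${net:,.0f} (confidence: {o.confidence})")
# Asking for confidence surfaces that the biggest-looking option may
# also be the shakiest estimate on the table.
```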

Step 3: Decide

I know this should be obvious, but the decision must be made. One thing I haven't called out yet is all the biases and fallacies we as people are susceptible to. The best way to combat those is to become intimately familiar with them. Understand the difference between an availability bias and a confirmation bias. Understand what the sunk-cost fallacy is. Be aware of those so that they may be identified in daily activities (and they come up daily), because once you can identify them in reality you may be able to account for them in your decision process.

Step 4: Feedback

Just because the decision was made doesn't mean the decision process is over. This fourth step is probably one of the most elusive steps, and I struggle to find it in security. Once a decision is made, it is extremely beneficial to find out if it was a good decision, or if the decision had undesirable outcomes. We may go back and re-evaluate previous systems, but rarely is that performed with previous decisions in mind, let alone with feedback to the decision makers. How can we know our DLP installation is meeting our goals? Was it the right decision to go with vendor A over B? Take a moment and think of feedback in recent decisions. Get much? If feedback was received it was probably all negative, but be sure to take it – not all failures were due to bad luck; sometimes we stink, and owning that makes for better decisions in the future.
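One lightweight way to build this feedback in is to record a probability alongside each decision and score it once the outcome is known. A Brier-style score is one common way to do that; the sketch below uses invented entries:

```python
# A minimal decision journal: record a forecast with the decision,
# then score it once the outcome is known. Entries are invented.
decisions = [
    # (decision, forecast P(success), outcome: 1 = success, 0 = not)
    ("DLP meets goals within a year",    0.8, 0),
    ("Vendor A outperforms vendor B",    0.6, 1),
    ("Encryption project ships on time", 0.9, 1),
]

# Brier score: mean squared error of the forecasts.
# 0 is perfect foresight, 1 is perfectly wrong.
brier = sum((p - outcome) ** 2 for _, p, outcome in decisions) / len(decisions)
print(f"Brier score across {len(decisions)} decisions: {brier:.2f}")
# Tracked over time, this turns "get much feedback?" into a number
# that can itself feed back into how we estimate.
```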

My point here is that decisions are complex, yet decision making is a skill that can be learned and improved over time. There are many moving parts, and each component in the decision process can cause unique problems if glossed over or skipped altogether. Keep in mind that these steps are scalable too. I've found myself standing downtown running through these steps in my head to decide where to grab lunch. I've also thought through these steps to forecast and plan huge projects. The amount of time and effort in each step should be proportional to the girth of the decision.

Back to Jack's point: if you are a technical person who thinks that some executive decision qualifies them as a card-carrying member of idiots-R-us, take some time and think through what you know of their decision process. Were you able to provide quality data to them? What other data points did they consider? Do you have the opportunity to gather and provide feedback on that decision down the road? Because like it or not, security decision making is a team effort, and executives making routinely poor technical decisions may just need more active involvement and input from the technical staff.

Categories: Decisions, General Security