
Archive for June, 2010

I Know Weather, And You Sir Are No Weather

June 10, 2010

Stop for a moment and think about how much trust you have in a weather forecaster.

Our estimation of weather forecasting is influenced by a memory bias called the Von Restorff effect, which states simply that people remember things that stick out.  When a weather forecast is incorrect we may be unprepared or otherwise negatively impacted by the event, which makes it more memorable: the wedding that is rained out, the day at the park that was rescheduled and turned out to be nice, and so on.  When the forecast is correct, it is a non-event.  Nobody takes notice when the expected occurs.

We have this in information security as well.  We remember the events that stick out and forget the expected, which skews our memory of events.  But that’s not what I want to talk about.  I want to talk about the weather.  Even though we incorrectly measure forecasters’ performance, predicting the weather with all available technology is very, very difficult.  And let’s face it, weather forecasters are often quite brilliant; they are usually highly educated and skilled (with a few exceptions among TV personalities).  Nobody else in that position would fare any better predicting the weekend weather.  Let’s see how this ties to measurements in our information systems.

Butterflies

Edward Lorenz was re-running a weather modeling program in 1961 with data points from a printout.  But rather than keying in the full “.506127”, he rounded and keyed in “.506”.  The output from this run produced wildly different results from his previous runs.  This one event started the wheels in motion for an awesome Ashton Kutcher film years later.  Lorenz had this takeaway about the effect:

If, then, there is any error whatever in observing the present state – and in any real system such errors seem inevitable – an acceptable prediction of the instantaneous state in the distant future may well be impossible.
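To make the rounding anecdote concrete, here is a minimal sketch of that kind of sensitivity.  It is not Lorenz’s 1961 weather model (the code and parameters are my own illustration); it uses the simpler three-variable system he published a couple of years later, integrated with a naive Euler step, and starts two runs that differ only in the rounded-off digits.

# Illustration of sensitive dependence on initial conditions.
# NOT Lorenz's original weather model; this is his later three-variable
# system, integrated crudely with a fixed Euler step.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, steps=6000):
    # start two otherwise-identical runs from slightly different x values
    x, y, z = x0, 1.0, 1.0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

print(run(0.506127), run(0.506))  # after enough steps the trajectories bear no resemblance

That is the whole point: a tiny observation error eventually dominates the forecast.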

I want to combine that with a definition for measurement from Douglas Hubbard:

Measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.

What’s this have to do with infosec?

Hubbard is saying that measurement is a set of observations, and Lorenz is saying that any error in observation (measurement) makes an acceptable prediction improbable.  The critical thing to note here is that Lorenz is talking about weather, which is a chaotic system.  Information systems are not chaotic systems (though some wiring closets imitate one well).  The point is that we should, in theory, be able to see a benefit (a reduction in uncertainty) even with error in our measurements.  In other words, because we are not in a chaotic system we can deal with imperfect data.
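A toy sketch of that last point, with numbers I made up: even when every individual observation is noisy, a handful of them narrows the estimate considerably.

# Hubbard's point in miniature: noisy observations still reduce uncertainty.
# The "true" value and the noise level are invented for illustration.
import random
import statistics

true_value = 42.0   # the quantity we wish we could observe directly
noise = 10.0        # each individual observation is off by quite a bit

def observe():
    return random.gauss(true_value, noise)

for n in (1, 5, 25, 100):
    samples = [observe() for _ in range(n)]
    estimate = statistics.mean(samples)
    stderr = noise / (n ** 0.5)   # uncertainty in the estimate shrinks with sqrt(n)
    print(f"{n:>3} observations: estimate {estimate:6.1f} +/- {stderr:4.1f}")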

I have a picture in my head to describe where information systems are on the continuum from simple to chaotic.  I see a ball in the center labeled “simple”, and flat rings circling it and expanding out from it.  The first ring is labeled “hard”, then “complex”, and finally “chaotic” as the outer ring.   Weather systems would be in the outer ring.  For the I.T. folks, I would put an arrow pointing to the last bit of complexity right before chaos and write “you are here” on it.  Complexity is the “edge of chaos”, and infosec is based entirely on complex systems. Take this statement from “How Complex Systems Fail” by Richard Cook:

Catastrophe is always just around the corner.
Complex systems possess potential for catastrophic failure. … The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

Go through this exercise with me: think of the most important info system at your company (probably email service).  Think of all the layers of availability-protection piled on to keep it operational: clustering, backups, hot-swap whateverables.  Would any of the technical staff in charge of it – the subject matter experts – ever be surprised if it stopped functioning?  It shouldn’t stop, and in most cases it probably won’t, but who would be surprised if it did?  I think everyone in this industry knows that the phone can ring at any moment with a message of catastrophe.  As Cook stated, this attribute is a “hallmark of complex systems”.

Where now?

The first step in solving any problem is to realize there is a problem.

Just as it’s good to become aware of the cognitive biases we have towards high profile events (like bad weather forecasts), we also have biases and framing errors when it comes to complex technical systems.  For me, simply realizing this and writing it down is a step up from where I was yesterday.  I am beginning to grasp that there are no easy decisions – and just how much mass and momentum that statement has.


Collective Results

June 4, 2010

I can’t speak for other participants in the Society of Information Risk Analysts (SOIRA), but I’m rather excited.  Because of that, I wanted to jot down some of the ideas rolling around my head that I want to work on.  These are no more than paragraphs on the screen at this point (though some are blog-posts-never-posted).  Each of these is intended to help the boots on the ground.  Meaning: these are not theoretical exercises or just research projects.  I want to be able to test some of these things out, preferably collectively, see what works, and move to put things into practice.

How do we improve quick security decisions?

I run into multiple situations daily where someone asks me in a meeting, or worse, standing in the elevator: Can we enable feature X?  Is it okay if I modify Y?  I know the practice is to do Z, but what about Z-1?  I want to handle these questions methodically, and in such a way that practice actually improves our quick decisions.  What kind of thought exercises can help with quick security decisions?  More specifically: what can we adjust in, or become aware of about, our existing heuristics, biases and frames to make better decisions within a 15-second window?  There is a lot of material in the decision sciences to help with this question, and hopefully something concrete can come out of it.

Here’s an example of the problem: while jotting this thought down yesterday, I was asked if a time stamp could be removed from an unprotected message.  After quickly considering the threat, weakness and impact of the question, I answered quickly and decisively with “dunno, wasn’t paying attention.”

How can we discuss weighted qualitative assessments?

I used to think that I was a unique fish battling the problem of seat-of-the-pants risk assessments that are nothing more than a weighted audit.  Jack Freund just wrote up a blog post on this qualitative battle.  I want to figure out why people think these “assessments” are valuable and work out methods for approaching these discussions, because logical arguments seem to fall on deaf ears.  I’d like to figure out who’s looked at it before (like Hubbard) and what can be done to shift these perspectives in practice; primarily my goal is just to reduce my own irritation with them.  Honestly, I want to tackle this because I’m intrigued.  The people who use this method and recommend these approaches are not stupid.  They see a value in their methods that I’m missing; I want to understand what that is (and why alternatives are not attractive), and then figure out where to go from there.

What are the prerequisites for an information risk management program?

I’ve seen attempts at a risk management program fail because of the immaturity of the environment.  It’s hard to track security risks when people can’t even identify the hardware in their data centers, let alone the information on it.  This makes me wonder: what should be in place before an infosec risk management program is even considered?  What sort of things should people look for before they take a job in infosec risk?  Things off the top of my head are some level of maturity around asset tracking, change control and governance.

How can I establish and communicate risk tolerance?

Many methodologies talk about the risk tolerance or risk appetite of organizations and how understanding it is critical.  I think I’m more likely to win the jackpot in the Powerball than to grok the risk tolerance in my organization.  I’d like to experiment with how this may be done.  My lofty-and-probably-absurd dream here is to create something like a questionnaire or survey that ballparks an organization’s business/infosec risk threshold.  Off the top of my head I see this being a series of hypothetical situations, each with a finite list of decisions to select from.  It should be possible, at least in theory, to begin correlating answers to risk thresholds.  Kind of a CVSS for risk tolerances; I like this idea.
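As a rough sketch of what I mean (the scenarios, the answer choices, and the scoring scale below are all made up for illustration, not a calibrated instrument), the survey could map each chosen answer to a number and roll the answers up into a single ballpark tolerance score:

# Hypothetical questionnaire: each scenario offers a few decisions, and each
# decision carries a made-up "tolerance" score from 0 (very risk-averse)
# to 10 (very risk-tolerant).

SCENARIOS = {
    "A critical patch breaks a legacy app. Do you...": {
        "roll back immediately": 2,
        "leave the patch and hot-fix the app": 6,
        "leave it broken until the vendor responds": 9,
    },
    "Sales wants customer data on personal laptops. Do you...": {
        "refuse outright": 1,
        "allow it with full-disk encryption": 5,
        "allow it as-is": 10,
    },
}

def tolerance_score(answers):
    """answers maps each scenario to the decision that was selected."""
    scores = [SCENARIOS[q][choice] for q, choice in answers.items()]
    return sum(scores) / len(scores)

example = {
    "A critical patch breaks a legacy app. Do you...": "roll back immediately",
    "Sales wants customer data on personal laptops. Do you...": "allow it with full-disk encryption",
}
print(tolerance_score(example))  # 3.5 on this made-up 0-10 scale

Correlating scores like this against actual decisions the organization has made is where the real (and hard) work would be.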

Terminology

With a nod to Chris Hayes and his risk vernacular (which may be a great starting point), I’d like to create a reference for terminology and try to make it broadly inclusive in order to learn from others.  For instance, I see an entry for “risk” having probably 15 different definitions, along with the sources.  This may be extremely helpful for the next task, but it has overall value, too.  Having terms broken down like that will help give a feel for the overall state of infosec risk analysis and help bridge some of the gaps in terminology.   The point is not to define the terms as I want them to be, but to capture the range of definitions other people actually use. This is simply a data gathering task; no interpretation or logic necessarily required.
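The shape of the data could be as simple as a mapping from each term to a list of definition/source pairs.  Every string in the sketch below is a stand-in to show the structure, not real content:

# Placeholder structure for the terminology reference.
glossary = {
    "risk": [
        {"definition": "first published definition goes here", "source": "citation 1"},
        {"definition": "second published definition goes here", "source": "citation 2"},
        # ...one entry per definition found, ideally a dozen or more
    ],
}

for term, definitions in glossary.items():
    print(f"{term}: {len(definitions)} collected definition(s)")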

Methodologies: Quick Hit List

It’s been a few years, but at one point I went through every risk methodology I could get my hands on.  By the end they blurred together and I saw an underlying pattern: each and every one of those methods addressed almost exactly the same things, just with different priorities, emphasis and, well, words.  I’d like to go back down that journey again and track the similarities and differences, call out their emphasis, and attempt to align and track terminologies (see the previous entry).  I may try to tackle this one last; it may be a bit tedious.

“To begin with, let’s assume I’m an idiot…”

I’d like to put together some help for people new to the field, and to fill in many gaps I have in my own knowledge.  Perhaps this is an “Idiot’s Guide to Infosec Risk”, but without any trademark infringement.  What kind of statistical analysis or decision analysis tools should I consider?  What books/memberships/training/certifications will help me?  Are there courses I could look at taking?  This was mentioned by John on the SOIRA concall earlier: where to even begin looking.  And maybe this is multiple outputs and not a single product.  I don’t think the target audience for this should be college interns, but seasoned security professionals whose finger looks like a raisin from licking it in the wind too much.

That’s it.  I know there are more, but that’s what I was able to think of during the day today.  And the last three don’t exactly target the boots on the ground; those are more of a longer-term goal.  If you can think of other big questions that need answering, please check into the Society of Information Risk Analysts.