
I Know Weather, And You Sir Are No Weather

June 10, 2010

Stop for a moment and think about how much trust you have in a weather forecaster.

Our estimation of weather forecasting is influenced by a memory bias called the Von Restorff effect, which states, simply, that people remember things that stick out. When a weather forecast is incorrect we may be unprepared or otherwise negatively impacted by the event, making it more memorable… the wedding that is rained out, the day at the park that was rescheduled and turned out to be nice, etc. When the forecast is correct, it is a non-event. Nobody takes notice when the expected occurs.

We have this in information security as well. We remember the events that stick out, forget the expected, and our memory of events is skewed as a result. But that’s not what I want to talk about. I want to talk about the weather. Even though we measure the performance of forecasters incorrectly, predicting the weather, even with all available technology, is very, very difficult. And let’s face it, weather forecasters are often quite brilliant; they are usually highly educated and skilled (with a few exceptions among TV personalities). Nobody else in that position would fare any better predicting the weekend weather. Let’s see how this ties to measurements in our information systems.

Butterflies

Edward Lorenz was re-running a weather modeling program in 1961, re-entering data points from a printout. But rather than keying in the full “.506127”, he rounded and keyed in “.506”. The output from this iteration produced wildly different results from his previous runs. This one event started the wheels in motion for an awesome Ashton Kutcher film years later. Lorenz’s takeaway about this effect:

If, then, there is any error whatever in observing the present state – and in any real system such errors seem inevitable – an acceptable prediction of the instantaneous state in the distant future may well be impossible.

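Lorenz’s accident is easy to reproduce today. Here is a minimal sketch, assuming the classic three-variable Lorenz system with the textbook parameters (sigma=10, rho=28, beta=8/3) rather than the larger model Lorenz was actually running in 1961, that integrates the equations starting from “.506127” and from the rounded “.506”:

```python
# Classic three-variable Lorenz system with standard textbook parameters.
# These are illustrative assumptions -- Lorenz's 1961 program was a larger
# model; the qualitative behavior (sensitive dependence) is the same.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    # One fourth-order Runge-Kutta integration step.
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(nudge(state, k1, dt / 2))
    k3 = lorenz(nudge(state, k2, dt / 2))
    k4 = lorenz(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def max_separation(x0a, x0b, steps=5000, dt=0.01):
    # Run two trajectories side by side and track how far apart they get.
    sa, sb = (x0a, 1.0, 1.05), (x0b, 1.0, 1.05)
    worst = 0.0
    for _ in range(steps):
        sa, sb = rk4_step(sa, dt), rk4_step(sb, dt)
        dist = sum((p - q) ** 2 for p, q in zip(sa, sb)) ** 0.5
        worst = max(worst, dist)
    return worst

# An initial difference of 0.000127 grows to the full size of the attractor.
print(max_separation(0.506127, 0.506))
```

The two runs track each other closely for roughly the first thousand steps and then decorrelate completely, which is exactly the behavior Lorenz described.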
I want to combine that with a definition for measurement from Douglas Hubbard:

Measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.

What’s this have to do with infosec?

Hubbard is saying that measurement is a set of observations, and Lorenz is saying that any error in observation (measurement) makes an acceptable prediction improbable. The critical thing to note here is that Lorenz is talking about weather, which is a chaotic system. Information systems are not chaotic systems (though some wiring closets imitate one well). The point is that we should, in theory, be able to see a benefit (a reduction in uncertainty) even with error in our measurements. In other words, because we are not in a chaotic system, we can work with imperfect data.

I have a picture in my head of where information systems sit on the continuum from simple to chaotic. I see a ball in the center labeled “simple”, with flat rings circling it and expanding out from it. The first ring is labeled “hard”, then “complex”, and finally “chaotic” as the outer ring. Weather systems would be in the outer ring. For the I.T. folks, I would put an arrow pointing to the last bit of complexity right before chaos and write “you are here” on it. Complexity is the “edge of chaos,” and infosec is based entirely on complex systems. Take this statement from “How Complex Systems Fail” by Richard Cook:

Catastrophe is always just around the corner.
Complex systems possess potential for catastrophic failure. … The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

Go through this exercise with me: think of the most important information system at your company (probably the email service). Think of all the layers of availability protection piled on it to keep it operational: clustering, backups, hot-swap whateverables. Would any of the technical staff in charge of it – the subject matter experts – ever be surprised if it stopped functioning? It shouldn’t fail, and in most cases it probably won’t, but who would be surprised if it did? I think everyone in this industry knows that the phone can ring at any moment with a message of catastrophe. As Cook stated, this attribute is a “hallmark of complex systems”.

Where now?

The first step in solving any problem is to realize there is a problem.

Just as it’s good to become aware of the cognitive biases we have towards high profile events (like bad weather forecasts), we also have biases and framing errors when it comes to complex technical systems.  For me, simply realizing this and writing it down is a step up from where I was yesterday.  I am beginning to grasp that there are no easy decisions – and just how much mass and momentum that statement has.

  1. Jack
    June 11, 2010 at 7:45 am

    Hello Jay!

    Glad to be commenting on your blog again, and fellow SOIRA hellos!

    I keep hanging on your comment that an infosec system isn’t a chaotic system. I’m not a mathematician, but I took a graduate class (and a research class) in complex adaptive systems, and I’m pretty sure we could call it one, or at least it could be modeled as one. It certainly isn’t a closed system, as it has external “energy” sources. The Wikipedia definition has three requirements:

    1. it must be sensitive to initial conditions,
    2. it must be topologically mixing, and
    3. its periodic orbits must be dense.

    I think 1 and 2 are true, but three would require some modeling. I think it could be though. Heck, the model would probably also show a “strange attractor.”

    Maybe someone with a better math background could help us.

    • June 12, 2010 at 10:20 am

      Hey Jack! Thanks for commenting, love the discussions!
      You have an educational advantage over me on this topic, so I won’t push back too hard. I think a better question to answer is: what are the differences/similarities between a chaotic system and a complex (adaptive) system? (And then eventually I’d like to answer the “so what” question too.)

      In looking at some properties of complex systems – emergence, running in a degraded state, self-organizing, progression towards more complexity – there are a lot of similarities, and as best as I can tell the differences are subtle, but “complex” seems like the more probable label.

      I don’t think information systems are sensitive to initial conditions the way a chaotic system is. Every email cluster ends up in around the same state given different administrators, even different vendors. Also, I can predict the general state of complex I.T. systems as “degraded but operational” a month or two out. But I think this goes to my overall point: we can model risk with some degree of flexibility in the precision. I think putting in a “.506” versus a “.506127” is not going to fluctuate a Monte Carlo analysis too much.
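      To put a rough number on that claim, here is a toy Monte Carlo loss model – an illustrative assumption, not a model from this discussion – where an incident occurs with probability p and costs a lognormally distributed amount, run once with p = 0.506127 and once with the rounded 0.506 on the same random seed:

```python
import random

def mean_annual_loss(p_incident, trials=100_000, seed=7):
    # Toy model (assumed for illustration): each trial, an incident occurs
    # with probability p_incident and costs a lognormal amount (~22k median).
    # Drawing both random numbers on every trial keeps the two runs aligned
    # ("common random numbers"), so only the flipped trials differ.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        u = rng.random()
        cost = rng.lognormvariate(10, 1)
        if u < p_incident:
            total += cost
    return total / trials

precise = mean_annual_loss(0.506127)
rounded = mean_annual_loss(0.506)
# The two estimates come out nearly identical.
print(precise, rounded)
```

      Unlike a chaotic system, where the perturbation would eventually dominate the output, here rounding only shifts the handful of trials whose uniform draw falls between 0.506 and 0.506127.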

      I also would like to challenge the topological mixing. I may be well off in my assumption, but if I picture mixing colored paint, everything “will eventually overlap with any other given region”. I could twist this and say, “Could a change in a unix file server influence an MS domain controller?” Is that the topological mixing you’re thinking of? I think there is an element of mixing like that – it is not outside the realm of possibility – but I see that more as a product of complexity, not chaos.

      But so what? I tried to establish that the only time precise data is required is when attempting to predict the activity of a chaotic system. Everything else (such as predicting the activity of information systems) may be done with less-than-precise data. I think there are more lessons and corollaries to complex and chaotic systems that we can draw upon. Perhaps whether an information system is complex or chaotic isn’t an important distinction?

  2. Jack
    June 12, 2010 at 9:15 pm

    Hi Jay!

    I guess whether it is or isn’t depends on what you want to show.

    One of the things that I remember most about chaos theory and Lorenz attractors is that they are only “predictable” up to a certain point (which makes them chaotic), and then it just goes haywire, which is why weather forecasters will never have perfect 14-day forecasts :-).

    So if we wanted to say that this was a feature of infosec systems, then showing that it was chaotic would be helpful.

    But you’re right: all of this could be moot.

    However, assuming it isn’t 🙂

    1. Depending on how/what we are modeling, I think initial conditions would be highly important. So, how the system was architected could determine whether or not we have a gaping hole later that is exploited, etc. etc.

    2. Mixing? Of course it is. Various elements of these systems overlap, and probably spread over time. Some more than others. But in aggregate, sure it does.

  3. June 13, 2010 at 10:02 am

    I think there is a subtle but distinct difference between chaotic topological mixing versus the mixing in complex systems. Both have some degrees of mixing. I personally see IT systems as displaying discretionary mixing, meaning there are limits to what interacts and the interaction is more co-evolving or emergent versus topologically mixing. This makes me lean towards labeling IT systems as more complex than chaotic.
    The only real benefit of that distinction is that I’ll use keywords from complexity theory to search on versus chaos theory; we’ll see where that leads.

  4. July 1, 2010 at 2:41 pm

    Interesting conversation. Couple of incomplete thoughts:

    1.) I like “entanglement” vs. “mixing” as a rough description of having purposeful (and non-purposeful), expected (and non-expected) relationships between assets. The reason I like that term is because when you mix, you get something *different*. Iced Tea and alcohol become a new drink together. A web server and the database server can “entangle” to create an application but it can make sense to perceive each asset independently (at least from a management & measurement perspective). Maybe it’s a semantic difference.

    2.) This would support my pet hypothesis that we’re dealing with a complex adaptive system, or rather the collision of two CAS – the attack and the defense (each of those having their own CAS influencers)
