
Posts Tagged ‘Pontifications’

What are we fighting for?

December 14, 2010

I’ve written about this topic at least a half dozen times now, saved each attempt as a draft, and I’m giving up: I’m asking for help.  I was inspired to do this by the video “Where Good Ideas Come From” (Bob Blakely posted the link on Twitter).  I can’t find the answer to this puzzle, at least not in any meaningful way, not by myself.  I’ll break down where I am in the thought process and hope I get some feedback.  (Note: for the purpose of this discussion I’m using “security” to mean the group of people and technology intended to protect assets.)

Business

The goal of business is pretty well understood.  For-profit companies are after profit and all the things that affect it, with reputation and customer confidence at the top of the list for information security.  From a business perspective, a successful security program is one that spends just enough on security and not too much.  Overspending on security should no more be considered a success than failed security should (though the two aren’t equal failures).  The goal isn’t perfect security; the goal is managed security.  There is a point of diminishing returns in that spending: at some point there is just enough security.

I think of a production line manufacturing some physical widget.  While it’d be really cool to have zero defects, most businesses spend just enough to keep product defects within some tolerance level.  Translating to infosec, the goal from a business perspective is to spend enough (on security) to meet some level of business risk tolerance.  That opens up a whole different discussion that I’ll avoid for now.  But my point is that there should be a holistic view of information security.  Since protecting information is only one variable in reaching the goal of being profitable, it could easily be a good decision to increase spending and training for public relations staff to respond to any breach rather than to prevent some specific subset of breaches.  Having the goal in mind enables those kinds of flexible trade-offs.
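
To make that “just enough” point concrete, here is a toy sketch of the trade-off.  Every number in it is invented (the baseline loss, the decay rate, and the exponential loss curve are all assumptions for the illustration); the only thing it is meant to show is the shape of the argument: past some spend level, another dollar of security buys back less than a dollar of expected loss.

```python
# Toy model of "just enough" security spending -- every number here is invented.
# Assume expected annual loss falls off with spend but with diminishing returns,
# so total cost (controls + residual loss) has a sweet spot instead of
# rewarding "spend forever".

import math

BASELINE_LOSS = 1_000_000   # expected annual loss with zero security spend (assumed)
DECAY = 1 / 150_000         # how quickly spend buys down that loss (assumed)

def expected_loss(spend: float) -> float:
    """Expected annual loss remaining after spending `spend` on security."""
    return BASELINE_LOSS * math.exp(-DECAY * spend)

def total_cost(spend: float) -> float:
    """What the business actually pays: the controls plus the residual loss."""
    return spend + expected_loss(spend)

if __name__ == "__main__":
    for spend in range(0, 800_001, 100_000):
        print(f"spend ${spend:>9,}  residual loss ${expected_loss(spend):>11,.0f}"
              f"  total ${total_cost(spend):>11,.0f}")
```

Run it and the total cost bottoms out somewhere in the middle, which is the whole point: the business optimum is “enough” security, not maximum security.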

InfoSec

At most every infosec talk I go to, the goal appears to be security for the sake of security.  In other words, the goal is simply for security not to fail.  The result is that the focus shifts onto prevention, and statements of risk stop short of being meaningful.  “If X and Y happen an attacker will have an account on host Z” is a statement about security, not risk.  It’s a statement of action with an impact on security, not an impact on the broader goal.  This type of focus devalues detective controls in the overall risk/value statement (everyone creates a mental measurement of risk/value in their own head).  By the time a detective control like logging is triggered in a breach, the security failure has already occurred.  The gap is that the real reason we’re fighting, the bigger-picture goal, hasn’t yet been impacted.  However, and this is important, because the risk is perceived from a security perspective, emphasis and priorities are often misplaced.  Hence the question in the title: I don’t think we should be fighting for good security, we should be fighting for good-enough security.

Government

I think this may be a special case where the goal is in fact security, but I have very little experience here.  I won’t waste time pontificating on the goals of government.  But this type of thing factors into the discussion: if infosec in government has a different goal than in private enterprise, where are the differences and similarities?

Compliance

The simple statement “Compliance != Security” implies that the goal is security.  What are we fighting for?  It becomes pretty clear why some compliant yet “bad” security decisions were made if we consider that the goal wasn’t security.  Compliance is a business concern; its correlation to infosec is both a blessing and a curse.

Where am I heading?

So I’m seeing two major gaps as I type this.  The first is that I don’t think there is any consensus around what our goal is in information security.  My current thought is that perfect security is not the goal and that security is just a means to some other end.  I think we should be focusing on what that end is and how we define “just enough” security to meet it.  But please, help me understand that.

The second is the problem this causes, the “so what” of this post.  We lack the ability to communicate security, and consequently risk, because we’re talking apples and oranges.  I’ve been there: I’ve laid out a clear and logical case for why some security thingamabob would improve security, only to get some lame answer as to why I was shot down.  I get it now.  I wasn’t headed in the same direction as the others in the conversation.  The solution served my goal of security, not our goal of business.  Once we’re all headed towards the same goals we can align assumptions and start to have more productive discussions.

For those who’ve watched that video I linked to in the opening, I’ve got half an idea.  It’s been percolating for a long time and I can’t seem to find the trigger that unifies this mess.  I’m putting this out there to hopefully trigger a response – a “here’s what I’m fighting for” type response.  Because I think we’ve been heading in a dangerous direction focusing on security for the sake of security.

Top 5 Rules to Live By (for Infosec)

December 5, 2010

With the holidays upon us and all that happy-good-cheer crap going around, I thought I would try it and see if I couldn’t give back a little.  Perhaps I could even spark a little introspection as we look toward the new year.  Throughout the years I’ve picked up many little pearls of wisdom, and those I haven’t forgotten I’ve compiled into my top 5 rules to live by (for infosec).

Rule 1: Don’t order steak in a burger joint.

This is always my number 1 rule and comes from my father when I was growing up.  Knowing how to adjust expectations is critical, and being aware of the surroundings and everyone’s capabilities is important.  The steak reference is easy to picture and identify with, but this manifests itself daily and much more subtly.  A stone castle can’t be built out of sand, and a problem can’t be solved if people don’t see it.  It’s amazing and a little scary to realize how many mediocre burger joints there are.

Rule 2: Assume the hired help may actually want to help

Once there is awareness of the environment, understand that people generally want to do the right thing.  This is a hard thing to accept in infosec because the job is full of people making bad decisions, and it’s easy to make fun of “stupid” people and mentally stamp a FAIL on their foreheads.  But I found that if I write someone off as incompetent, I also write off the ability to learn from them.  Once I made this mental shift I was surprised at how smart people can be and how much I can learn from others – especially in their moments of failure.  Plus, most problems have a more interesting root cause than negligence, if we care to look for it.

Rule 3: Whatever you are thinking of doing, it’s probably been done before, done better, by someone smarter, and there is a book about it.

…or “Read early, read often.”  This is critical to improving and adapting.  Even if it hasn’t been done directly, someone has done something similar, perhaps in some other field.  Find out, look around, ask questions, talk to co-workers, neighbors, kids and pets.  Things worth imitating can come from weird places.  If none of that works, it’s always possible to think up security analogies that involve a home, or perhaps a car.  (Note: please refrain from disclosing home/car analogies publicly, unless it’s for a comment on Schneier’s blog.)

Rule 4: Don’t be afraid to look dumb.

Answering “I don’t know” is not only appropriate, it’s necessary.  Get out on that dance floor and shake it like you mean it.  Because hey, anyone can look good doing the robot if they commit to it.

Rule 5: Find someone to mock you.

This is invaluable.  Whether we realize it or not, infosec is a nascent field.  It’s relatively easy to look like a rock star, but detrimental to believe it.  Having someone around to bring up Rule #3 (repeatedly) is very important because it removes complacency.  There is always room for improvement.

So there we have it, the top 5 rules to live by (for infosec).  I would be interested to know what rules others come back to.  If anyone has some, send them my way, because rule 3 applies to lists of rules to live by as well.

I Know Weather, And You Sir Are No Weather

June 10, 2010

Stop for a moment and think about how much trust you have in a weather forecaster.

Our estimation of weather forecasting is influenced by a memory bias called the Von Restorff effect, which states simply that people remember things that stick out.  When a weather forecast is incorrect we may be unprepared or otherwise negatively impacted by the event, which makes it more memorable… the wedding that is rained out, the day at the park that was rescheduled and turned out to be nice, etc.  When the forecast is correct, it is a non-event.  Nobody takes notice when the expected occurs.

We have this in information security as well.  We remember the events that stick out and forget the expected, which skews our memory of events.  But that’s not what I want to talk about.  I want to talk about the weather.  Even though we measure the performance of forecasters incorrectly, predicting the weather, even with all available technology, is very, very difficult.  Because let’s face it, weather forecasters are often quite brilliant; they are usually highly educated and skilled (with a few exceptions among TV personalities).  Nobody else in that position would fare any better predicting the weekend weather.  Let’s see how this ties to measurements in our information systems.

Butterflies

Edward Lorenz was re-running a weather modeling program in 1961, keying in data points from a printout.  But rather than keying in the full “.506127”, he rounded and keyed in “.506”.  The output from this run was wildly different from his previous runs (there’s a small numerical sketch of this sensitivity after the quote below).  This one event started the wheels in motion for an awesome Ashton Kutcher film years later.  Lorenz’s takeaway about this effect:

If, then, there is any error whatever in observing the present state – and in any real system such errors seem inevitable – an acceptable prediction of the instantaneous state in the distant future may well be impossible.
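
As promised, here is a minimal sketch of that sensitivity.  To be clear about the assumptions: this is not Lorenz’s original weather program, just the classic three-variable “Lorenz ’63” system integrated with a crude fixed-step Euler loop, and the only thing it is meant to show is that starting from 0.506127 versus the rounded 0.506 eventually produces completely different trajectories.

```python
# Sketch of the sensitivity Lorenz stumbled onto: two runs of the same model,
# one started from 0.506127 and one from the rounded 0.506, drift apart.
# This is the classic three-variable Lorenz '63 system (not his original
# weather program) with a crude fixed-step Euler integrator -- illustration only.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT = 0.001  # integration step size

def step(x, y, z):
    """Advance the Lorenz '63 equations by one Euler step."""
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return x + DT * dx, y + DT * dy, z + DT * dz

def run(x0, steps):
    """Integrate from (x0, 1, 1) for `steps` steps and return the final state."""
    x, y, z = x0, 1.0, 1.0
    for _ in range(steps):
        x, y, z = step(x, y, z)
    return x, y, z

if __name__ == "__main__":
    for steps in (5_000, 15_000, 30_000):
        full = run(0.506127, steps)   # the "full" initial value
        rounded = run(0.506, steps)   # the rounded value Lorenz re-keyed
        print(f"t={steps * DT:5.1f}  x_full={full[0]:+8.3f}"
              f"  x_rounded={rounded[0]:+8.3f}  gap={abs(full[0] - rounded[0]):.3f}")
```

At small times the two runs track each other; let them run longer and the gap grows as large as the signal itself, which is exactly the behavior that surprised Lorenz.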

I want to combine Lorenz’s observation with a definition for measurement from Douglas Hubbard:

Measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.

What’s this have to do with infosec?

Hubbard is saying that measurement is a set of observations, and Lorenz is saying that any error in observation (measurement) makes an acceptable prediction improbable.  The critical thing to note here is that Lorenz is talking about weather, which is a chaotic system.  Information systems are not chaotic systems (though some wiring closets do a fine imitation).  The point is that we should, in theory, be able to see a benefit (a reduction in uncertainty) even with error in our measurements.  In other words, because we are not in a chaotic system we can deal with imperfect data.
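
Here’s a small sketch of what Hubbard’s definition buys us when the thing being measured is stable rather than chaotic.  The “true value” and the noise level are invented for the illustration (pretend we’re estimating the percentage of hosts missing a patch, one sloppy measurement at a time): noisy observations still pile up into a tighter and tighter estimate.

```python
# Sketch of Hubbard's point: in a non-chaotic setting, noisy observations still
# reduce uncertainty. The "true" value and the noise level are invented here --
# pretend we're estimating the percentage of hosts missing a patch, one sloppy
# measurement at a time.

import random
import statistics

random.seed(42)
TRUE_VALUE = 37.0      # the quantity we're trying to measure (assumed)
NOISE_STDDEV = 8.0     # error on any single measurement (assumed, and large)

def noisy_measurement() -> float:
    """One imperfect observation of the true value."""
    return random.gauss(TRUE_VALUE, NOISE_STDDEV)

if __name__ == "__main__":
    samples = []
    for n in (1, 5, 25, 100, 400):
        while len(samples) < n:
            samples.append(noisy_measurement())
        estimate = statistics.mean(samples)
        stderr = NOISE_STDDEV / (n ** 0.5)   # uncertainty shrinks ~ 1/sqrt(n)
        print(f"n={n:>3}  estimate={estimate:6.2f} +/- {stderr:4.1f}  (true value {TRUE_VALUE})")
```

Each individual reading can be off by quite a lot, but the uncertainty in the combined estimate keeps shrinking, roughly with the square root of the number of observations; in a chaotic system that same initial error would instead get amplified.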

I have a picture in my head to describe where information systems sit on the continuum from simple to chaotic.  I see a ball in the center labeled “simple”, with flat rings circling it and expanding out from it: the first ring labeled “hard”, then “complex”, and finally “chaotic” as the outer ring.  Weather systems would be in the outer ring.  For the I.T. folks, I would put an arrow pointing at the last bit of complexity right before chaos and write “you are here” on it.  Complexity is the “edge of chaos”, and infosec is based entirely on complex systems.  Take this statement from “How Complex Systems Fail” by Richard Cook:

Catastrophe is always just around the corner.
Complex systems possess potential for catastrophic failure. … The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

Go through this exercise with me: think of the most important info system at your company (probably the email service).  Think of all the layers of availability protection piled on to keep it operational: clustering, backups, hot-swap whateverables.  Would any of the technical staff in charge of it – the subject matter experts – ever be surprised if it stopped functioning?  It shouldn’t stop, and in most cases it probably won’t, but who would actually be surprised if it did?  I think everyone in this industry knows that the phone can ring at any moment with a message of catastrophe.  As Cook stated, this attribute is a “hallmark of complex systems”.

Where now?

The first step in solving any problem is to realize there is a problem.

Just as it’s good to become aware of the cognitive biases we have towards high-profile events (like bad weather forecasts), it’s good to recognize the biases and framing errors we bring to complex technical systems.  For me, simply realizing this and writing it down is a step up from where I was yesterday.  I am beginning to grasp that there are no easy decisions – and just how much mass and momentum that statement has.
