
Improvements Lie Between Theory and Reality

October 1, 2010

Every once in a while I come across something someone has written that really pokes my brain.  When that happens I become obsessed and I allow myself to be consumed by whatever Google and Wikipedia dish out, which ultimately will lead to whatever articles or books I can get my hands on.   The latest poking-prose is from Alex Hutton over on the Verizon Business Security Blog in a piece titled “Evidence Based Risk Management & Applied Behavioral Analysis.”  At first, I wanted to rehash what I picked up from his post, but I think I’ll talk about where I ended up with it.

To set some perspective, I want to point out that people follow some repeatable process in their decisions.  However, those decisions are often not logical or rational.  In reality there is a varying gap between what science or logic would tell us to do and what we, as heuristic beings, actually do.  Behavioral Economics, as Alex mentioned, is a field focused on observing how we make choices within an economics frame, and attempting to map out the rationale in our choices.  Most of the advances in marketing are based on this fundamental approach – figure out what sells and use it to sell.   I think accounting for human behavior is so completely under-developed in security that I’ve named this blog after it.

But just focusing on behaviors is not enough; we need context, a measuring stick to compare it against. We need to know where the ideal state lies so we know how we are diverging from it. I found a quote that introduces some new terms and summarizes what I took away from Alex’s post. It’s from Stephen J. Hoch and Howard C. Kunreuther of the Wharton School, published in “Wharton on Making Decisions.” Within decision science (and I suspect most other sciences) there are three areas on which to focus the work to be done, described like this:

The approach to decision making we are taking can be viewed at three different levels – what should be done based on rational theories of choice (normative models), what is actually done by individuals and groups in practice (descriptive behavior), and how we can improve decision making based on our understanding about differences between normative models and descriptive behavior (prescriptive recommendations).

From the view at my cheap seat, we stink at all three of these in infosec. Our goal is prescriptive recommendations: we want to be able to spend just enough on security, and in the right priority. Yet our established normative models and our ability to describe behavior are both lacking. We are stuck with this “do all of these controls” advice, without reason, without priority and without context, and it just doesn’t get applied well in practice. So let’s go back and look at our models (our theory). In order to develop better models, we need research and the feedback provided by evidence-based risk management to define what we should be doing in a perfect world (normative models). Then we need behavioral analysis to look at what we do in reality that works or doesn’t work (descriptive behavior). We will find that how we react to and mitigate infosec risks diverges from a logical approach, if we are able to define what a logical approach is supposed to look like in the first place.

Once we start to refine our normative models and understand the descriptive behavior, then and only then will we be able to provide prescriptive and useful recommendations.
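To make the normative/descriptive gap concrete, here is a toy sketch (all numbers hypothetical, invented for illustration): a normative model ranks two risk scenarios by expected annual loss, while a common descriptive heuristic ranks them by worst-case impact, and the two rankings can disagree.

```python
# Toy illustration of normative vs. descriptive decision making.
# All probabilities and dollar figures are made up for the example.

def expected_loss(probability: float, impact: float) -> float:
    """Normative yardstick: annualized expected loss for one scenario."""
    return probability * impact

# Scenario A: frequent, small incidents. Scenario B: rare, large incident.
option_a = expected_loss(probability=0.50, impact=20_000)   # ~10,000/year
option_b = expected_loss(probability=0.01, impact=500_000)  # ~5,000/year

# Normative model: prioritize the scenario with the larger expected loss.
normative_priority = "A" if option_a > option_b else "B"

# A common descriptive heuristic: worry about the scariest worst case,
# ignoring how likely it is.
worst_case = {"A": 20_000, "B": 500_000}
descriptive_priority = max(worst_case, key=worst_case.get)

print(normative_priority, descriptive_priority)  # the two levels disagree
```

The prescriptive work lives in that disagreement: measuring it, explaining it, and deciding which recommendation actually moves people toward better choices.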

  1. alex
    October 1, 2010 at 8:19 pm

I would describe the normative > descriptive > prescriptive sequence as close to what I would call EBRM, given that the descriptive is measured accurately and the prescriptive recommendations are based on that data.

  2. October 2, 2010 at 10:10 am

    I was hoping someone would bring that up. The “Evidence” part of EBRM made me wonder if I was putting it in the right bucket. Perhaps it’s safer to say that infosec risk management has traditionally been focused on normative models (for a selected few that are coherent about it), but what EBRM does is begin to account for the feedback received by evidence from both real events and descriptive behavior. So, a good EBRM program is capable of producing the prescriptive recommendations based on both the theory and reality.
The part that hung me up was that just focusing on normative models is pointless without evidence and feedback; it would just be theory without meaning. But it is possible to create models of “perfect” action without accounting for the behavior of people (I’m thinking of utility theory).

  3. October 4, 2010 at 11:35 am

Well, I think Alex is getting closer but still no cigar. I think the second commenter (Augusto) on Alex’s post is more on the right track than he is.

Why would one bother with the measurement concepts of ABA applied to SIEM and log management in order to draw inferences about end behaviors when you can introduce high-level controls (ones that supersede or overrule all system activity) that simply govern the end behaviors directly?

However, talk of trying to “induce” behavior change does not go far enough. Why not determine trust relationships between users, roles, etc., within and between groups, partners, and so on, as well as the applicable business rules, in order to determine a range of acceptable behaviors (operational privileges)? From there it becomes more reasonable, intuitive and logical to apply controls that prevent overstepping the bounds of that range. Why is this? Because many end behaviors condense down to allow-or-deny decisions on data access or the ability to execute something. It becomes much more possible to imagine least privilege and deny-by-default (a la Ranum) at this level. Then your logging becomes a matter of confirming compliance or flagging actual attempts at unauthorized behavior.

    In terms of your post, what you discuss is from the status quo realm of discretionary access controls, or low assurance systems, trying to shape behaviors based on metrics. However, if there is anything being learned from things like APT and Stuxnet, it is the inadequacy of low assurance systems because of the absence of internal controls. High assurance requires authorization controls on a per user basis so that only allowed behaviors are actionable.

To carry your thinking further, this is prescriptive, even more so, perhaps deterministic (pre-determined). You don’t need this everywhere, but it makes sense when you are dealing with crown jewels or the keys to the kingdom.
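The deny-by-default idea in the comment above can be sketched in a few lines (a hypothetical toy policy, not any particular product’s API): access succeeds only when an explicit rule grants it, and every decision is logged, so the log either confirms compliance or flags unauthorized attempts.

```python
# Minimal deny-by-default sketch. Roles, actions and resources here are
# hypothetical; a real system would derive them from trust relationships
# and business rules rather than a hard-coded set.

ALLOWED = {                      # explicit grants: (role, action, resource)
    ("dba",     "read",  "payroll_db"),
    ("dba",     "write", "payroll_db"),
    ("auditor", "read",  "payroll_db"),
}

audit_log = []

def authorize(role: str, action: str, resource: str) -> bool:
    """Deny by default: only explicitly granted operations succeed."""
    allowed = (role, action, resource) in ALLOWED
    # Every decision is recorded, so logging confirms compliance
    # or flags attempts at unauthorized behavior.
    audit_log.append((role, action, resource, "allow" if allowed else "deny"))
    return allowed

print(authorize("auditor", "read",  "payroll_db"))  # True
print(authorize("auditor", "write", "payroll_db"))  # False, flagged as deny
```

In this framing the interesting analysis shifts from inferring behavior out of raw logs to reviewing the deny entries, which are by construction the out-of-bounds attempts.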

  4. October 6, 2010 at 7:24 pm

    @Rob, Alex was drawing more connections between SIEM/logs and ABA than I. And I don’t disagree with anything you’re saying but I think I was way more abstract than where you went.
    First, I think there is some confusion around what behavior analysis is intended to do. It is not to determine acceptable behavior at all, and it’s not really to shape behaviors based on metrics. I’ll try to toss out a definition for behavioral analysis applied to security concepts (as I see it):
    Behavioral Security uses social, cognitive and emotional factors in understanding the security-impacting decisions of individuals and institutions performing security functions, including users, operators, administrators and attackers. The field is primarily concerned with the bounds of rationality (selfishness, self-control) of security agents.
    (that could be a lot better, I quickly ripped that off from Behavioral Economics).
To me it’s much more about *understanding* behavior, especially when behavior deviates from the desirable. I’m thinking less B.F. Skinner and more Kahneman and Tversky. But they’re playing similar songs.
Second, if we were able to implement effective risk management (accounting for threats like APT and Stuxnet) we may very well understand just how much inadequacy low assurance systems have, and what the effect of that is on our goals. We may very well start to quantify the unstable conditions of our low assurance systems and justify moving to this high assurance you speak of, but that’s all within the realm of risk analysis and risk management, not behavior analysis.

  5. October 10, 2010 at 9:18 pm

    Hi Jay,

I am coming from the point of view of an end-behavior enforcer, but I see value in what you say as well, especially in trying to understand behaviors in an irrational security market. It just seems so much more direct when you can simply say these are the behaviors that are acceptable and the rest are not. However, all of this discussion has evolved from the natural history of discretionary systems.

When it comes down to secrecy/confidentiality (and integrity as well), you must be able to enforce policies, or they are worthless, and that is the realm of mandatory access controls and high assurance. I wonder, if systems had been designed to be inherently secure from the get-go, with internal controls built in, would we be having these discussions? Probably not, as the industry has grown out of the problem of not having those controls in the first place.

