I was lucky enough last week to go through training on the FAIR methodology. After the course, I was sitting with Jack Jones and he asked me if the course met my expectations. I shook my head and said “no… it exceeded my expectations.” I wanted to write up my thoughts and experiences from the week.
Before the Training
I think it would be difficult to work in security and risk analysis and not know what FAIR is. It is talked about in blogs, mailing lists and forums. Most people treated it like it was sliced bread 2.0, although the initiated seemed overly fixated on terms and their proper use. But I didn’t go into this training naive. I already had a whole slew of mediocre risk methodologies under my belt, and I’ve read most of the usual books on risk and related topics. I thought I was ready.
Reflections on the Training
I won’t go into the details of FAIR internals. But I will tell you that I was already familiar with most of the concepts. What made an impression on me was how all of these concepts were put into practice. Catch me in person and get me talking about accuracy versus precision, or the value of subjectivity and estimation, and I probably won’t shut up.
The biggest takeaway from my FAIR training is this: FAIR is not a checklist methodology, it is a mindset. It’s a way of thinking about risk, the elements that make it up and how those interrelate. Nor is FAIR a plug-a-couple-of-numbers-in method; it requires its own way of thinking, which, as I’m learning, takes quite a bit of practice to do well.
I’ve seen a lot of hack risk methods that claim to be “based on FAIR.” Thinking it’s possible to build something by reading the white paper and various interwebs is like building a violin from a picture of a Stradivarius and calling it “based on Stradivarius.” It just can’t be done.
We, as infosec geeks, should not guess at the impact of breaches, because we stink at it. Seriously. It’s like we are kids tying our shoes for the first time: it’s only a matter of time before we realize we should stop and ask for help. The dirty little secret I learned is that there are people already in my company who are far more capable of estimating things like the cost of notifying customers or of responding to breaches. I’m told that lawyers are pretty good at estimating the costs of various legal proceedings. This realization was a big forehead-slapping “well, duh” moment for me.
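To make the idea concrete, here is a minimal sketch of what “stop guessing and ask the experts” can look like in practice. This is not the FAIR method itself, just the general quantitative habit it encourages: gather calibrated range estimates from the people who actually know (legal, PR, incident response), then combine the ranges with a simple Monte Carlo instead of picking one point guess. The cost categories and all numbers below are invented for illustration.

```python
import random

# Hypothetical range estimates from in-house experts, each given as
# (low, most_likely, high) rather than a single number. All figures
# are made up for illustration.
estimates = {
    "legal_fees":            (50_000, 120_000, 400_000),
    "customer_notification": (20_000,  60_000, 150_000),
    "incident_response":     (30_000,  80_000, 250_000),
}

def simulate_total_cost(trials=10_000, seed=42):
    """Monte Carlo over triangular distributions built from the ranges.

    Returns the median and 90th-percentile total breach cost across
    all simulated trials.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Draw one cost per category; random.triangular takes (low, high, mode).
        total = sum(rng.triangular(low, high, mode)
                    for low, mode, high in estimates.values())
        totals.append(total)
    totals.sort()
    return {
        "median": totals[trials // 2],
        "p90":    totals[int(trials * 0.90)],
    }

print(simulate_total_cost())
```

The output is a range-aware summary (“the median outcome is X, but one run in ten exceeds Y”) rather than a single false-precision number, which is exactly the accuracy-versus-precision distinction from the training.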
I ain’t done being trained. Even though the formal training is over, this shift in perspective won’t come easy (for me or my organization). I plan on leveraging the contacts I’ve made and asking a lot of questions. This stuff is not easy and I have no plans to walk it alone.
Every once in a while I come across something someone has written that really pokes my brain. When that happens I become obsessed and I allow myself to be consumed by whatever Google and Wikipedia dish out, which ultimately will lead to whatever articles or books I can get my hands on. The latest poking-prose is from Alex Hutton over on the Verizon Business Security Blog in a piece titled “Evidence Based Risk Management & Applied Behavioral Analysis.” At first, I wanted to rehash what I picked up from his post, but I think I’ll talk about where I ended up with it.
To set some perspective, I want to point out that people follow some repeatable process in their decisions. However, those decisions are often not logical or rational. In reality there is a varying gap between what science or logic would tell us to do and what we, as heuristic beings, actually do. Behavioral Economics, as Alex mentioned, is a field focused on observing how we make choices within an economics frame, and attempting to map out the rationale in our choices. Most of the advances in marketing are based on this fundamental approach – figure out what sells and use it to sell. I think accounting for human behavior is so completely under-developed in security that I’ve named this blog after it.
But just focusing on behaviors is not enough; we need context, a measuring stick to compare against. We need to know where the ideal state lies so we know how we are diverging from it. I found a quote that introduces some new terms and summarizes what I took away from Alex’s post. It’s from Stephen J. Hoch and Howard C. Kunreuther of the Wharton School, published in “Wharton on Making Decisions.” Within decision science (and I suspect most other sciences) there are three levels at which to focus the work to be done, described like this:
The approach to decision making we are taking can be viewed at three different levels – what should be done based on rational theories of choice (normative models), what is actually done by individuals and groups in practice (descriptive behavior), and how we can improve decision making based on our understanding about differences between normative models and descriptive behavior (prescriptive recommendations).
From the view in my cheap seat, we stink at all three of these in infosec. Our goal is prescriptive recommendations: we want to be able to spend just enough on security, and in the right priority. Yet our established normative models and our ability to describe behavior are lacking. We are stuck with this “do all of these controls” advice, without reason, without priority and without context. It just doesn’t get applied well in practice. So let’s step back and look at our models (our theory). In order to develop better models, we need research and the feedback provided by evidence-based risk management to develop what we should be doing in a perfect world (normative models). Then we need behavioral analysis to look at what we actually do that works or doesn’t work (descriptive behavior). Because we will find that how we react to and mitigate infosec risks diverges from a logical approach, assuming we can even define what a logical approach is supposed to look like in the first place.
Once we start to refine our normative models and understand the descriptive behavior, then and only then will we be able to provide prescriptive and useful recommendations.