I’ve written about this topic at least a half dozen times now; I’ve saved each one as a draft, and now I’m giving up and asking for help. I was inspired to do this by the video “Where Good Ideas Come From” (Bob Blakely posted the link on Twitter). I can’t find the answer to this puzzle, at least not in any meaningful way, not by myself. I’ll break down where I am in the thought process and hope that I get some feedback. (Note: for the purpose of this discussion I’m using “security” to mean the group of people and technology intended to protect assets.)
The goal of business is pretty well understood. For-profit companies are after profit and all the things that affect it, with reputation and customer confidence at the top of the list for information security. From a business perspective, a successful security program spends just enough on security and not too much. Spending too much on security is a failure just as a breach is a failure (though not an equal one). The goal isn’t perfect security; the goal is managed security. There is a point of diminishing returns in that spending: at some point there is just enough security.
I think of a production line manufacturing some physical widget. While it’d be really cool to have zero defects, most businesses spend just enough to keep product defects within some tolerance level. Translating to infosec, the goal from a business perspective is to spend enough (on security) to meet some level of business risk tolerance. That opens up a whole different discussion that I’ll avoid for now. But my point is that there should be a holistic view of information security. Since protecting information is only one variable in the goal of being profitable, it could easily be a good decision to increase spending and training for public relations staff to respond to any breach rather than to prevent a specific subset of breaches. Having the goal in mind enables those kinds of flexible trade-offs.
At most every infosec talk I go to, the goal appears to be security for the sake of security. In other words, the goal is to not have security fail. The result is that the focus shifts onto prevention, and statements of risk stop short of being meaningful. “If X and Y happen, an attacker will have an account on host Z” is a statement about security, not risk. It’s a statement of action with an impact on security, not an impact on the broader goal. This type of focus devalues detective controls in the overall risk/value statement (everyone creates a mental measurement of risk/value in their own head). By the time a detective control like logging is triggered in a breach, the security breach has already occurred. The gap is that the real reason we’re fighting, the bigger picture, the goal, hasn’t yet been impacted. However, and this is important, because the risk is perceived from a security perspective, emphasis and priorities are often misplaced. Hence the question in the title. I don’t think we should be fighting for good security; we should be fighting for good-enough security.
Government may be a special case where the goal is in fact security, but I have very little experience here, so I won’t waste time pontificating on the goals of government. Still, this factors into the discussion: if infosec in government has a different goal than infosec in private enterprise, where are the differences and similarities?
The simple statement “Compliance != Security” implies that the goal is security. What are we fighting for? It becomes pretty clear why some compliant yet “bad” security decisions were made if we consider that the goal wasn’t security. Compliance is a business concern; its correlation to infosec is both a blessing and a curse.
Where am I heading?
So I’m seeing two major gaps as I type this. The first is that I don’t think there is any consensus around what our goal is in information security. My current thought is that perfect security is not the goal, and that security is just a means to some other end. I think we should be focusing on what that end is and on how we define “just enough” security to meet it. But please, help me understand this.
The second is the problem this causes, the “so what” of this post. We lack the ability to communicate security, and consequently risk, because we’re talking apples and oranges. I’ve been there: I’ve laid out a clear and logical case for why some security thingamabob would improve security, only to get some lame answer as to why I was shot down. I get it now. I wasn’t headed in the same direction as the others in the conversation. The solution served my goal of security, not our goal of business. Once we’re all headed towards the same goals, we can align assumptions and start to have more productive discussions.
For those who’ve watched the video I linked to in the opening: I’ve got half an idea. It’s been percolating for a long time and I can’t seem to find the trigger that unifies this mess. I’m putting this out there to hopefully trigger a response, a “here’s what I’m fighting for” type of response, because I think we’ve been heading in a dangerous direction by focusing on security for the sake of security.
Jeff Lowder wrote a thought-provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense,” and I wanted to get my provoked thoughts into print (and hopefully out of my head). I’m not going to disagree with Jeff for the most part; I’ve had many a forehead-palming moment seeing literal interpretations of that formula.
Threats, Vulnerabilities, Impact
As most everyone in ISRA is prone to do, I want to redefine/change those terms first and then make a point. I’d much rather focus on the point than on the terms themselves, so bear with me. In redefining these terms I don’t think I’m saying anything different from Jeff, but doing so makes them clearer in my own head as I talk about them.
In order for a risk to be realized, a force (threat) overcomes resistance (vulnerability), causing impact (bad things).
We are familiar with measuring forces and resistances (resistance is a force in the opposite direction), which is why we see another abused formula: Risk = Likelihood * Impact. Because threat and vulnerability are both forces, they are easily combined into this new “likelihood” (or whatever term represents that concept). And now here is the point:
For a statement of risk to have meaning the measurement of threat, resistance and impact cannot be combined nor simplified.
I’ll use velocity as an example. Velocity is the speed and direction of something: distinct variables that together convey what velocity is. We cannot multiply speed by direction. We cannot derive some mathematical function that simplifies speed and direction into a single number. Velocity is quite simply stated as distinct variables, and meaning is derived from the combination of them. The same is true of a measurement of risk: we cannot combine the threat and the resistance to it and still maintain our meaning.
For example, a skilled attacker applying force to a system with considerable resistance is not at all the same thing as my 8-year-old running Metasploit against an unpatched system. Yet if we attempt to combine the components, the two scenarios may end up with the same “likelihood,” even though they are very clearly different risks with different methods of reduction.
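To make that concrete, here’s a toy sketch of those two scenarios. Every number is made up for illustration; the point is only that collapsing threat and resistance into one “likelihood” hides exactly the distinction that tells you how to respond.

```python
# Toy illustration (all probabilities are invented): two very different
# risk scenarios that collapse to the same single "likelihood" number.

# Scenario A: skilled attacker (attempts likely) vs. hardened system
# (attempts rarely succeed).
threat_a, p_overcome_a = 0.9, 0.1

# Scenario B: unskilled attacker (attempts unlikely) vs. unpatched
# system (attempts often succeed).
threat_b, p_overcome_b = 0.3, 0.3

likelihood_a = threat_a * p_overcome_a
likelihood_b = threat_b * p_overcome_b

# Collapsed into one number, the scenarios look identical...
assert round(likelihood_a, 2) == round(likelihood_b, 2) == 0.09

# ...but the sensible mitigations differ: scenario A points toward
# detection and response, scenario B toward patching.
print(f"A: {likelihood_a:.2f}, B: {likelihood_b:.2f}")
```

Keeping the components separate is what lets you see that one risk is reduced by patching and the other by detection, even though the products are equal.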
On Risk Models
Since any one system has multiple risks, saying that risk components cannot be combined or simplified is problematic. Most decision makers I’ve known really like the dashboard. We want to be able to combine statements of risk with one another to create a consumable and meaningful input into the decision process. Enter the risk model. Since the relationships between risks, and between the components that make up a risk, are complex, we want to do our best to estimate or simulate their combination. If we could account for every variable and interaction we’d do so, but we can’t. So we model reality, seek out feedback, and see how it worked; then (in theory) we learn, adapt, and try modeling again.
We cannot simplify a statement of risk into a single number, but we can state the components of risk as the probability that a force will overcome resistance, with a probable impact.
We want to be aware of how the components of risk do or don’t interact, and account for that in our risk models. That’s where the secret sauce is.
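As one hedged illustration of what such a model might look like, here’s a minimal Monte Carlo sketch that keeps the components distinct (probability a force is applied, probability it overcomes resistance, a loss range) while still producing dashboard-friendly aggregate numbers. Every input is an invented assumption; real risk models are far more involved.

```python
import random

# Minimal Monte Carlo risk model (all inputs are illustrative assumptions).
# Each risk keeps its components distinct rather than pre-multiplying them.
risks = [
    {"p_attempt": 0.9, "p_overcome": 0.1, "loss": (10_000, 50_000)},
    {"p_attempt": 0.3, "p_overcome": 0.3, "loss": (1_000, 200_000)},
]

def simulate_year(risks):
    """One simulated year: sum the losses of the risks that are realized."""
    total = 0.0
    for r in risks:
        # A risk is realized only if the force is applied AND it
        # overcomes resistance; the impact is then drawn from a range.
        if random.random() < r["p_attempt"] and random.random() < r["p_overcome"]:
            total += random.uniform(*r["loss"])
    return total

random.seed(42)  # fixed seed so the sketch is reproducible
trials = 100_000
losses = sorted(simulate_year(risks) for _ in range(trials))
mean_loss = sum(losses) / trials
p95_loss = losses[int(0.95 * trials)]  # 95th-percentile annual loss

print(f"expected annual loss ~ {mean_loss:,.0f}")
print(f"95th-percentile loss ~ {p95_loss:,.0f}")
```

Because the components stay separate all the way into the simulation, you can ask the model follow-up questions a single collapsed number can’t answer, such as how the loss distribution shifts if patching lowers one risk’s `p_overcome`.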