Acts of Joe Algorithm
Every once in a while I am blessed with enough time to catch up on the writings of Gunnar Peterson. His post “Acts of God Algorithm” points out some problems being felt in the insurance space, and he ends with this little nugget:
we have a similar situation in infosec where models are predicated on inside the firewall and outside the firewall, however that model diverged from reality about 10 years ago.
One of the contributing factors to this problem is the reliance on controls-based rather than scenario-based assessments. We get stuck thinking that if we just check “common” controls or follow someone else’s “best practices”, we should be good. The fundamental flaw is assuming that the rules of the game are constant, and the rules are anything but constant.
There are two constants in infosec: 1) people will always try to do things they shouldn’t, and 2) everything else changes. It’s one thing to work in a field like accounting, where the rules of math don’t change: 2 + 2 always equals 4. Engineers can learn the basic laws of physics and apply the same formulas they learned in school 30 years later. Those rules are relatively static. Sure, those fields have advancements; they learn how to do something better or more efficiently, but the foundations they build on won’t change. We aren’t so lucky in infosec.
Controls Based Assessments
Let’s walk through a typical “risk assessment” at a very high level. This is the basic process as sold/promoted by various organizations and overpaid consultants:
- Start with a list of controls (ISO, COBIT, etc.) to check
- Walk down the list of controls
- When controls are insufficient, prioritize/rank/rate this “risk”
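The three steps above can be reduced to a few lines of code, which is part of the problem. Here is a minimal sketch in Python; the control names, the `implemented` flags, and the `impact` ratings are all hypothetical, invented purely to illustrate the checklist-walking loop:

```python
# A toy control checklist; every control name and rating here is made up
# for illustration, not drawn from any real framework.
CONTROLS = [
    {"control": "Password policy",          "implemented": True,  "impact": 2},
    {"control": "Perimeter firewall",       "implemented": True,  "impact": 3},
    {"control": "Web application firewall", "implemented": False, "impact": 4},
    {"control": "Log review",               "implemented": False, "impact": 2},
]

def assess(controls):
    """Walk down the list of controls; where a control is insufficient,
    record it as a 'risk' and rank the findings by impact, highest first."""
    findings = [c for c in controls if not c["implemented"]]
    return sorted(findings, key=lambda c: c["impact"], reverse=True)

for finding in assess(CONTROLS):
    print(f'{finding["control"]}: impact {finding["impact"]}')
```

Notice what the sketch takes as given: the checklist itself. Nothing in the loop can ever add a control that isn’t already on the list, which is exactly the weakness discussed below.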
Without getting into all the problems and failures in this process, I just want to focus on the first step: start with a list of controls. In other words, let’s assume that the rules are static. If we can figure out what stopped a person from doing something wrong in one instance, it must be good for all the others. If the rules were static, we could apply things like “strong” password rules and continue to think that our network has a perimeter. In this way, people incorrectly assume that whatever prevented people from doing bad things yesterday will perform the same way today.
Relying on controls for our security approach is becoming a bigger and bigger problem. It’s getting worse because the industry is driven more and more by compliance, and compliance is all about assessing controls. But think about this: how do we know when to update those controls? How do we know when web application firewalls may be appropriate? How are new controls ever introduced? I’m pretty sure it’s not from just looking at the list of controls.
Infosec is a chess game without structure: it’s about making decisions against a rational opponent who will adapt their actions, even change the rules, based on the decisions we just made. If we are to analyze and understand the security risks we’re facing, we have to play the game out several moves ahead. We have to think in terms of “what if?” with a little “and then?” tossed in. We can’t sit back and ask everyone to take off their shoes before allowing them to pass.
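To make the “what if? / and then?” idea concrete, here is a small sketch of playing the game a few moves ahead. Every move and attacker response in the `RESPONSES` table is hypothetical, chosen only to show the shape of the exercise: each defensive move prompts the question “and then what does the attacker do?”, recursively, to a fixed depth:

```python
# Hypothetical scenario tree: each move maps to plausible follow-up moves
# by the opponent ("and then?"). All entries are invented for illustration.
RESPONSES = {
    "deploy WAF": ["attack via stolen credentials", "target an API the WAF misses"],
    "attack via stolen credentials": ["pivot to internal systems"],
    "target an API the WAF misses": ["exfiltrate data"],
    "pivot to internal systems": [],
    "exfiltrate data": [],
}

def play_ahead(move, depth):
    """Enumerate counter-moves up to `depth` plies deep, returning each
    complete path through the scenario tree."""
    if depth == 0 or not RESPONSES.get(move):
        return [[move]]
    paths = []
    for response in RESPONSES[move]:
        for path in play_ahead(response, depth - 1):
            paths.append([move] + path)
    return paths

for path in play_ahead("deploy WAF", depth=3):
    print(" -> ".join(path))
```

The point of the sketch isn’t the code, it’s the posture: a scenario-based assessment starts from moves and counter-moves, not from a fixed checklist, so it can surface controls that no existing list would have suggested.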
Don’t get me wrong, I don’t want to say controls-based assessments are the root of all our problems. I think of controls-based assessments like incremental system backups. They’re great for a while because they’re effective in both time and resources, but they get out of date and inefficient, and nothing beats having a full backup every once in a while. Taking a scenario-based approach to security enables an organization to reset its footing on level ground, which then allows the incremental, controls-based checks to continue adding value. The controls have to be fresh; otherwise we diverge from reality and get stuck playing today’s game by yesterday’s rules.