I was reading an article in Information Week on some scary security thing, and I got to the one and only comment on the post:
Most Individuals and Orgs Enjoy "Security" as a Matter of Luck
Comment by janice33rpm Nov 16, 2010, 13:24 EST
I know the perception: there are so many opportunities to, well, improve our security that people think it’s a miracle a TJX-style breach hasn’t occurred to them a hundred times over and it’s only a matter of time. But the breach data paints a different story than “luck”.
As I thought about it, that word “luck” got stuck in my brain like some bad ’80s tune mentioned on Twitter. I started to question what “lucky” really means. People who win while gambling could be “lucky”; lottery winners certainly are. Let’s assume, then, that lucky means beating the odds for some favorable outcome, and unlucky means defying the odds for an unfavorable one. If my definition is correct, the statement in the comment is a paradox: “most” of anything cannot be lucky. If most people who played poker won, it wouldn’t be lucky to win, it would just be unlucky to lose. But I digress.
I wanted to understand just how “lucky” or “unlucky” companies are when it comes to security, so I did some research. According to Wolfram Alpha there are just over 23 million businesses in the 50 United States, and I consider being listed in something like datalossDB.org a reasonable measure of “not enjoying security” (a security fail). Using the three years 2007–2009, I pulled the number of unique businesses from the year-end reports on datalossDB.org (321, 431 and 251). That means a registered US company has about a 1 in 68,000 chance of ending up on datalossDB.org in a given year. I would not call those not listed “lucky”; that would be like saying someone is “lucky” if they don’t get dealt a straight flush in 5-card poker (about a 1 in 65,000 chance of that).
But this didn’t sit right with me. That is a whole lot of companies, and most of them could exist only on paper and never touch the internet. So I turned to the IRS tax stats, which show that in 2007, 5.8 million companies filed returns. Of those, about 1 million listed zero assets, meaning they are probably not on the internet in any measurable way. Now we have a much more realistic number: 4,852,748 businesses listed some assets to the IRS in 2007. If we assume that all the companies in datalossDB file a return, then there is about a 1 in 14,471 chance for a US company to suffer a PII breach in a year (and be listed in datalossDB).
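The arithmetic above is simple enough to sketch. The yearly datalossDB counts (321, 431, 251) and the IRS figure of 4,852,748 asset-reporting filers are from the text; the small differences between these results and the article’s quoted odds come down to rounding of the three-year average.

```python
# Re-creating the odds math: average yearly datalossDB entries divided into
# two candidate population sizes for "US companies".
breaches_per_year = [321, 431, 251]        # unique US businesses in datalossDB, 2007-2009
avg_breaches = sum(breaches_per_year) / len(breaches_per_year)   # ~334.3 per year

all_businesses = 23_000_000                # Wolfram Alpha estimate, all US businesses
filers_with_assets = 4_852_748             # 2007 IRS returns listing nonzero assets

print(f"1 in {all_businesses / avg_breaches:,.0f}")      # roughly 1 in 68,800
print(f"1 in {filers_with_assets / avg_breaches:,.0f}")  # roughly 1 in 14,500
```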
Let’s put this in perspective. Based on the odds of a US company with assets appearing in datalossDB in a year being 1 in 14,471:
- If you are female, it is more likely that you’ll die in a transportation accident in a year. (1 in 10,170)
- It is more likely that a person will visit an emergency department due to an accident involving pens or pencils (1 in 13,300)
- (my favorite) It is more likely that a person will visit an emergency department due to an accident involving a grooming device (1 in 10,200)
Aside from being really curious what constitutes a grooming device, I didn’t want to stop there, so let’s remove a major chunk of companies whose reported assets were under $500,000. 3.8 million companies listed less than $500k on their 2007 returns to the IRS, which leaves 982,123 companies in the US with assets over $500k. I am just going to assume that those “small” companies aren’t showing up in the dataloss stats.
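Same calculation, narrower denominator. Again, the slight gap between this result and the 1 in 2,928 quoted below is just rounding in the averaged breach count.

```python
# Odds for the narrowed population: companies reporting over $500k in assets.
avg_breaches = (321 + 431 + 251) / 3       # ~334.3 datalossDB entries per year
larger_companies = 982_123                 # 2007 IRS filers with assets over $500k

print(f"1 in {larger_companies / avg_breaches:,.0f}")  # close to the 1 in 2,928 quoted
```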
Based on being a US company with over $500,000 in assets and appearing in datalossDB at least once (1 in 2,928):
- It is more likely that a person will visit an emergency department due to an accident involving home power tools or saws (1 in 2,795)
- It is more likely that a Hispanic female 12 or older will be the victim of a purse-snatching or pickpocketing (1 in 2,500)
- And finally, it is more likely that a person 6 or older will participate in a non-traditional triathlon in a year (1 in 2,912)
Therefore, I think it’s paradoxically safe to say:
Most Individuals do not participate in a non-traditional triathlon as a Matter of Luck.
Truth is, it all comes down to probability, specifically the probability of a targeted threat event occurring. Even though that threat event is driven by an adaptive adversary, the actions of people occur with some measurable frequency. The examples here make this point well. Crimes are committed by adaptive adversaries too, and yet we can see that about one out of every 2,500 Hispanic females 12 or older will experience a loss event from purse-snatching or pickpocketing per year. In spite of being able to make conscious decisions, those adversaries commit these actions with astonishing predictability. Let’s face it: while there appears to be randomness in why everyone hasn’t been pwned to the bone, the truth is in the numbers, and it’s all about understanding the probability.
I’ve found several strange by-products as I’ve been evolving my risk analysis dogma. I’ve found that I’ve been challenging the traditional security dogma a whole lot more by asking “yeah but… so what?” I think this shift in my approach is best summed up by the first slide Jack Jones presented in FAIR training: “management doesn’t care about security, they care about risk.” Meaning that talking in terms of vulnerabilities found, or what-if cases of just bad security, is largely irrelevant. Whether we realize it or not, decision makers must translate that security message into a risk message, because that’s what they care about. And that’s where the disconnect occurs: the security geeks are flailing around about bad security, and the decision makers are not seeing the correlation to risk.
I feel quite fortunate that I have a guy in my leadership chain that provides instantaneous feedback on which side I’m speaking on. His feedback is through subtle body language. If I slip into talking about bad security, he’ll lean back or check papers in front of him, perhaps look around. He’ll pretty much do anything except look like he cares. Now if I start talking in terms of probabilities, loss amounts or tangible business loss scenarios his eyes are front and center. It’s a nice feedback mechanism.
Even though the catchy phrase came from FAIR, this is not an exclusively FAIR approach (though it lends itself beautifully to it); it’s a universal perspective we need to adopt. Even if the assessment is putting likelihood and impact on a high/medium/low scale, if the loss is not a tangible loss, it’s probably projecting FUD. Let me walk through an example:
The “What-If” Stolen Laptop
Here’s the scenario, a single-task tablet PC in a public (controlled) area. Not very specific, but this is how it was presented to me. The person presenting this to me was biased towards saying “no” to this new business project based on security. So the case for “no” was laid out: it was in a region with higher than average theft rates, if stolen, a skilled attacker could bypass multiple layers of controls and gain privileged information, possibly leading to a leap-frog attack back into our own network.
My first approach was to point out that the probability of these independent events all occurring is multiplicative, but that punch failed to land. So I went with it and said, “let’s assume all that lines up… So what?”
“so they could get into critical system X”
“Okay, but so what?”
“so they could access confidential data”
“okay, but so what?”
You can see the pattern here and where I was heading. After quite a few rounds I had my traditional security thinker shifting his focus from the security impact to the business impact: costs of customer notifications, credit monitoring, etc. Using a whiteboard and jotting down some wild guesses, we tossed out a range of really bloated, bad-case dollar figures to try and convert the event to a comparable unit. It was fairly obvious that even if there was a loss event, our bad-case figures weren’t scary enough to run Chicken-Little-style through the halls. But the shift we made here was to talk about this “bad” thing in terms of business risk and not bad security.
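The multiplicative point from the start of that exchange is worth making concrete. If every step in the chain has to succeed (theft, then control bypass, then pivot), and we treat the steps as independent, the chance of the whole chain is the product of the per-step probabilities. Every number below is invented for illustration, not an estimate from the scenario.

```python
# Sketch of the multiplicative-probability argument for the stolen-tablet chain.
# All three probabilities are made-up yearly figures for illustration only.
p_theft = 0.05     # tablet is stolen
p_bypass = 0.10    # thief (or buyer) defeats the layered device controls
p_pivot = 0.02     # privileged data enables a leap-frog attack into the network

p_chain = p_theft * p_bypass * p_pivot   # all must occur for the loss event
print(f"{p_chain:.5f}")                  # 0.00010 -> about 1 in 10,000
```

Each step taken alone sounds plausible; the product is what the “so what” conversation is actually about.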
What if we stopped before putting dollar figures on it? Let’s take credit monitoring. If we presented that we’d have to offer credit monitoring for some quantity of customers that still requires translation into risk. How much can we get a bulk purchase of credit monitoring for? What is the adoption rate by customers of the offer? Answering these questions not only gives the decision makers a better understanding of security risks but also gives the security practitioner an understanding of business.
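Answering those questions turns “we’d have to offer credit monitoring” into a number. A minimal sketch, with every input (customer count, adoption rate, bulk price) assumed purely for illustration:

```python
# Hypothetical translation of a credit-monitoring offer into dollars.
# None of these inputs come from the article; they are placeholders.
customers_notified = 50_000
adoption_rate = 0.15             # fraction of customers who accept the offer
bulk_price_per_person = 10.00    # assumed negotiated yearly rate, in dollars

expected_cost = customers_notified * adoption_rate * bulk_price_per_person
print(f"${expected_cost:,.0f}")  # $75,000
```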
Knife to a Gunfight
I think this is the type of thing that drives me crazy about discussing security with some “traditional” pentesters and uninitiated auditors. The word “fail” is tossed around way too easily. Even though it’s fun to slap “FAIL” on things, there is no fail, only more or less probable loss, and weak or missing controls do not a loss event make. The point is this: we cannot bring a knife to a gunfight. Wait, let me restate that: we can’t bring security to a risk discussion. We have to start asking ourselves “so what”, determining what the real loss events are and, more importantly, what they mean to the business.