There are very few things more valuable to me than someone constructively challenging my thoughts. I have no illusions that I’m always right, and I’m fully aware that there is always room for improvement in everything. That’s why I’m excited that lonervamp wrote up “embrace the value, any value, you can find,” providing some interesting challenges to my previous post, “Yay! we have value now!”
Overall, I’d like to think we’re more in agreement than not, but I was struck by this quote:
Truly, we will actually never get anywhere if we don’t get business leaders to say, "We were wrong," or "We need guidance." These are the same results as, "I told ya so," but a little more positive, if you ask me. But if leaders aren’t going to ever admit this, then we’re not going to get a chance to be better, so I’d say let ’em fall over.
Crazy thought here… What if they aren’t wrong? What if security folks are wrong? I’m not going to back that up with anything yet. But just stop and think for a moment, what if the decision makers have a better grasp on expected loss from security breaches than security people? What would that situation look like? What data would we expect to find to make them right and security people wrong? Why do some security people find some pleasure when large breaches occur? Stop and picture those for a while.
I don’t think anyone would say it’s that black and white, and I don’t think there is a clear right or wrong here, but I thought I’d attempt to shift perspectives, see if we could try on someone else’s shoes. I tend to think that, hands down, security people can describe the failings of security far better than any business person. However, and this is important, that’s not what matters to the business. I know that may be a bit counter-intuitive: our computer systems are compromised at the level of bits and bytes, and the people with the best understanding of those are the security people, so how can they not be completely right in defining what’s important? I’m not sure I can explain it, but that mentality is represented in the post that started this discussion. This sounds odd, but perhaps security practitioners know too much. Ask any security professional to identify all the ways the company could be shut down by attackers and it’d probably be hard to get them to stop. Now figure out how many companies have experienced losses anything close to those and we’ve got a very, very short list. That is probably the disconnect.
Let me try and rephrase that, while security people are shouting that our windows are susceptible to bricks being thrown by anyone with an arm (which is true), leaders are looking at how often bricks are thrown and the expected loss from it (which isn’t equal to the shouting and also true). That disconnect makes security people lose credibility (“it’s partly cloudy, why are they saying there’s a tornado?”) and vice versa (“But Sony!”). I go back to neither side is entirely wrong, but we can’t be asking leadership to admit they’re wrong without some serious introspection first.
I’d like to clarify my point #3 too. Ask the question: how many hack-worthy targets are there? Whether explicitly or not, everyone has answered this in their head, and most everyone is probably off (including me). When we see poster children like RSA, Sony, HBGary and so on, we have to ask ourselves: how likely is it that we are next? There are a bazillion variables in that question, but let’s just consider it as a random event (which is false, but the exercise offers some perspective). First, we have to picture “out of how many?” Definitely not more than 200 million (registered domain names), and given there are 5 million U.S. companies (1.1 million making over $1M, 7,500 making over $250M), can we take a stab at how many hack-worthy targets there are in the world? Ten thousand? Half a million? Whatever that figure is, compare it to the number of seriously impactful breaches in a year. 1? 5? 20? 30? Whatever you estimate here, it’s a small, tiny number. Let’s take the worst case of 30/7,500 (max breaches over min hack-worthy targets), which comes out to a 1 in 250 chance. That’s about the same chance that a white person in the U.S. will die of myeloma, or that a U.S. female will die of brain cancer. It might even be safe to say that in any company, female employees will die of brain cancer more often than a major/impactful security breach will occur. Weird thought, but that’s the fun of reference data points and quick calculations.
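For the curious, the napkin math above is trivial to run yourself. Every figure here is my rough guess from the paragraph above, not measured data:

```python
# Back-of-the-napkin breach odds. All figures are rough guesses, not data.
max_breaches_per_year = 30      # worst-case count of seriously impactful breaches
min_hackworthy_targets = 7_500  # lower bound: U.S. companies making over $250M

p_breach = max_breaches_per_year / min_hackworthy_targets
print(f"Worst-case annual chance: 1 in {round(1 / p_breach)}")  # 1 in 250
```

Swap in your own guesses for the two inputs; the point of the exercise is how small the result stays even under pessimistic assumptions.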
This is totally back-of-the-napkin stuff, but people do these calculations without reference data and in their heads, and generally people are way off on these estimations. It’s partly why we think Sony is more applicable than it probably is (and why people buy lottery tickets). The analogy LonerVamp made about the break-ins in the neighborhood doesn’t really work: it makes the denominator too small in our heads. Neighborhoods are pictured, I’d guess, as a few dozen homes, maybe 100 max, which makes us think we’re much more likely to be the next target. Perhaps we could say, “imagine you live in a neighborhood of 10,000 houses and one of them was broken into…” (or whatever the estimate of hack-worthy targets is).
I bet there’s an interesting statistic in there, that 63% of companies think they are in the top quarter of prime hack-worthy targets. (Yeah, I made that up; perhaps there’s some variation of the Dunning-Kruger effect for illusory hack-worthiness.) Anyway, I’m cutting the rest of my points for the sake of readability. I’d love to continue this discussion, and I hope I didn’t insult lonervamp (or anyone else) here, that isn’t my intent. I’m trying to state my view of the world and hope that others can point me in whatever direction makes more sense.
I haven’t written in a while, but I was moved to bang on the keyboard by a post over at Risky Biz. I don’t want to pick on the author, he’s expressing an opinion held by many security people. What I do want to talk about is the thinking behind “Why we secretly love LulzSec”. Because this type of thinking is, I have to say it: sophomoric.
Problem #1: It assumes there is some golden level of “secure enough” that everyone should aspire to. If a company doesn’t put in a moat with some type of flesh-eating animal in it, they’re a bunch of idiots and they deserve to be bullrushed, because it’s risky to not have a moat, right? Wrong. This type of thinking kills credibility and diminishes the influence infosec can have on the business (basically, it turns otherwise smart people into whiners). The result is that the good ideas of security people are dismissed and little or no progress is made, which leads to…
Problem #2: It implies that security people know the business better than the business leaders. Maybe this is caused by an availability bias, but some of the most inconsistent and irrational ranting I have seen has come from information security professionals. I haven’t seen anyone else make a fervent pitch for (what is seen as obvious) change, walk out rejected, and have no idea why. This is closely related to the first problem: this thinking implies that information security is an absolute, and that whatever the goals and objectives are for the company, they should all still want to be secure. That just isn’t reality. Risk tolerance is relative, multi-faceted, usually tied to a specific context, and really hard to communicate. I think @ristical said it best (and I’m paraphrasing) with “leadership doesn’t care about *your* risk tolerance.”
Problem #3: This won’t change most people’s opinion of the role of corporate information security. Saying “I told you so” will put you back into problem #2. It’s simple numbers. We’re pushing 200 million domain names, the U.S. has over 5 million companies, and we’re going to see a record, what, 15-20 large breaches this year? Odds are pretty good that whatever company we’re working at won’t be a victim this year. There are some flaws in this point (and exploring those flaws is where I think we can make improvements), but this is the perception of decision makers, and that brings us to the final problem with this thinking. Like hard-to-fix things such as global warming, we need tangible proof before we really believe: we fix broken stuff when the pain of not fixing something hurts more than fixing it. And let’s be honest, in the modern complex network of complex systems, fixing security is deceptively hard. It’s going to have to hurt a lot for the needle to move; the entire I.T. industry is built on our high tolerance for risk, and most companies just aren’t seeing that level of comparable pain.
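Those “simple numbers” can be run the same back-of-the-napkin way; both inputs are the ballpark guesses from the paragraph above, not measurements:

```python
# Rough odds that any one U.S. company is among this year's large breaches.
# Both figures are ballpark guesses from the text, not measurements.
us_companies = 5_000_000
large_breaches_this_year = 20  # high end of the "record 15-20" guess

odds = us_companies // large_breaches_this_year
print(f"Roughly 1 in {odds:,}")  # prints: Roughly 1 in 250,000
```

Even if the breach count is off by an order of magnitude, a decision maker looking at odds this long is not obviously being irrational.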
Problem #4: Companies are as insecure as they can be (hat tip to Marcus Ranum who I believe said this about the internet). To restate that, we’re not broken enough to change. Despite all the deficiencies in infosec and the ease with which companies can fall to script kiddies (who are now armed to the teeth), we are still functioning, we are still in business. Don’t get me wrong, the amount of resources devoted to infosec has increased exponentially in the last 15 years. Companies care about information security, but in proportion to the other types of risks they are facing as well.
Are companies blatantly vulnerable to attacks? Hellz ya. Do I secretly love LulzSec? Hellz no (aside from the joy of watching a train wreck unfold and some witty banter). I don’t see the huge momentum in information security being shifted by a “told ya so” mentality. I only see meaningful change through visibility, metrics and analysis, and even then only from within the system. Yes, companies may be technically raped in short order, but that doesn’t mean previous security decisions were bad. We didn’t necessarily make bad decisions building a house just because a tornado tore it down. Let’s keep perspective here. Whether or not Sony put on a red dress and walked around like a whore doesn’t make them any less a victim of rape, or the attackers any less criminal, and security professionals should be asking why there is a difference in risk tolerance rather than saying “I told you so.”