There are very few things more valuable to me than someone constructively challenging my thoughts. I have no illusions that I’m always right, and I’m fully aware that there is always room for improvement in everything. That’s why I’m excited that lonervamp wrote up “embrace the value, any value, you can find,” providing some interesting challenges to my previous post, “Yay! we have value now!”
Overall, I’d like to think we’re more in agreement than not, but I was struck by this quote:
Truly, we will actually never get anywhere if we don’t get business leaders to say, "We were wrong," or "We need guidance." These are the same results as, "I told ya so," but a little more positive, if you ask me. But if leaders aren’t going to ever admit this, then we’re not going to get a chance to be better, so I’d say let ’em fall over.
Crazy thought here… What if they aren’t wrong? What if security folks are wrong? I’m not going to back that up with anything yet. But just stop and think for a moment, what if the decision makers have a better grasp on expected loss from security breaches than security people? What would that situation look like? What data would we expect to find to make them right and security people wrong? Why do some security people find some pleasure when large breaches occur? Stop and picture those for a while.
I don’t think anyone would say it’s that black and white, and I don’t think there is a clear right or wrong here, but I thought I’d attempt to shift perspectives, see if we could try on someone else’s shoes. I tend to think that, hands down, security people can describe the failings of security far better than any business person. However, and this is important, that’s not what matters to the business. I know that may be a bit counter-intuitive: our computer systems are compromised by the bits and bytes, and the people with the best understanding of those are the security people, so how can they not be completely right in defining what’s important? I’m not sure I can explain it, but that mentality is represented in the post that started this discussion. This sounds odd, but perhaps security practitioners know too much. Ask any security professional to identify all the ways the company could be shut down by attackers and it’d probably be hard to get them to stop. Now figure out how many companies have experienced losses anything close to those and we’ve got a very, very short list. That is probably the disconnect.
Let me try and rephrase that: while security people are shouting that our windows are susceptible to bricks thrown by anyone with an arm (which is true), leaders are looking at how often bricks are actually thrown and the expected loss when they are (which isn’t equal to the shouting, and is also true). That disconnect makes security people lose credibility (“it’s partly cloudy, why are they saying there’s a tornado?”) and vice versa (“But Sony!”). I come back to this: neither side is entirely wrong, but we can’t be asking leadership to admit they’re wrong without some serious introspection first.
I’d like to clarify my point #3 too. Ask the question: how many hack-worthy targets are there? Whether explicitly or not, everyone has answered this in their head, and most everyone is probably off (including me). When we see poster children like RSA, Sony, HBGary and so on, we have to ask ourselves: how likely is it that we are next? There are a bazillion variables in that question, but let’s just consider it as a random event (which is false, but the exercise offers some perspective). First, we have to picture “out of how many?” Definitely not more than 200 million (registered domain names), and given there are 5 million U.S. companies (1.1 million making over $1M, 7,500 making over $250M), can we take a stab at how many hack-worthy targets there are in the world? Ten thousand? Half a million? Whatever that figure is, compare it to the number of seriously impactful breaches in a year. 1? 5? 20? 30? Whatever you estimate here, it’s a small, tiny number. Let’s take the worst case of 30/7,500 (max breaches over min hack-worthy targets), which comes out to a 1 in 250 chance. That’s about the same chance that a white person in the US will die of myeloma, or that a U.S. female will die of brain cancer. It might even be safe to say that in any company, female employees will die of brain cancer more often than a major/impactful security breach will occur. Weird thought, but that’s the fun of reference data points and quick calculations.
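The worst-case math above is a one-line division; here it is as a sketch, with both inputs being the rough guesses from the paragraph, not measured data:

```python
# Back-of-the-napkin odds of a "hack-worthy" target suffering a major breach
# in a year. Both inputs are rough guesses from the post, not measurements.
impactful_breaches_per_year = 30   # worst-case estimate of major breaches
hack_worthy_targets = 7_500        # lowball estimate of hack-worthy companies

one_in_n = hack_worthy_targets / impactful_breaches_per_year
print(f"Roughly a 1 in {one_in_n:.0f} chance per year")  # → 1 in 250
```

Swap in your own estimates for the two inputs and watch how wildly the odds swing; that sensitivity is the whole point of the exercise.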
This is totally back-of-the-napkin stuff, but people do these calculations in their heads without reference data, and generally people are way off on these estimations. It’s partly why we think Sony is more applicable than it probably is (and why people buy lottery tickets). The analogy LonerVamp made about the break-ins in the neighborhood doesn’t really work: it makes the denominator too small in our heads. Neighborhoods are pictured, I’d guess, as a few dozen homes, maybe 100 max, which makes us think we’re much more likely to be the next target. Perhaps we could say, “imagine you live in a neighborhood of 10,000 houses and one of them was broken into…” (or whatever the estimate of hack-worthy targets is).
I bet there’s an interesting statistic in there, something like 63% of companies thinking they are in the top quarter of prime hack-worthy targets (yeah, I made that up; perhaps there’s some variation of the Dunning-Kruger effect for illusory hack-worthiness). Anyway, I’m cutting the rest of my points for the sake of readability. I’d love to continue this discussion, and I hope I didn’t insult lonervamp (or anyone else) in this discussion; that isn’t my intent. I’m trying to state my view of the world and hope that others can point me in whatever direction makes more sense.
Secure 360 is a two-day security conference held every May in Saint Paul, MN, and I’ve been helping with speaker selection for four years in a row. This year was different, though, because I volunteered to co-chair the program committee. We had over 130 submissions for just over 50 speaking slots and a loose committee of about 20 volunteers. We’ve had a variety of approaches over the years, but I couldn’t help thinking that there must be a better way to do it.
I decided to tap all my connections and see how many other speaker coordinators I could talk to – I mean, someone, somewhere must have “the secret”. So I hit twitter and sent emails. I made quite a few connections and got to talk to some good people. It was great to learn how most of the current conferences select their speakers, but it was disappointing to learn that none of them had anything better. Turns out there is no secret sauce, and a wet finger in the wind is about as good as we’ve got.
However, I did pick up a few nuggets here and there. I pieced some things together and came up with a process that I think worked pretty well this year. So rather than keep it a secret, I wanted to share how we selected speakers this year.
Step 1: Guiding the guesswork
Selecting speakers is mostly guesswork. Submissions come in from everywhere, and chances are good that speakers are selected solely on the material they submit, so asking the right questions and drawing information out of potential speakers is important. I’ve also learned from previous years that limiting how much is drawn out is almost as important: a minority of speakers like to publish a paper in every field, so put a hard limit on the information gathered if that’s possible.
We modified our fields slightly from previous years. We asked for a brief synopsis up front (this is what would get published in the conference material), but then we asked for a detailed outline. I was hoping for more information, and I wanted to see if we could deduce quality from that field. Honestly, the added detail only helped in about 30% of the submissions. One of the best things we did was allow up to five “learning points,” and I found myself referring to those often. More often than not, the learning points showed more of the speaker’s intentions than the verbose detailed outline. I highly suggest both, though.
We also tried accepting links to online videos. I figured that the more proficient speakers would have something online and we could watch them in action. Truth is, less than 5% of submissions used that field, and of those, I don’t think many volunteers checked the links, let alone watched the videos.
Step 2: Pre-Voting
There’s another step in there, “get a bunch of submissions,” but I’m skipping that. We’re pretty lucky that we had some programming skill behind our website, so we were able to do some good things like set up online voting. I had relatively few instructions for the voting:
- Accept a single vote (1-5) per voter, per session
- Accept a comment (140 characters) and tie to person (for questions/follow up)
- Minimize clicking by voters and display all the necessary information on one page
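The rules above can be sketched in a few lines. This is purely illustrative – the real system was a custom website, and every name here is invented:

```python
# Minimal sketch of the voting rules: one 1-5 vote per voter per session,
# plus an optional 140-character comment tied back to the voter.
# All names and structures here are invented for illustration.
votes = {}  # (voter, session_id) -> (score, comment)

def cast_vote(voter, session_id, score, comment=""):
    """Record a single 1-5 vote; re-voting overwrites the earlier vote."""
    if not 1 <= score <= 5:
        raise ValueError("score must be 1-5")
    if len(comment) > 140:
        raise ValueError("comment is limited to 140 characters")
    votes[(voter, session_id)] = (score, comment)

cast_vote("alice", "talk-42", 4, "Strong outline, thin bio")
cast_vote("alice", "talk-42", 5)    # a second vote replaces the first
print(votes[("alice", "talk-42")])  # → (5, '')
```

Keying on (voter, session) is what enforces the “single vote per voter, per session” rule without any extra bookkeeping.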
In some previous years, votes were accepted on both speaker and session; in other years, multiple votes were collected, like “relevance” and “speaker knowledge.” I highly recommend keeping the voting dead simple, and I cannot stress that enough. When it comes to step 4, the voting is purely one data point of many, and it was often overruled.
Accepting comments was a stroke of brilliance that I picked up from one of my connections. We end up doing final speaker selection in a single day, and not everyone can (or will) attend that session. I wanted to give everyone a chance to be heard, and those comments enabled input from people who were not able to get out of the office on a Friday.
In previous years we had to click around to look at the speaker bio and click back to the session information. Getting through and voting on submissions is a chore, and every extra click is compounded by the quantity of submissions – it had to be easy or people would burn out quicker and fewer votes (and comments) would come in.
Step 3: Compile the results
This was hard. We usually physically get together to pick speakers, and we need the speaker information to do that. I ended up getting back in touch with my roots and writing perl code. I got a full mysql dump of the database, and I broke about every good rule for developers to pull out and present the information I thought folks wanted. I knew this was mostly a one-shot deal (except for perhaps next year), so I wrote it quick-n-dirty. I think it took about 15-20 hours, but in reality it had to be much more. My code spit out html, which I then opened in MS Word for some final formatting.
I set up two sections in the material, “At a Glance” and “Detailed Sessions”. I wanted a way to compare sessions quickly and yet offer a reference for details. I assumed most people would stare at summary information so I tried to fit as much information as I could in there. I’ll change up some names and give an example and walk through it.
I wanted to show both the raw votes (in this case “John” got three 3’s, five 4’s and three 5’s, where 3 was “okay”) and the overall score (I weighted the votes as 1, 2, 3, 5, 8 and displayed the mean). Under the title and speaker name, I put the comments. We had several very chatty people who couldn’t make the meeting, and like I said, it was great to get their input even though they could not attend. One very useful thing I did was compile the feedback from previous years and include it here; in this case, “John” was rated in the top 25% (compared to other speakers that year). Using that historical data proved to be very, very helpful. In this example there are several good reasons to select John (that “1” on the left is his initial ranking in this category).
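The scoring described above reduces to a small function. The weights 1, 2, 3, 5, 8 come from the post; the vote counts match the “John” example:

```python
# Weighted session score: raw votes 1-5 map onto the weights 1, 2, 3, 5, 8,
# and the displayed score is the mean weight across all votes cast.
WEIGHTS = {1: 1, 2: 2, 3: 3, 4: 5, 5: 8}

def session_score(raw_votes):
    """Mean weighted score for a list of raw 1-5 votes."""
    return sum(WEIGHTS[v] for v in raw_votes) / len(raw_votes)

# "John" got three 3's, five 4's and three 5's:
john = [3] * 3 + [4] * 5 + [5] * 3
print(round(session_score(john), 2))  # → 5.27
```

The uneven weights (note the jump from 3 to 5 to 8) let strong votes pull a session up faster than middling votes can drag it down, which is presumably why a simple mean of the raw 1-5 scores wasn’t used.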
Step 4: Select Speakers
Not a whole lot of special sauce here: sit down and start picking people. Some conference organizers were lucky enough to do this at a bar; we met at 8am and went to almost 4pm, so drinking during it didn’t seem like a wise option. About 12 people showed up, and we broke off into small groups to tackle the different categories, then looped back together and looked at conflicts and overlaps. All in all a good experience. I’m leaving out a lot of the details, so if you’re selecting speakers, please reach out and I’ll talk your ear off.
I’ve written about this topic at least a half dozen times now; I’ve saved each one as a draft, and I’m giving up – I’m asking for help. I was inspired to do this by the video “Where Good Ideas Come From” (Bob Blakely posted the link on twitter). I can’t find the answer to this puzzle, at least not in any meaningful way, not by myself. I’ll break down where I am in the thought process and hope that I get some feedback. (Note: for the purpose of this discussion, I’m using “security” to mean the group of people and technology intended to protect assets.)
The goal of business is pretty well understood. For-profit companies are after profit and all the things that affect it, with reputation and customer confidence at the top of the list for information security. From a business perspective, a successful security program is one that spends just enough on security and not too much. Spending too much on security should not be considered a success, any more than failed security should (though they aren’t equally bad). The goal isn’t perfect security; the goal is managed security. There is a point of diminishing returns in that spending – at some point there is just enough security.
I think of a production line manufacturing some physical widget. While it’d be really cool to have zero defects, most businesses spend just enough to keep product defects within some tolerance level. Translating to infosec, the goal from a business perspective is to spend enough (on security) to meet some level of business risk tolerance. That opens up a whole different discussion that I’ll avoid for now. But my point is that there should be a holistic view of information security. Since protecting information is only one variable in reaching the goal of being profitable, it could easily be a good decision to increase spending and training for public relations staff to respond to any breach, rather than to prevent a specific subset of breaches. Having the goal in mind enables those types of flexible trade-offs.
At most every infosec talk I go to, the goal appears to be security for the sake of security. In other words, the goal is to not have security fail. The result is that the focus shifts onto prevention, and statements of risk stop short of being meaningful. “If X and Y happen, an attacker will have an account on host Z” is a statement about security, not risk. It’s a statement of action with an impact on security, not an impact on the broader goal. This type of focus devalues detective controls in the overall risk/value statement (everyone creates a mental measurement of risk/value in their own head). By the time a detective control like logging is triggered in a breach, the security failure has already occurred, but the real reason we’re fighting – the bigger picture, the goal – hasn’t yet been impacted. However, and this is important, because risk is perceived from a security perspective, emphasis and priorities are often misplaced. Hence the question in the title. I don’t think we should be fighting for good security; we should be fighting for good-enough security.
I think this may be a special case where the goal is in fact security, but I have very little experience here. I won’t waste time pontificating on the goals of government, but this type of thing factors into the discussion: if infosec in government has a different goal than in private enterprise, where are the differences and similarities?
The simple statement “Compliance != Security” implies that the goal is security. What are we fighting for? It becomes pretty clear why some compliant yet “bad” security decisions were made if we consider that the goal wasn’t security. Compliance is a business concern, and its correlation to infosec is both a blessing and a curse.
Where am I heading?
So I’m seeing two major gaps as I type this. First thing is I don’t think there is any type of consensus around what our goal is in information security. My current thought is that perfect security is not the goal and that security is just a means to some other end. I think we should be focusing on where that end is and how we define “just enough” security in order to meet that. But please, help me understand that.
Second is the problem this causes, the “so what” of this post. We lack the ability to communicate security, and consequently risk, because we’re talking apples and oranges. I’ve been there: I’ve laid out a clear and logical case for why some security thingamabob would improve security, only to get some lame answer as to why I was shot down. I get it now. I wasn’t headed in the same direction as the others in the conversation. The solution served my goal of security, not our goal of business. Once we’re all headed toward the same goals, we can align assumptions and start to have more productive discussions.
For those who’ve watched that video I linked to in the opening, I’ve got half an idea. It’s been percolating for a long time and I can’t seem to find the trigger that unifies this mess. I’m putting this out there to hopefully trigger a response – a “here’s what I’m fighting for” type response. Because I think we’ve been heading in a dangerous direction focusing on security for the sake of security.
With the holidays upon us and all that happy-good-cheer crap going around, I thought I would try it and see if I couldn’t give back a little. Perhaps I could even spark a little introspection as we look toward the new year. Throughout the years I’ve picked up many little pearls of wisdom, and for those I haven’t forgotten, I’ve compiled them into my top 5 rules to live by (for infosec).
Rule 1: Don’t order steak in a burger joint.
This is always my number 1 rule and comes via my father from when I was growing up. Knowing how to adjust expectations is critical, as is being aware of the surroundings and everyone’s capabilities. The steak reference is easy to picture and identify with, but this manifests itself daily and much more subtly. A stone castle can’t be built out of sand, and a problem can’t be solved if people don’t see it. It’s amazing and a little scary to realize how many mediocre burger joints there are.
Rule 2: Assume the hired help may actually want to help.
Once there is awareness of the environment, understand that people generally want to do the right thing. This is a hard thing to accept in infosec, because the job is full of people making bad decisions and it’s easy to make fun of “stupid” people and mentally stamp a FAIL on their foreheads. But I’ve found that if I write someone off as incompetent, I also write off the ability to learn from them. Once I made this mental shift, I was surprised at how smart people can be and how much I can learn from others – especially in their moments of failure. Plus, most problems have a more interesting root cause than negligence, if we look for it.
Rule 3: Whatever you are thinking of doing it’s probably been done before, been done better, by someone smarter, and there is a book about it.
…or “Read early, read often.” This is critical to improving and adapting. Even if it hasn’t been done directly, then someone has done something similar, perhaps in some other field. Find out, look around, ask questions, talk to co-workers, neighbors, kids and pets. Sometimes finding things to imitate can come from weird places. If none of that works, it’s always possible to think up security analogies that involve a home, perhaps a car. (Note: please refrain from disclosing home/car analogies publicly, unless it’s for a comment on Schneier’s blog)
Rule 4: Don’t be afraid to look dumb.
Answering “I don’t know” is not only appropriate, it’s necessary. Get out on that dance floor and shake it like you mean it. Because hey, anyone can look good doing the robot if they commit to it.
Rule 5: Find someone to mock you.
This is invaluable. Whether we realize it or not, infosec is a nascent field. It’s relatively easy to look like a rock star, but detrimental to believe it. Having someone around to bring up Rule #3 (repeatedly) is very important because it removes complacency. There is always room for improvement.
So there we have it, the top 5 rules to live by (for infosec). I would be interested to know what rules others come back to. If anyone has some send them my way, because rule 3 does apply to lists of rules to live by.
I was reading an article in Information Week on some scary security thing, and I got to the one and only comment on the post:
Most Individuals and Orgs Enjoy "Security" as a Matter of Luck
Comment by janice33rpm Nov 16, 2010, 13:24 PM EST
I know the perception: there are so many opportunities to, well, improve our security that people think it’s a miracle a TJX-style breach hasn’t happened to them a hundred times over and that it’s only a matter of time. But the breach data paints a different story than “luck”.
As I thought about it, that word “luck” got stuck in my brain like some bad 80’s tune mentioned on twitter. I started to question what “lucky” really means. People who win while gambling could be “lucky”; lottery winners are certainly “lucky”. Let’s assume, then, that lucky means beating the odds for some favorable outcome, and unlucky means defying the odds for an unfavorable one. If my definition is correct, the statement in the comment is a paradox. “Most” of anything cannot be lucky: if most people who played poker won, then it wouldn’t be lucky to win, it would just be unlucky to lose. But I digress.
I wanted to understand just how “lucky” or “unlucky” companies are as far as security goes, so I did some research. According to Wolfram Alpha there are just over 23 million businesses in the 50 United States, and I consider being listed in something like datalossDB.org to indicate “not enjoying security” (a security fail). Using the three years from 2007-2009, I pulled the number of unique businesses from the year-end reports on datalossDB.org (321, 431 and 251). That means a registered US company has about a 1 in 68,000 chance of ending up on datalossDB.org in a year. I would not call those not listed “lucky”; that would be like saying someone is “lucky” if they don’t get dealt a straight flush in 5-card poker (about a 1 in 65,000 chance).
But this didn’t sit right with me. That is a whole lot of companies, and most of them could exist only on paper and not be on the internet. So I turned to the IRS tax stats, which showed that in 2007, 5.8 million companies filed returns. Of those, about 1 million listed zero assets, meaning they are probably not on the internet in any measurable way. Now we have a much more realistic number: 4,852,748 businesses in 2007 listed some assets to the IRS. If we assume that all the companies in datalossDB file a return, there is a 1 in 14,471 chance for a US company to suffer a PII breach in a year (and be listed in datalossDB).
Let’s put this in perspective, based on the odds in a year of a US company with assets appearing on dataloss DB being 1 in 14,471:
- If you are female, it is more likely that you’ll die in a transportation accident in a year. (1 in 10,170)
- It is more likely that a person will visit an emergency department due to an accident involving pens or pencils (1 in 13,300)
- (my favorite) It is more likely that a person will visit an emergency department due to an accident involving a grooming device (1 in 10,200)
Aside from being really curious what constitutes a grooming device, I didn’t want to stop there, so let’s remove a major chunk of companies whose reported assets were under $500,000. 3.8 million companies listed less than $500k in their returns to the IRS in 2007, which leaves 982,123 companies in the US with assets over $500k. I am just going to assume that those “small” companies aren’t showing up in the dataloss stats.
Based on being a US Company with over $500,000 in assets and appearing in dataloss DB at least once (1 in 2,928):
- It is more likely that a person will visit an emergency department due to an accident involving home power tools or saws (1 in 2,795)
- It is more likely that a Hispanic female 12 or older will be the victim of a purse-snatching or pickpocketing (1 in 2,500)
- And finally, it is more likely that a person 6 or older will participate in a non-traditional triathlon in a year (1 in 2,912)
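Every “1 in N” figure in this section is the same division: a population divided by the average number of unique breached businesses per year. A quick sketch, using the figures quoted above (rounding will differ slightly from the numbers in the post):

```python
# Rough "1 in N" odds of a US company landing in datalossDB in a year.
# Inputs are the figures quoted in the post; rounding differs slightly.
unique_breached = [321, 431, 251]  # unique businesses per year, 2007-2009
avg_per_year = sum(unique_breached) / len(unique_breached)  # ~334

populations = {
    "all US businesses": 23_000_000,
    "businesses listing assets": 4_852_748,
    "businesses with >$500k assets": 982_123,
}
for label, count in populations.items():
    print(f"{label}: about 1 in {count / avg_per_year:,.0f}")
```

Shrinking the denominator (the population you compare against) is what moves the odds from one-in-tens-of-thousands toward one-in-thousands, which is exactly the “neighborhood size” effect discussed earlier.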
Therefore, I think it’s paradoxically safe to say:
Most Individuals do not participate in a non-traditional triathlon as a Matter of Luck.
Truth is, it all comes down to probability, specifically the probability of a targeted threat event occurring. In spite of that threat event being driven by an adaptive adversary, the actions of people occur with some measurable frequency. The examples here are pretty good at explaining this point. Crimes are committed by adaptive adversaries as well, and we can see that about one out of every 2,500 Hispanic females 12 or older will experience a loss event from purse-snatching or pickpocketing per year. In spite of being able to make conscious decisions, those adversaries commit these actions with astonishing predictability. Let’s face it: while there appears to be randomness in why everyone hasn’t been pwned to the bone, the truth is in the numbers, and it’s all about understanding the probability.
Every once in a while I come across something someone has written that really pokes my brain. When that happens I become obsessed and I allow myself to be consumed by whatever Google and Wikipedia dish out, which ultimately will lead to whatever articles or books I can get my hands on. The latest poking-prose is from Alex Hutton over on the Verizon Business Security Blog in a piece titled “Evidence Based Risk Management & Applied Behavioral Analysis.” At first, I wanted to rehash what I picked up from his post, but I think I’ll talk about where I ended up with it.
To set some perspective, I want to point out that people follow some repeatable process in their decisions. However, those decisions are often not logical or rational. In reality there is a varying gap between what science or logic would tell us to do and what we, as heuristic beings, actually do. Behavioral Economics, as Alex mentioned, is a field focused on observing how we make choices within an economics frame, and attempting to map out the rationale in our choices. Most of the advances in marketing are based on this fundamental approach – figure out what sells and use it to sell. I think accounting for human behavior is so completely under-developed in security that I’ve named this blog after it.
But just focusing on behaviors is not enough; we need context, a measuring stick to compare them against. We need to know where the ideal state lies so we know how we are diverging from it. I found a quote that introduces some new terms and summarizes what I took away from Alex’s post. It’s from Stephen J. Hoch and Howard C. Kunreuther of the Wharton School, published in “Wharton on Making Decisions.” Within decision science (and I suspect most other sciences) there are three levels at which to focus the work to be done, described like this:
The approach to decision making we are taking can be viewed at three different levels – what should be done based on rational theories of choice (normative models), what is actually done by individuals and groups in practice (descriptive behavior), and how we can improve decision making based on our understanding about differences between normative models and descriptive behavior (prescriptive recommendations).
From the view at my cheap seat, we stink at all three of these in infosec. Our goal is prescriptive recommendations: we want to be able to spend just enough on security and in the right priority. Yet our established normative models and our ability to describe behavior are lacking. We are stuck with “do all of these controls” advice – without reason, without priority and without context – and it just doesn’t get applied well in practice. So let’s step back and look at our models (our theory). In order to develop better models, we need research and the feedback provided by evidence-based risk management to develop what we should be doing in a perfect world (normative models). Then we need behavioral analysis to look at what we actually do that works or doesn’t work (descriptive behavior). Because how we react to and mitigate infosec risks will diverge from a logical approach – if we are able to define what a logical approach is supposed to look like in the first place.
Once we start to refine our normative models and understand the descriptive behavior, then and only then will we be able to provide prescriptive and useful recommendations.
Every once in a while I am blessed with enough time to catch up on the writings of Gunnar Peterson, and his post “Acts of God Algorithm” points out some problems being felt in the insurance space. He ends with this little nugget:
we have a similar situation in infosec where models are predicated on inside the firewall and outside the firewall, however that model diverged from reality about 10 years ago.
One of the contributing factors to this problem is controls-based versus scenario-based assessments. We get stuck thinking that if we just look at “common” controls or follow someone else’s “best practices,” we should be good. The fundamental flaw is thinking that the rules of the game are constant, and the rules are anything but constant.
There are two constants in infosec: 1) people will always try to do things they shouldn’t and 2) everything else changes. It’s one thing to work in a field like accounting where the rules of math don’t change, 2 + 2 always equals 4. Engineers can learn the basic laws of physics and apply the same formulas they learn in school 30 years later. Those rules are relatively static. Sure, those fields have advancements, they learn how to do something better or more efficiently but the foundations they build on won’t change. We aren’t so lucky in infosec.
Controls Based Assessments
Let’s walk through a typical “risk assessment” at a very high level. This is the basic process as sold/promoted by various organizations and overpaid consultants:
- Start with list of controls (ISO/COBIT, etc) to check
- Walk down the list of controls
- When controls are insufficient, prioritize/rank/rate this “risk”
Without getting into all the problems and failures in this process, I just want to focus on the first step: start with a list of controls. In other words, assume that the rules are static. If we can figure out what stopped a person from doing something wrong in one instance, it must be good for all the others. If the rules were static, we could apply things like “strong” password rules and continue to think that our network has a perimeter. In this way, people incorrectly assume that whatever prevented people from doing bad things yesterday will perform the same way today.
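Reduced to code, the three-step process above is little more than a static loop, which is exactly the problem. The control names and statuses here are made up for illustration:

```python
# A controls-based assessment, reduced to its essence: walk a fixed list
# and flag the gaps. Control names and statuses are made up for illustration.
checklist = {
    "password policy": "sufficient",
    "perimeter firewall": "sufficient",
    "log review": "insufficient",
    "patch management": "insufficient",
}

# Steps 2-3: walk the list, then "prioritize" the gaps as risks
# (alphabetical order stands in for a real ranking here).
risks = sorted(name for name, status in checklist.items()
               if status == "insufficient")
print(risks)  # → ['log review', 'patch management']
```

Notice that nothing in the loop ever questions or updates the checklist itself; however the threat landscape shifts, the same list gets walked again.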
Relying on controls for our security approach is becoming a bigger and bigger problem. It’s getting worse because the industry is being driven more and more by compliance, and compliance is all about assessing controls. But think about this: how do we know when to update those controls? How do we know when web application firewalls may be appropriate? How are new controls ever introduced? I’m pretty sure it’s not from just looking at the list of controls.
Infosec is a chess game without structure; it’s about making decisions against a rational opponent who will adapt their actions, even change the rules, based on the decisions we just made. If we are to analyze and understand the security risks we’re facing, we have to play the game out several moves ahead. We have to think in terms of “what if?” with a little “and then?” tossed in. We can’t sit back and ask everyone to take off their shoes before allowing them to pass.
Don’t get me wrong, I don’t want to say controls-based assessments are the root of all our problems. I think of controls-based assessments like incremental system backups: great for a while, because they are efficient in both time and resources, but they get out of date, and nothing beats having a full backup every once in a while. Taking a scenario-based approach to security lets an organization reset its footing on level ground, which then allows the incremental, controls-based checks to continue adding value. The controls have to be fresh; otherwise we diverge from reality and get stuck playing today’s game by yesterday’s rules.