I haven’t written in a while, but I was moved to bang on the keyboard by a post over at Risky Biz. I don’t want to pick on the author; he’s expressing an opinion held by many security people. What I do want to talk about is the thinking behind “Why we secretly love LulzSec”. Because this type of thinking is, I have to say it: sophomoric.
Problem #1: It assumes there is some golden level of “secure enough” that everyone should aspire to. If a company doesn’t put in a moat with some type of flesh-eating animal in it, they’re a bunch of idiots and they deserve to be bullrushed because it’s risky to not have a moat, right? Wrong. This type of thinking kills credibility and diminishes the influence infosec can have on the business (basically, it turns otherwise smart people into whiners). The result is that the good ideas of security people are dismissed and little-or-no progress is made, which leads to…
Problem #2: It implies that security people know the business better than the business leaders. Maybe this is caused by an availability bias, but some of the most inconsistent and irrational ranting I have seen has come from information security professionals. I haven’t seen anyone else make a fervent pitch for (what they see as obvious) change, walk out rejected, and have no idea why. This is closely related to the first problem – this thinking implies that information security is an absolute, and that whatever the goals and objectives are for the company, they should all still want to be secure. That just isn’t reality. Risk tolerance is relative, multi-faceted, usually tied to a specific context and really hard to communicate. I think @ristical said it best (and I’m paraphrasing) with “leadership doesn’t care about *your* risk tolerance”.
Problem #3: This won’t change most people’s opinion of the role of corporate information security. Saying “I told you so” will put you back into problem #2. It’s simple numbers. We’re pushing 200 million domain names, the U.S. has over 5 million companies, and we’re going to see a record, what, 15-20 large breaches this year? Odds are pretty good that whatever company we’re working at won’t be a victim this year. There are some flaws in this point (and exploring those flaws is where I think we can make improvements), but this is the perception of decision makers, and that brings us to the final problem with this thinking. We need tangible proof to really believe in hard-to-fix things like global warming: we fix broken stuff when the pain of not fixing it hurts more than fixing it. And let’s be honest, in the modern complex network of complex systems, fixing security is deceptively hard. It’s going to have to hurt a lot for the needle to move; the entire I.T. industry is built on our high tolerance for risk, and most companies just aren’t seeing that level of comparable pain.
Problem #4: Companies are as insecure as they can be (hat tip to Marcus Ranum, who I believe said this about the internet). To restate that: we’re not broken enough to change. Despite all the deficiencies in infosec and the ease with which companies can fall to script kiddies (who are now armed to the teeth), we are still functioning, we are still in business. Don’t get me wrong, the amount of resources devoted to infosec has increased exponentially in the last 15 years. Companies care about information security, but in proportion to the other types of risks they are facing.
Are companies blatantly vulnerable to attacks? Hellz ya. Do I secretly love LulzSec? Hellz no (aside from the joy of watching a train wreck unfold and some witty banter). I don’t see the huge momentum in information security being shifted by a “told ya so” mentality. I only see meaningful change through visibility, metrics and analysis, and even then only from within the system. Yes, companies may be technically overrun in short order, but that doesn’t mean previous security decisions were bad. We didn’t necessarily make a bad decision building a house just because a tornado tore it down. Let’s keep perspective here. However Sony dressed and wherever it walked, that doesn’t make them any less a victim or the attackers any less criminal, and security professionals should be asking why there is a difference in risk tolerance rather than saying “I told you so.”
I was thinking about the plethora of absolutely crappy risk management methods out there and the commonalities they all end up sharing. I thought I’d help anyone wanting to either a) develop their own internal methodology or b) get paid for telling others how to do risk management. For them, I have created the following approach, which enables people to avoid actually learning about information risk, uncertainty, decision science, cognitive biases or anything else usually important in creating and performing risk analysis.
The beauty of this approach is that it’s foundational. When people realize that it’s not actually helpful, it’s possible to build a new-looking process by mixing up the terms, categories and breakdowns. While avoiding learning, people can stay in their comfort zone and do the same approach over and over in new ways each time. Everyone will be glad for the improvements until those don’t work out, then the re-inventing can occur all over again following this approach.
Here we go, the 7 steps to a risk management payout:
Step 1: Identify something to assess.
Be it an asset, system, process or application. This is a good area to allow permutations in future generations of this method by creating taxonomies and overly-simplified relationships between these items.
Step 2: Take a reductionist approach
Reduce the item under assessment into an incomplete list of controls from an external source. Ignore the concept of strong emergence because it’s both too hard to explain and too hard for most anyone else to understand let alone think is real. Note: the list of controls must be from an external source because they’re boring as all get-out to create from scratch and it gives the auditor/assessor an area to tweak in future iterations as well. Plus, if this is ever challenged, it’s always possible to blame the external list of controls as being deficient.
Step 3: Generate findings
Get a list of findings from the list of controls, but call them “risk items”. In future iterations it’s possible to change up that term or even to create something called a “balanced scorecard” – doesn’t matter what that is, just make something up that looks different than previous iterations and go on. Now it’s time for the real secret sauce.
Step 4: Categorize and Score (analyze)
Identify a list of categories on which to assess the findings and score each finding based on the category, either High/Medium/Low or 1-5 or something else completely irrelevant. I suggest the following two top-level categories as a base because it seems to capture what everyone is thinking anyway:
- A score based on the worst possible case that may occur, label this “impact” or “consequence” or something. If it’s possible to bankrupt the entire company rate it high, rate it higher if it’s possible to create a really sensational chain of events that leads up to the worst-case scenario. It helps if people can picture it. Keep in mind that it’s not helpful to get caught up in probability or frequency, people will think they are being tricked with pseudo-science.
- A score based on media coverage and label this “likelihood” or “threat”. The more breaches in the media that can be named, the higher the score. In this category, it helps to tie the particular finding to the breach, even if it’s entirely speculative.
Step 5: Fake the science
Multiply, add or create a lookup table. If a table is used, be sure to make it in color, with scary stuff being red, and remember there is no green in risk. If arithmetic is used, future variations could include weights or further breaking down the impact/likelihood categories. Note: don’t get tangled up with proper math at this point, just keep making stuff up, it’s gotten us this far.
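To make the parody concrete, here is a sketch of what Step 5 usually boils down to in practice. Everything here – the labels, the thresholds, the colors – is invented for illustration, which is rather the point:

```python
# Tongue-in-cheek sketch of Step 5: "risk" as a colored lookup table.
# All labels and color assignments are made up, per the method.

# Lookup table indexed by (impact, likelihood). Note the absence of green.
MATRIX = {
    ("Low", "Low"): "Yellow",       ("Low", "Medium"): "Yellow",
    ("Low", "High"): "Orange",      ("Medium", "Low"): "Yellow",
    ("Medium", "Medium"): "Orange", ("Medium", "High"): "Red",
    ("High", "Low"): "Orange",      ("High", "Medium"): "Red",
    ("High", "High"): "Red",
}

def fake_the_science(impact: str, likelihood: str) -> str:
    """Return a scary color. No probability theory was harmed (or used)."""
    return MATRIX[(impact, likelihood)]

print(fake_the_science("High", "Medium"))  # prints "Red"
```

Swap the tuple keys for multiplication or weighted addition and you have next year’s "improved" methodology, per Step 1 of the re-invention cycle.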
Step 6: Think Dashboard
Create categories from the output scores. It’s not important that it be accurate. Just make sure the categories are described with a lot of words. The more words that can be tossed at this section, the less likely people will be to read the whole thing, making them less likely to challenge it. Remember not to confuse decision makers with too many data points. After all they got to where they are because they’re all idiots, right?
Step 7: Go back and add credibility
One last step. Go back and put acronyms into the risk management process being created. It’s helpful to know what these acronyms stand for, but don’t worry about what they represent; nobody else really knows either, so nobody will challenge it. On the off chance someone does know them, just say the source was more inspirational than prescriptive. By combining two or more of these, the process won’t have to look like any of them. Here are a couple of good things to cite as feeding this process:
- ISO-31000, nobody can argue with international standards
- COBIT or anything even loosely tied to ISACA, they’re all certified, and no, it doesn’t matter that COBIT is more of a governance framework
- AS/NZ 4360:2004, just know it’s from Australia/New Zealand
- NIST-SP800-30 and 39, use them interchangeably
- And finally, FAIR because all the cool kids talk about it and it’s street cred
And there ya have it, 7 steps to a successful Risk Management Methodology. Let me know how these work out and what else can be modified so that all future promising young risk analysis upstarts can create a risk analysis approach without being confused by having to learn new things. The real beauty here is that people can follow this simple approach with whatever irrelevant background they happen to have. Happy risking!
Secure 360 is a two-day security conference held every May in Saint Paul, MN, and I’ve been helping with the speaker selection for 4 years in a row. This year is different though, because I volunteered to co-chair the program committee. This year we had over 130 submissions for just over 50 speaking slots and a loose committee of about 20 volunteers. We’ve had a variety of approaches over the years, but I couldn’t help thinking that there must be a better way to do it.
I decided to tap all my connections and see how many other speaker coordinators I could talk to – I mean someone, somewhere must have “the secret”. So I hit twitter and sent emails. I made quite a few connections and got to talk to some good people. It was great to learn how most of the current conferences select their speakers. But it was disappointing to learn that none of them had anything better. Turns out there is no secret sauce and a wet-finger-in-the-wind is about as good as we’ve got.
However, I did pick up a few nuggets here and there. I pieced some things together and came up with a process that I think worked pretty well this year. So rather than keep it a secret, I wanted to share how we selected speakers this year.
Step 1: Guiding the guesswork
Selecting speakers is mostly guesswork. Submissions come in from everywhere, and chances are good that speakers are selected solely on the material they submit. Asking the right questions and drawing information out of potential speakers is important. I’ve also learned from previous years that limiting how much is drawn out is almost as important. A minority of speakers like to publish a paper in every field, so put a hard limit on the information gathered if that’s possible.
We modified our fields slightly from previous years. We asked for a brief synopsis up front – this is what would get published in the conference material – but then we asked for a detailed outline. I was hoping for more information, and I wanted to see if we could deduce quality from that field. Honestly, the addition of the detail only helped in about 30% of the submissions. One of the best things we did was ask for up to five “learning points”, and I found myself referring to those often. More often than not, the learning points showed more of the speaker’s intent than the verbose detailed outline. I highly suggest both though.
We also tried accepting links to online videos. I figured that the more proficient speakers would have something online and we could watch them in action. Truth is, fewer than 5% used that field, and of those, I don’t think many volunteers checked the links, let alone watched the videos.
Step 2: Pre-Voting
There’s another step in there, “get a bunch of submissions,” but I’m skipping that. We’re pretty lucky to have some programming skill behind our website, so we were able to do some good things like set up online voting. I had relatively few instructions for the voting:
- Accept a single vote (1-5) per voter, per session
- Accept a comment (140 characters) and tie to person (for questions/follow up)
- Minimize clicking by voters and display all the necessary information on one page
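Those requirements translate almost directly into table constraints. This is a hypothetical sketch (using SQLite here for a self-contained example; the actual site would have used its own MySQL schema): the UNIQUE constraint gives one vote per voter per session, and the CHECK clauses bound the 1-5 score and the 140-character comment:

```python
import sqlite3

# Hypothetical voting table; the constraints mirror the three rules above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE votes (
        voter_id   INTEGER NOT NULL,
        session_id INTEGER NOT NULL,
        score      INTEGER NOT NULL CHECK (score BETWEEN 1 AND 5),
        comment    TEXT CHECK (length(comment) <= 140),
        UNIQUE (voter_id, session_id)  -- one vote per voter, per session
    )
""")

conn.execute("INSERT INTO votes VALUES (1, 42, 4, 'Solid outline')")
try:
    # A second vote by the same voter on the same session is rejected.
    conn.execute("INSERT INTO votes VALUES (1, 42, 5, 'Changed my mind')")
except sqlite3.IntegrityError:
    print("duplicate vote rejected")
```

The one-page display requirement is a front-end concern, but keeping the schema this flat is what makes a single-query, single-page view easy to build.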
In some previous years votes were accepted on both speaker and session; in other years multiple votes were collected, like “relevance” and “speaker knowledge”. I highly recommend keeping the voting dead simple, and I cannot stress that enough. When it comes to step 4, the voting is purely one data point of many, and it was often overruled.
Accepting comments was a stroke of brilliance I picked up from my connections. We end up doing final speaker selection in a single day, and not everyone can (or will) attend that session. I wanted to give everyone a chance to be heard, and those comments enabled input from people who were not able to get out of the office on a Friday.
In previous years we had to click around to look at the speaker bio and click back to the session information. Getting through and voting on submissions is a chore. Every extra click is compounded by the quantity of submissions – it had to be easy or people would get burnt out quicker and fewer votes (and comments) would come in.
Step 3: Compile the results
This was hard. We usually get together physically to pick speakers, and we need the speaker information to do that. I ended up getting in touch with my roots and writing Perl code. I got a full MySQL dump of the database and broke about every good rule for developers to pull out and present the information I thought folks wanted. I knew this was mostly a one-shot deal (except for perhaps next year) so I wrote quick-n-dirty. I think it was about 15-20 hours, but in reality it had to be much more. My code spit out HTML, which I then opened in MS Word for some final formatting.
I set up two sections in the material, “At a Glance” and “Detailed Sessions”. I wanted a way to compare sessions quickly and yet offer a reference for details. I assumed most people would stare at summary information so I tried to fit as much information as I could in there. I’ll change up some names and give an example and walk through it.
I wanted to show both the total votes (in this case “John” got 3 3’s, 5 4’s and 3 5’s; 3 was “okay”) and the overall score (I weighted the votes as 1, 2, 3, 5, 8 and displayed the mean). Under the title and speaker name, I put the comments. We had several very chatty people who couldn’t make the meeting and, like I said, it was great to get their input even though they could not attend. One very useful thing I did was compile the feedback from previous years and include it here. In this case, “John” was rated in the top 25% (compared to other speakers that year). Using that historical data proved to be very, very helpful. In this example there are several good things to select John on (that “1” on the left is his initial ranking in this category).
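For anyone curious about the arithmetic, here is a sketch of that weighted scoring using John’s hypothetical votes. The 1-5 vote values map to the weights 1, 2, 3, 5, 8 so that high votes count disproportionately:

```python
# Weighted mean of votes: vote values 1-5 map to weights 1, 2, 3, 5, 8.
WEIGHTS = {1: 1, 2: 2, 3: 3, 4: 5, 5: 8}

def weighted_score(votes):
    """votes is a list of 1-5 vote values for one session."""
    return sum(WEIGHTS[v] for v in votes) / len(votes)

# John's votes: three 3s, five 4s, three 5s.
john = [3] * 3 + [4] * 5 + [5] * 3
print(round(weighted_score(john), 2))  # prints 5.27
```

The growing gaps between weights mean a session with a few enthusiastic 5s outranks one with uniformly lukewarm 3s and 4s, which is roughly the behavior you want when picking talks.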
Step 4: Select Speakers
Not a whole lot of special sauce here. Sit down and start picking people. Some conference organizers were lucky enough to do this at a bar. We met at 8am and went to almost 4pm, so drinking during it didn’t seem like a wise option. We had about 12 people show up, and we broke off into small groups to tackle the different categories. Then we looped back together and looked at conflicts and overlaps. All in all a good experience. I’m leaving out a lot of the details, so if you’re selecting speakers, please reach out and I’ll talk your ear off.
There are about as many definitions of risk as there are people to ask, and I’ve spent far too much energy pursuing this elusive definition. But I think I can say I’ve reached a good place. After all my reading, pontificating and discussions, I feel that I am ready to answer the deceptively simple question “how do you define risk?” with this very simple answer:
I don’t know.
Oh, I can toss things out there like “the probable frequency and probable magnitude of future loss” from the FAIR methodology. I could also wax philosophical about how I *mostly* agree with Douglas Hubbard’s well-developed definition of “a state of uncertainty where some of the possibilities involve a loss” (note: I *mostly* agree just to pretend that I know something Mr. Hubbard doesn’t).
But if I don’t know, how can I say that I’ve reached a good place pursuing a risk definition? Because I have accepted the ambiguity and I’ve realized that terminology and definitions exist simply to help communicate concepts or ideas. That’s where we should be spending our efforts, behind the definitions. In that light, I have come to believe that definitions don’t have to be 100% right, they simply have to be helpful. Take the definition of risk from ISO 31000: “the effect of uncertainty on objectives”. That sounds cool, even after thinking about it for a while, but when it comes to being helpful? Nope, not even close. I may have an objective of defining risk and I’m immersed in uncertainty but I wouldn’t call the effect of that uncertainty “risk”. If anything, that definition leaves me more confused than when I started.
There’s some good news though: the problem of defining central terms isn’t unique to risk. Take this from Melanie Mitchell:
In 2004 I organized a panel discussion on complexity at the Santa Fe Institute’s annual Complex Systems Summer School. It was a special year: 2004 marked the twentieth anniversary of the founding of the institute. The panel consisted of some of the most prominent members of the SFI faculty…all well-known scientists in fields such as physics, computer science, biology, economics and decision theory. The students at the school…were given the opportunity to ask any question of the panel. The first question was, “How do you define complexity?” Everyone on the panel laughed, because the question was at once so straightforward, so expected, and yet so difficult to answer.
She goes on in her book to say “Isaac Newton did not have a good definition of force” and “geneticists still do not agree on precisely what the term gene refers to at the molecular level.”
I take comfort in these stories, we are not unique, we are not alone.
As we move forward in the pursuit of information risk, let’s stay focused on where the real work should be done: measuring and communicating risk. Let’s put a little less effort into defining it just yet. Don’t get me wrong, definitions are helpful, but let’s not get all wrapped up in the precision of words when we’re still struggling with the concepts they are describing.
I’ve written about this topic at least a half dozen times now, I’ve saved each one as a draft and I’m giving up – I’m asking for help. I was inspired to do this by the video “Where Good Ideas Come From” (Bob Blakely posted the link on twitter). I can’t find the answer to this puzzle, at least not in any meaningful way, not by myself. I’ll break down where I am in the thought process and hope that I get some feedback. (Note: for the purpose of this discussion I’m using “security” to mean the group of people and technology intended to protect assets.)
The goal of business is pretty well understood. For-profit companies are after profit and all the things that affect it, with reputation and customer confidence at the top of the list for information security. From a business perspective, what I think is considered a successful security program is spending just enough on security and not too much. Spending too much on security should not be considered a success any more than failed security should (though they are not equal failures). The goal isn’t perfect security, the goal is managed security. There is a point of diminishing returns in that spending; at some point there is just enough security.
I think of a production line manufacturing some physical widget. While it’d be really cool to have zero defects, most businesses spend just enough to keep product defects within some tolerance level. Translating to infosec, the goal from a business perspective is to spend enough (on security) to meet some level of business risk tolerance. That opens up a whole different discussion that I’ll avoid for now. But my point is that there should be a holistic view of information security. Since protecting information is only one variable in reaching the goal of being profitable, it could easily be a good decision to increase spending and training for public relations staff to respond to any breach rather than preventing a specific subset of breaches. Having the goal in mind enables those types of flexible trade-offs.
At most every infosec talk I go to, the goal appears to be security for the sake of security. In other words, the goal is to not have security fail. The result is that the focus shifts onto prevention, and statements of risk stop short of being meaningful. “If X and Y happen, an attacker will have an account on host Z” is a statement about security, not risk. It’s a statement of action with an impact on security, not an impact on the broader goal. This type of focus devalues detective controls in the overall risk/value statement (everyone creates a mental measurement of risk/value in their own head). Once a detective control like logging is triggered in a breach, the security breach has already occurred, but the real reason we’re fighting—the bigger picture—the goal, hasn’t yet been impacted. However, and this is important, because the risk is perceived from a security perspective, emphasis and priorities are often misplaced. Hence the question in the title. I don’t think we should be fighting for good security, we should be fighting for good-enough security.
Government may be a special case where the goal is in fact security, but I have very little experience there, so I won’t waste time pontificating on the goals for government. Still, this type of thing factors into the discussion. If infosec in government has a different goal than in private enterprise, where are the differences and similarities?
The simple statement “Compliance != Security” implies that the goal is security. What are we fighting for? It becomes pretty clear why some compliant yet “bad” security decisions were made if we consider that the goal wasn’t security. Compliance is a business concern; the correlation to infosec is both a blessing and a curse.
Where am I heading?
So I’m seeing two major gaps as I type this. First, I don’t think there is any kind of consensus around what our goal is in information security. My current thought is that perfect security is not the goal and that security is just a means to some other end. I think we should be focusing on what that end is and how we define “just enough” security to meet it. But please, help me understand this.
Second is the problem this causes, the “so what” of this post. We lack the ability to communicate security, and consequently risk, because we’re talking apples and oranges. I’ve been there: I’ve laid out a clear and logical case why some security thingamabob would improve security, only to get some lame answer as to why I was shot down. I get it now. I wasn’t headed in the same direction as the others in the conversation. The solution served my goal of security, not our goal of business. Once we’re all headed towards the same goals, we can align assumptions and start to have more productive discussions.
For those who’ve watched that video I linked to in the opening, I’ve got half an idea. It’s been percolating for a long time and I can’t seem to find the trigger that unifies this mess. I’m putting this out there to hopefully trigger a response – a “here’s what I’m fighting for” type response. Because I think we’ve been heading in a dangerous direction focusing on security for the sake of security.
With the holidays upon us and all that happy-good-cheer crap going around, I thought I would try it and see if I couldn’t give back a little. Perhaps I could even spark a little introspection as we look toward the new year. Throughout the years I’ve picked up many little pearls of wisdom, and for those I haven’t forgotten, I’ve compiled them into my top 5 rules to live by (for infosec).
Rule 1: Don’t order steak in a burger joint.
This is always my number 1 rule and comes from my father while I was growing up. Knowing how to adjust expectations is critical, as is being aware of the surroundings and everyone’s capabilities. The steak reference is easy to picture and identify with, but this manifests itself daily and much more subtly. A stone castle can’t be built out of sand, and a problem can’t be solved if people don’t see it. It’s amazing and a little scary to realize how many mediocre burger joints there are.
Rule 2: Assume the hired help may actually want to help
Once there is awareness of the environment, understand that people generally want to do the right thing. This is a hard thing to accept in infosec, because the job is full of people making bad decisions, and it’s easy to make fun of “stupid” people and mentally stamp a FAIL on their foreheads. But I’ve found that if I write someone off as incompetent, I also write off the ability to learn from them. Once I made this mental shift I was surprised at how smart people can be and how much I can learn from others – especially in their moments of failure. Plus, most problems have a more interesting root cause than negligence, if we look for it.
Rule 3: Whatever you are thinking of doing it’s probably been done before, been done better, by someone smarter, and there is a book about it.
…or “Read early, read often.” This is critical to improving and adapting. Even if it hasn’t been done directly, then someone has done something similar, perhaps in some other field. Find out, look around, ask questions, talk to co-workers, neighbors, kids and pets. Sometimes finding things to imitate can come from weird places. If none of that works, it’s always possible to think up security analogies that involve a home, perhaps a car. (Note: please refrain from disclosing home/car analogies publicly, unless it’s for a comment on Schneier’s blog)
Rule 4: Don’t be afraid to look dumb.
Answering “I don’t know” is not only appropriate, it’s necessary. Get out on that dance floor and shake it like you mean it. Because hey, anyone can look good doing the robot if they commit to it.
Rule 5: Find someone to mock you.
This is invaluable. Whether we realize it or not, infosec is a nascent field. It’s relatively easy to look like a rock star, but detrimental to believe it. Having someone around to bring up Rule #3 (repeatedly) is very important because it removes complacency. There is always room for improvement.
So there we have it, the top 5 rules to live by (for infosec). I would be interested to know what rules others come back to. If anyone has some send them my way, because rule 3 does apply to lists of rules to live by.
I was reading an article in Information Week on some scary security thing, and I got to the one and only comment on the post:
Most Individuals and Orgs Enjoy "Security" as a Matter of Luck
Comment by janice33rpm Nov 16, 2010, 13:24 PM EST
I know the perception: there are so many opportunities to, well, improve our security that people think it’s a miracle a TJX-style breach hasn’t happened to them a hundred times over, and that it’s only a matter of time. But the breach data paints a different story than “luck”.
As I thought about it, that word “luck” got stuck in my brain like some bad 80’s tune mentioned on twitter. I started to question what “lucky” really means. People who win while gambling could be “lucky”; lottery winners are certainly “lucky”. Let’s assume that lucky means beating the odds for some favorable outcome, and unlucky means defying the odds for an unfavorable one. If my definition is correct, then the statement in the comment is a paradox. “Most” of anything cannot be lucky: if most people who played poker won, then it wouldn’t be lucky to win, it would just be unlucky to lose. But I digress.
I wanted to understand just how “lucky” or “unlucky” companies are as far as security goes, so I did some research. According to Wolfram Alpha there are just over 23 million businesses in the 50 United States, and I consider being listed in something like datalossDB.org to indicate “not enjoying security” (a security fail). Using three years from 2007-2009, I pulled the number of unique businesses from the year-end reports on datalossDB.org (321, 431 and 251). That means a registered US company has about a 1 in 68,000 chance of ending up on datalossDB.org in a year. I would not call those not listed “lucky”; that would be like saying someone is “lucky” if they don’t get dealt a straight flush in 5-card poker (about a 1 in 65,000 chance).
But this didn’t sit right with me. That is a whole lot of companies, and most of them could exist only on paper and not be on the internet. I turned to the IRS tax stats, which showed that in 2007, 5.8 million companies filed returns. Of those, about 1 million listed zero assets, meaning they are probably not on the internet in any measurable way. Now we have a much more realistic number: 4,852,748 businesses in 2007 listed some assets to the IRS. If we assume that all the companies in datalossDB file a return, then there is a 1 in 14,471 chance of a US company suffering a PII breach in a year (and being listed in datalossDB).
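The arithmetic is simple enough to check. This sketch reproduces it from the figures cited above; the computed odds land within a few percent of the 1-in-68,000 and 1-in-14,471 figures in the text, with the small differences coming down to rounding in the original:

```python
# Back-of-the-envelope odds from the figures cited above.
unique_breached = [321, 431, 251]   # datalossDB unique US businesses, 2007-2009
avg_per_year = sum(unique_breached) / len(unique_breached)  # ~334

all_businesses = 23_000_000         # Wolfram Alpha, 50 United States
with_assets = 4_852_748             # IRS 2007 filers listing some assets

# Roughly 1 in 68,800: the post's "about 1 in 68,000"
print(round(all_businesses / avg_per_year))
# Roughly 1 in 14,500: the post's "1 in 14,471"
print(round(with_assets / avg_per_year))
```

Either way you slice the denominator, ending up in the breach database is a rarer event than most of the everyday misfortunes listed below.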
Let’s put this in perspective. Based on the odds of a US company with assets appearing on datalossDB in a year being 1 in 14,471:
- If you are female, it is more likely that you’ll die in a transportation accident in a year. (1 in 10,170)
- It is more likely that a person will visit an emergency department due to an accident involving pens or pencils (1 in 13,300)
- (my favorite) It is more likely that a person will visit an emergency department due to an accident involving a grooming device (1 in 10,200)
Aside from being really curious about what constitutes a grooming device, I didn’t want to stop there, so let’s remove a major chunk of companies whose reported assets were under $500,000. 3.8 million companies listed less than $500k in their returns to the IRS in 2007, which leaves 982,123 companies in the US with assets over $500k. I am just going to assume that those “small” companies aren’t showing up in the dataloss stats.
Based on being a US Company with over $500,000 in assets and appearing in dataloss DB at least once (1 in 2,928):
- It is more likely that a person will visit an emergency department due to an accident involving home power tools or saws (1 in 2,795)
- It is more likely that a Hispanic female 12 or older will be the victim of a purse-snatching or pickpocketing (1 in 2,500)
- And finally, it is more likely that a person 6 or older will participate in a non-traditional triathlon in a year (1 in 2,912)
Therefore, I think it’s paradoxically safe to say:
Most Individuals do not participate in a non-traditional triathlon as a Matter of Luck.
Truth is, it all comes down to probability, specifically the probability of a targeted threat event occurring. In spite of that threat event being driven by an adaptive adversary, the actions of people occur with some measurable frequency. The examples here are pretty good at explaining this point. Crimes are committed by adaptive adversaries as well, and we can see that about one out of every 2,500 Hispanic females 12 or older will experience a loss event from purse-snatching or pickpocketing per year. In spite of being able to make conscious decisions, those adversaries commit these actions with astonishing predictability. Let’s face it: while there appears to be randomness in why everyone hasn’t been pwned to the bone, the truth is in the numbers and it’s all about understanding the probability.