Archive for the ‘Humor’ Category

7 Steps to Risk Management Payout

March 27, 2011

I was thinking about the plethora of absolutely crappy risk management methods out there and the commonalities they all end up sharing.  I thought I’d help anyone wanting to either a) develop their own internal methodology or b) get paid for telling others how to do risk management.  For them, I have created the following approach which enables people to avoid actually learning about information risk, uncertainty, decision science, cognitive biases or anything else usually important in creating and performing risk analysis.
The beauty of this approach is that it’s foundational.  When people realize that it’s not actually helpful, it’s possible to build a new-looking process by mixing up the terms, categories and breakdowns.  While avoiding learning, people can stay in their comfort zone and do the same approach over and over in new ways each time.  Everyone will be glad for the improvements until those don’t work out, and then the re-inventing can start all over again, following this same approach.

Here we go, the 7 steps to a risk management payout:

Step 1: Identify something to assess

Be it an asset, system, process or application, this is a good area to allow permutations in future generations of this method by creating taxonomies and overly simplified relationships between these items.

Step 2: Take a reductionist approach

Reduce the item under assessment into an incomplete list of controls from an external source.  Ignore the concept of strong emergence because it’s both too hard to explain and too hard for almost anyone else to understand, let alone believe is real.  Note: the list of controls must be from an external source because they’re boring as all get-out to create from scratch, and it gives the auditor/assessor an area to tweak in future iterations as well.  Plus, if this is ever challenged, it’s always possible to blame the external list of controls for being deficient.

Step 3: Audit, er, Assess

Get a list of findings from the list of controls, but call them “risk items”.  In future iterations it’s possible to change up that term, or even to create something called a “balanced scorecard”; it doesn’t matter what that is, just make something up that looks different from previous iterations and move on.  Now it’s time for the real secret sauce.

Step 4: Categorize and Score (analyze)

Identify a list of categories on which to assess the findings and score each finding in each category, either High/Medium/Low or 1-5 or something else completely irrelevant.  I suggest the following two top-level categories as a base, because they seem to capture what everyone is thinking anyway:

  1. A score based on the worst possible case that may occur; label this “impact” or “consequence” or something.  If it’s possible to bankrupt the entire company, rate it high; rate it higher if it’s possible to create a really sensational chain of events that leads up to the worst-case scenario.  It helps if people can picture it.  Keep in mind that it’s not helpful to get caught up in probability or frequency; people will think they are being tricked with pseudo-science.
  2. A score based on media coverage; label this “likelihood” or “threat”.  The more breaches in the media that can be named, the higher the score.  In this category, it helps to tie the particular finding to the breach, even if it’s entirely speculative.

Step 5: Fake the science

Multiply, add or create a lookup table.  If a table is used, be sure to make it in color, with scary stuff being red, and remember there is no green color in risk.  If arithmetic is used, future variations could include weights or further breaking down the impact/likelihood categories.  Note: Don’t get tangled up with proper math at this point; just keep making stuff up, it’s gotten us this far.
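For anyone who wants to see what Steps 4 and 5 look like when someone actually types them in, here’s a minimal sketch in Python.  To be clear, every label, number and threshold below is invented on the spot for illustration, which, conveniently, is also how the real methodology works.

```python
# A toy "risk engine" in the spirit of Steps 4 and 5.
# Every scale, weight and threshold here is made up for illustration;
# that arbitrariness is exactly the point being mocked.

ORDINAL = {"Low": 1, "Medium": 3, "High": 5}  # arbitrary numbers glued onto labels


def fake_risk_score(impact: str, likelihood: str) -> dict:
    """Multiply two ordinal labels as if they were real measurements."""
    score = ORDINAL[impact] * ORDINAL[likelihood]

    # The lookup table. Remember: there is no green color in risk.
    if score >= 15:
        color = "red"
    elif score >= 5:
        color = "orange"
    else:
        color = "yellow"

    return {"score": score, "color": color}


if __name__ == "__main__":
    # "Could bankrupt the company" x "saw something like it in the news"
    print(fake_risk_score("High", "High"))   # {'score': 25, 'color': 'red'}
    print(fake_risk_score("Low", "Medium"))  # {'score': 3, 'color': 'yellow'}
```

Notice that multiplying ordinal labels produces a number that looks precise and means nothing, which is exactly why Step 5 works so well.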

Step 6: Think Dashboard

Create categories from the output scores.  It’s not important that they be accurate.  Just make sure the categories are described with a lot of words.  The more words that can be tossed at this section, the less likely people will be to read the whole thing, making them less likely to challenge it.  Remember not to confuse decision makers with too many data points.  After all, they got to where they are because they’re all idiots, right?

Step 7: Go back and add credibility

One last step.  Go back and put acronyms into the risk management process being created.  It’s helpful to know what these acronyms mean, but don’t worry about what they represent; nobody else really knows either, so nobody will challenge it.  On the off chance someone does know these, just say it was more inspirational than prescriptive.  By combining two or more of these, the process won’t have to look like any of them.  Here are a couple of good things to cite as feeding this process:

  • ISO-31000, nobody can argue with international standards
  • COBIT or anything even loosely tied to ISACA; they’re all certified, and no, it doesn’t matter that COBIT is more of a governance framework
  • AS/NZ 4360:2004, just know it’s from Australia/New Zealand
  • NIST-SP800-30 and 39, use them interchangeably
  • And finally, FAIR, because all the cool kids talk about it and it has street cred

And there ya have it: 7 steps to a successful Risk Management Methodology.  Let me know how these work out and what else can be modified, so that all future promising young risk analysis upstarts can create a risk analysis approach without being confused by having to learn new things.  The real beauty here is that people can do this simple approach with whatever irrelevant background they happen to have.  Happy risking!

Categories: Humor, Risk

Biases and Fallacies: Infosec Style

May 18, 2010

I’m starting a blog, and rather than give you some long, drawn-out intro into who I am and why I think I have something to say, I’ll just jump right in right after this short introduction: I enjoy information security, I enjoy cryptography and I enjoy discussions on risk.

On to my point here.

One of the biggest problems in security is the human element.  Wait, let me rephrase that: one of the biggest areas where I think security could be improved is in accounting for the human element.  Humans are full of logical fallacies and cognitive biases.  They are horrible at collecting, storing and retrieving information (especially sensitive information), and they stink as cryptographic processors.  Yet they are essential components and have one great quality: they can be predictable.

One of the things we can do as owner/operator of one of these machines is to understand how we work (and, conversely, how we don’t).  To that end, I’ve read (and re-read) various articles, sites and books about our biases and fallacies.  There are plenty of good sources out there, but the only way I’ve found to overcome biases and fallacies is to learn about them and try to spot them when we encounter them, and most everyone encounters them daily.  Please feel free to take some time, read through the lists and try to keep an eye out for them in your daily routine.

To kick things off, the infosec industry has some easy-to-spot biases.  We have the exposure effect of best practices, and the zero-risk bias is far too prevalent, especially in cryptography.  We have enormous problems with poorly anchored decisions (focusing on the latest media buzz-words), and we are drowning in the Von Restorff effect around sensational security breaches.  But that’s not what I’m really going to talk about.  As I was reading the Wikipedia entry on the List of Cognitive Biases, I thought some of them sounded completely made up.  That got me thinking… Hey! I could make some up too.  We must have our own biases, errors and fallacies, right?  I mean, we have enough false consensus that we can think we’re unique and have our own unique biases and fallacies, right?

After one relatively quiet evening full of self-entertainment, I have come up with my first draft of security biases and fallacies (I will spare Wikipedia my edits).  Some of these are not exclusive to infosec, but made the list anyway.  In no particular order…

Defcon Error: Thinking that there are only two modes that computers operate in: Broken-with-a-known-exploit and Broken-without-a-known-exploit.

Moscone Effect: An extension of the Defcon Error, but additionally thinking there is a product/service to help with that problem.

PEBCAK-Attribution Error: thinking that security would be easy if it weren’t for those darn users.

Pavlov’s Certificate Error: Thinking that it is somehow beneficial for users to acknowledge nothing but false-positives.

Reverse-Hawthorne Effect:  The inability to instigate change in spite of demonstrating how broken everything is, also known as the metasploit fallacy.

Underconfidence effect: The inability to instigate change in spite of rating the problem a “high”, also referred to as risk management.

The Me-Too error: Reciting the mantra “compliance isn’t security”, but then resorting to sensational stories, unfounded opinions and/or gut feel.

CISSP Error: ‘nuf said

The Cricket-Sound Error: thinking that other I.T. professionals value a secure system over an operational system.

The Not-My-Problem Problem: Thinking that administrators will weigh all the configuration options carefully before selecting the best one (rather than the first one that doesn’t fail).

Policy Fallacy:  a logical fallacy that states: Companies produce policies, employees are supposed to read policies, therefore employees understand company policies.

The Mighty Pen Fallacy: an extension of the Policy Fallacy, thinking that just by writing something into a policy (or regulation), that dog will poop gold.

Labeled-Threat bias: Thinking that a pre-existing threat must be solved through capital expenditure once it has a label, also known as the “APT bias”, more common among sales, marketing, some media and uninitiated executives.

Illusory Tweeting: thinking the same media used to instantaneously report political events and international crises around the world is also well suited for pictures of your cat.

Security-thru-Google Fallacy: I don’t have one for this but the name is here for when I think of it.

Encryption Fallacy:  if encrypting the data once is good, then encrypting it more than once must be better.

Tempest Bias: the opposite of a zero-risk error: the incorrect thinking that rot13 doesn’t have a place in corporate communications.

I know there are more…  Happy Birthday blog o’ mine.