
7 Steps to Risk Management Payout

March 27, 2011

I was thinking about the plethora of absolutely crappy risk management methods out there and the commonalities they all end up sharing.  I thought I’d help anyone wanting to either a) develop their own internal methodology or b) get paid for telling others how to do risk management.  For them, I have created the following approach which enables people to avoid actually learning about information risk, uncertainty, decision science, cognitive biases or anything else usually important in creating and performing risk analysis.
The beauty of this approach is that it's foundational. When people realize it's not actually helpful, it's possible to build a new-looking process by mixing up the terms, categories and breakdowns. While avoiding learning, people can stay in their comfort zone and repeat the same approach in new ways each time. Everyone will be glad for the improvements until those don't work out; then the re-inventing can start all over again, following this same approach.

Here we go, the 7 steps to a risk management payout:

Step 1: Identify something to assess. 

Be it an asset, system, process or application.  This is a good area to allow permutations in future generations of this method by creating taxonomies and overly-simplified relationships between these items.

Step 2: Take a reductionist approach

Reduce the item under assessment to an incomplete list of controls from an external source.  Ignore the concept of strong emergence because it's both too hard to explain and too hard for almost anyone else to understand, let alone believe is real.  Note: the list of controls must be from an external source because they're boring as all get-out to create from scratch, and it gives the auditor/assessor an area to tweak in future iterations as well.  Plus, if this is ever challenged, it's always possible to blame the external list of controls for being deficient.

Step 3: ~~Audit~~ Assess

Get a list of findings from the list of controls, but call them "risk items".  In future iterations it's possible to change up that term, or even to create something called a "balanced scorecard"; it doesn't matter what that is, just make something up that looks different from previous iterations and move on.  Now it's time for the real secret sauce.

Step 4: Categorize and Score (analyze)

Identify a list of categories on which to assess the findings, and score each finding in each category, either High/Medium/Low or 1-5 or something else completely irrelevant.   I suggest the following two top-level categories as a base, because they seem to capture what everyone is thinking anyway:

  1. A score based on the worst possible case that may occur; label this "impact" or "consequence" or something.  If it's possible to bankrupt the entire company, rate it high; rate it higher if it's possible to construct a really sensational chain of events leading up to the worst-case scenario.  It helps if people can picture it.  Keep in mind that it's not helpful to get caught up in probability or frequency; people will think they are being tricked with pseudo-science.
  2. A score based on media coverage; label this "likelihood" or "threat".  The more breaches in the media that can be named, the higher the score.  In this category, it helps to tie the particular finding to the breach, even if the connection is entirely speculative.

Step 5: Fake the science

Multiply, add or create a lookup table.  If a table is used, be sure to make it in color, with the scary stuff in red, and remember there is no green in risk.  If arithmetic is used, future variations could include weights or further breaking down the impact/likelihood categories.  Note: don't get tangled up with proper math at this point; just keep making stuff up, it's gotten us this far.
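For the truly committed, the "science" of Steps 4 and 5 can even be automated. Here's a tongue-in-cheek sketch; the function name, the thresholds and the color cutoffs are all invented on the spot, which, fittingly, is exactly what the method prescribes:

```python
# Satirical sketch of Steps 4-5: take two arbitrary ordinal scores,
# multiply them (the secret sauce), and look the product up in a
# color table. Per the method, there is no green in risk.
IMPACT = 5      # worst case imaginable is bankruptcy, so max it out
LIKELIHOOD = 4  # four breaches were in the news this month

def fake_the_science(impact: int, likelihood: int) -> str:
    """Turn two made-up 1-5 scores into an authoritative color."""
    score = impact * likelihood  # multiplying ordinals: don't ask why
    if score >= 15:
        return "RED"     # scary
    elif score >= 8:
        return "ORANGE"  # still scary
    else:
        return "YELLOW"  # the floor; never green

print(fake_the_science(IMPACT, LIKELIHOOD))  # -> RED
```

Note how multiplying two ordinal scales produces a number with no actual units or meaning, which is precisely the point.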

Step 6: Think Dashboard

Create categories from the output scores.  It's not important that they be accurate; just make sure the categories are described with a lot of words.  The more words that can be tossed at this section, the less likely people are to read the whole thing, making them less likely to challenge it.  Remember not to confuse decision makers with too many data points.  After all, they got to where they are because they're all idiots, right?

Step 7: Go back and add credibility

One last step.  Go back and sprinkle acronyms into the risk management process being created.  It's helpful to know what these acronyms stand for, but don't worry about what they represent; nobody else really knows either, so nobody will challenge it.  On the off chance someone does know them, just say they were more inspirational than prescriptive.  By combining two or more of these, the process won't have to look like any of them.  Here are a couple of good things to cite as feeding this process:

  • ISO-31000, nobody can argue with international standards
  • COBIT or anything even loosely tied to ISACA; they're all certified, and no, it doesn't matter that COBIT is more of a governance framework
  • AS/NZ 4360:2004, just know it’s from Australia/New Zealand
  • NIST-SP800-30 and 39, use them interchangeably
  • And finally, FAIR, because all the cool kids talk about it and it's street cred

And there ya have it: 7 steps to a successful Risk Management Methodology.  Let me know how these work out and what else can be modified so that all future promising young risk analysis upstarts can create a risk analysis approach without being confused by having to learn new things.  The real beauty here is that people can do this simple approach with whatever irrelevant background they happen to have.  Happy risking!

Categories: Humor, Risk