
Archive for the ‘Psychology’ Category

A Call to Arms: It is Time to Learn Like Experts

November 23, 2011

I had an article published in the November issue of the ISSA Journal with the same name as this blog post. I've got permission to post it to a personal webpage, so it is now available here.

The article begins with a quote:

When we take action on the basis of an [untested] belief, we destroy the chance to discover whether that belief is appropriate. – Robin M. Hogarth

That quote is from his book, "Educating Intuition," and it captures the essence of what I see as the struggle in information security. We are making security decisions based on what we believe and then moving on to the Next Big Thing without seeking adequate feedback. This article is an attempt to say that whatever you think of the "quant" side of information security, it needs to be compared to what we have without quants – an intuitive approach. What I found in preparing the article is that the environment we work in is not conducive to developing a trustworthy intuition on its own. As a result, we are justified in challenging unaided opinion when it comes to risk-based decisions, and we should be building feedback loops into our environment.

Have a read.  And by all means, feedback is not only sought, it is required.

Categories: Decisions, Psychology, Risk

Improvements Lie Between Theory and Reality

October 1, 2010

Every once in a while I come across something someone has written that really pokes my brain.  When that happens I become obsessed and I allow myself to be consumed by whatever Google and Wikipedia dish out, which ultimately will lead to whatever articles or books I can get my hands on.   The latest poking-prose is from Alex Hutton over on the Verizon Business Security Blog in a piece titled “Evidence Based Risk Management & Applied Behavioral Analysis.”  At first, I wanted to rehash what I picked up from his post, but I think I’ll talk about where I ended up with it.

To set some perspective, I want to point out that people follow some repeatable process in their decisions.  However, those decisions are often not logical or rational.  In reality there is a varying gap between what science or logic would tell us to do and what we, as heuristic beings, actually do.  Behavioral Economics, as Alex mentioned, is a field focused on observing how we make choices within an economics frame, and attempting to map out the rationale in our choices.  Most of the advances in marketing are based on this fundamental approach – figure out what sells and use it to sell.   I think accounting for human behavior is so completely under-developed in security that I’ve named this blog after it.

But just focusing on behaviors is not enough; we need context, a measuring stick to compare it against. We need to know where the ideal state lies so we know how we are diverging from it. I found a quote that introduces some new terms and summarizes what I took away from Alex's post. It's from Stephen J. Hoch and Howard C. Kunreuther of the Wharton School, published in "Wharton on Making Decisions." Within decision science (and I suspect most other sciences) there are three levels at which to focus the work to be done, described like this:

The approach to decision making we are taking can be viewed at three different levels – what should be done based on rational theories of choice (normative models), what is actually done by individuals and groups in practice (descriptive behavior), and how we can improve decision making based on our understanding about differences between normative models and descriptive behavior (prescriptive recommendations).

From the view in my cheap seat, we stink at all three of these in infosec. Our goal is prescriptive recommendations: we want to be able to spend just enough on security, and in the right priority. Yet our established normative models and our ability to describe behavior are both lacking. We are stuck with "do all of these controls" advice, without reason, without priority and without context, and it just doesn't get applied well in practice. So let's step back and look at our models (our theory). In order to develop better models, we need research and the feedback provided by evidence-based risk management to establish what we should be doing in a perfect world (normative models). Then we need behavioral analysis to look at what we actually do that works or doesn't work (descriptive behavior). Because we will find that how we react to and mitigate infosec risks diverges from a logical approach, assuming we can define what a logical approach is supposed to look like in the first place.

Once we start to refine our normative models and understand the descriptive behavior, then and only then will we be able to provide prescriptive and useful recommendations.

Big Numbers Aren’t the Problem

July 29, 2010

I have two geek-loves: information risk analysis and cryptography. In many ways the two are polar opposites, because cryptography is, at its core, binary math. Data is either enciphered and deciphered correctly or it's not. Signatures either pass or fail verification. Cryptography never makes use of Monte Carlo simulations and certainly never has "medium" outcomes.

But let's be honest, that is theoretical cryptography. In the real world, cryptography is drenched in uncertainties, because the problem isn't the math. The problem is that we are implementing this math on the same foundation as the rest of our fallible security controls. Because of this shaky foundation, there is no binary pass/fail cryptography in the real world… it's all about understanding the risks within the cryptosystems.

But let me back up and talk about the (false) perceptions of the math. Cryptography deals with some really big stinking numbers, and we as human processors fail to correctly grasp these large values. One purpose I have here is to frame some of these big numbers as something we can begin to fathom. Without the key itself, breaking a modern cipher is so unlikely that it should be treated as impossible for decision-making purposes, and we should focus our attention elsewhere.

Not Just Numbers, Big F’n Numbers

When talking about large numbers and improbable events, it's natural to refer to the lottery, the "game" that has been politely described as a "tax on the mathematically challenged." At first I was thinking that people may not know the odds, because surely if people knew the chances of winning the Powerball jackpot are 1 in 195,249,054 they wouldn't donate to the cause. But that's not the case, because those odds are clearly posted. I think it's more that people can't understand what 195 million looks like. People are incapable of wrapping their heads around what that number signifies and how unlikely pulling 1 out of 195 million truly is. I think most people just hear "it's possible" and fail to comprehend the (lack of) probability.

There is a better chance of getting struck by lightning… twice, and there are plenty of other relative comparisons. What if people knew that they would have a better chance of finding a four-leaf clover on their first attempt than winning the lottery? What if I said they'd have a better chance of finding two four-leaf clovers on their first two attempts? I wonder if people would shell out a dollar for two chances at finding two four-leaf clovers in a field for an early-retirement reward.

Now what if I start talking about cryptography and change those odds to something like 1 in 340,282,366,920,938,463,463,374,607,431,768,211,456? Because those are the odds of picking the winning AES 128-bit lottery ticket: there are 2^128 possible keys. If we can't fathom 195 million, how can we possibly put that number in context? It is 39 digits long!
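As a quick sanity check (my own sketch in Python, not from the original post), the keyspace and its digit count can be computed directly:

    # AES-128 keyspace: each key is one of 2**128 equally likely values.
    aes128_keyspace = 2 ** 128
    powerball_odds = 195_249_054  # the 1-in-N jackpot odds quoted above

    print(aes128_keyspace)                    # 340282366920938463463374607431768211456
    print(len(str(aes128_keyspace)))          # 39 digits
    print(aes128_keyspace // powerball_odds)  # roughly 1.7e30 Powerball jackpots' worth of keys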

In an attempt to put this into perspective, let's assume we had access to the Ohio Supercomputer Center and the massive supercomputer there. It's capable of 75 teraflops (that's 75 trillion floating-point operations per second). Now let's pretend that we were able to start it counting (1, 2, 3 and so on) at the birth of the universe (estimated at 13.7 billion years ago), so that exactly 1 second after the big bang it had already counted to 75,000,000,000,000. Where would it be today?

86,400 seconds in a day * 366 days a year * 13.7 billion years * 75 teraflops =

32,492,016,000,000,000,000,000,000,000,000

That number is 32 digits long, not even close. Keep in mind also that this process would just count through the possible key values; it would take quite a bit more time to actually test each candidate key. I don't even want to compute the equivalent in lightning strikes. Is it enough to say it's ah, um… really stinking improbable to guess a key, or even to attempt a brute force with the processing power we have today?
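The same arithmetic as a short Python script (again my own sketch, reusing the figures from the post) makes the gap explicit:

    # Count at 75 teraflops (75e12 increments per second) for 13.7 billion
    # years of 366-day years, then compare against the AES-128 keyspace.
    seconds_per_year = 86_400 * 366
    years = 13_700_000_000
    counts_per_second = 75_000_000_000_000

    total_counted = seconds_per_year * years * counts_per_second
    keyspace = 2 ** 128

    print(total_counted)              # 32492016000000000000000000000000 (32 digits)
    print(keyspace // total_counted)  # about 10.5 million: that many ages of the universe still to go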

There is always a secret

This is one of the main reasons that managing keys, along with the other security controls, is much more important than all that math stuff. It's much more likely that an adversary will simply try to get a copy of a key, or otherwise work around the cryptography, than gain access to the Ohio supercomputer. As cryptologist Whitfield Diffie said:

“If breaking into Web sites, stealing identities or subverting critical infrastructure required breaking AES or elliptic-curve cryptosystems, we would not be complaining about cybersecurity.”

The lines between the various data protection controls eventually blur together, differentiated only by their varying degrees of obfuscation and assumptions. We've got the math, and breaking it is so improbable that we can think of it as impossible. It's the other parts of our systems that we have to focus on, like key management. But we have to realize that encryption is built on key management, and key management faces the same problems as our other security controls since it is built on exactly the same foundation. There is too much mystery and misperception around that relatively simple concept.

But it's not just encryption that we have misperceptions about; other technologies that promise to do good, like tokenization, also fall prey to this problem. All we're doing is shifting the secret around within our existing framework of controls. With encryption we shift the secret to the key, and subsequently to the access control around the key. If there is nervousness about exposing the key itself, we can supply some type of service to encrypt/decrypt, but then we're just shifting the secret to the access control on the service. Just as with a technology like tokenization, we shift the secret to the token server, and subsequently the secret gets shifted to the access control on the tokenization service. The only real difference between those is our perception of them.
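To make the "shifting the secret" idea concrete, here is a minimal envelope-encryption sketch (my illustration, not from the original post; it assumes the third-party Python cryptography package): the data is protected by a data key, the data key is wrapped by a key-encrypting key, and something still has to guard access to that last key.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Layer 1: encrypt the data with a data key.
    data_key = AESGCM.generate_key(bit_length=128)
    data_nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(data_nonce, b"cardholder data", None)

    # Layer 2: wrap the data key with a key-encrypting key (KEK).
    kek = AESGCM.generate_key(bit_length=128)
    kek_nonce = os.urandom(12)
    wrapped_key = AESGCM(kek).encrypt(kek_nonce, data_key, None)

    # The secret never disappears; it moves. Whoever can reach the KEK (or the
    # service holding it) can unwrap the data key and recover the plaintext.

Swap the KEK for a tokenization server and the shape is the same: the control that matters in the end is the access control on the last hop.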

Name Change: Behavioral Security

May 23, 2010

I'm changing the title of this blog from something I picked because I had a blank field in front of me to "Behavioral Security". Partly because it's less to type, but also because I think it is much more to the point of my thinking: information security is mostly about human behavior. In order to make improvements we need to account for the humans that think, create, install, use, analyze and break the technology and processes protecting our goody baskets.

Behavioral Security uses social, cognitive and emotional factors in understanding the decisions of individuals and institutions in the management of information.

I largely took that from the Wikipedia definition for “behavioral economics” and I think it needs some tuning, but yeah, that’s the theory I’m sticking with for now.

Biases and Fallacies: Infosec Style

May 18, 2010

I’m starting a blog and rather than give you some long drawn out intro into who I am and why I think I have something to say I’ll just jump right in right after this short introduction:  I enjoy information security, I enjoy cryptography and I enjoy discussions on risk.

On to my point here.

One of the biggest problems in security is the human element. Wait, let me rephrase that: one of the biggest areas where I think security could be improved is by accounting for the human element. Humans are full of logical fallacies and cognitive biases. They are horrible at collecting, storing and retrieving information (especially sensitive information) and they stink as cryptographic processors. Yet they are essential components and have one great quality: they can be predictable.

One of the things we can do as owner/operator of one of these machines is to understand how we work (and conversely don’t work).  To that end, I’ve read (and re-read) various articles, sites and books about our biases and fallacies.  There are plenty of good sources out there, but the only way I’ve found to overcome biases and fallacies is to learn about them and try to spot them when we encounter them, and most everyone experiences these daily.  Please feel free to take some time, read through the lists and try to keep an eye out for them in your daily routine.

To kick things off, the infosec industry has some easy-to-spot biases. We have the exposure effect of best practices, and the zero-risk bias is far too prevalent, especially in cryptography. We have enormous problems with poorly anchored decisions (focusing on the latest media buzzwords) and we are drowning in the von Restorff effect on sensational security breaches. But that's not what I'm really going to talk about. As I was reading the Wikipedia entry on the List of Cognitive Biases, I thought some of them sounded completely made up. That got me thinking… Hey! I could make some up too. We must have our own biases, errors and fallacies, right? I mean, we have enough false consensus that we can think we're unique and have our own unique biases and fallacies, right?

After one relatively quiet evening full of self-entertainment, I have come up with my first draft of security biases and fallacies (I will spare Wikipedia my edits).  Some of these are not exclusive to infosec, but made the list anyway.  In no particular order…

Defcon Error: Thinking that there are only two modes that computers operate in: Broken-with-a-known-exploit and Broken-without-a-known-exploit.

Moscone Effect: an extension of the Defcon Error, but additionally thinking there is a product/service to help with that problem.

PEBCAK-Attribution Error: thinking that security would be easy if it weren’t for those darn users.

Pavlov’s Certificate Error: Thinking that it is somehow beneficial for users to acknowledge nothing but false-positives.

Reverse-Hawthorne Effect:  The inability to instigate change in spite of demonstrating how broken everything is, also known as the metasploit fallacy.

Underconfidence effect: The inability to instigate change in spite of rating the problem a “high”, also referred to as risk management.

The Me-Too error: Reciting the mantra “compliance isn’t security”, but then resorting to sensational stories, unfounded opinions and/or gut feel.

CISSP Error: ‘nuf said

The Cricket-Sound Error: thinking that other I.T. professionals value a secure system over an operational system.

The Not-My-Problem Problem: Thinking that administrators will weigh all the configuration options carefully before selecting the best one (rather than the first one that doesn’t fail).

Policy Fallacy:  a logical fallacy that states: Companies produce policies, employees are supposed to read policies, therefore employees understand company policies.

The Mighty Pen Fallacy: an extension of the Policy Fallacy, thinking that just by writing something into a policy (or regulation), that dog will poop gold.

Labeled-Threat bias: Thinking that a pre-existing threat must be solved through capital expenditure once it has a label, also known as the "APT bias", more common among sales, marketing, some media and uninitiated executives.

Illusory Tweeting: thinking the same media used to instantaneously report political events and international crises around the world is also well suited for pictures of your cat.

Security-thru-Google Fallacy: I don’t have one for this but the name is here for when I think of it.

Encryption Fallacy:  if encrypting the data once is good, then encrypting it more than once must be better.

Tempest Bias: the opposite of a zero-risk error: the incorrect thinking that rot13 doesn’t have a place in corporate communications.

I know there are more…  Happy Birthday blog o’ mine.