Big Numbers Aren’t the Problem

July 29, 2010

I have two geek-loves: information risk analysis and cryptography.  In many ways the two are polar opposites, because cryptography is, at its core, binary math functions.  Data is either enciphered and deciphered correctly or it’s not.  Signatures either pass or fail verification.  Cryptography never makes use of Monte Carlo simulations, and it certainly never has “medium” outcomes.

But let’s be honest, that is theoretical cryptography.  In the real world, cryptography is drenched in uncertainties, because the problem isn’t the math.  The problem is that we are implementing that math on the same foundation as the rest of our fallible security controls.  Because of this shaky foundation, there is no binary pass/fail cryptography in the real world… it’s all about understanding the risks within the cryptosystems.

But let me back up and talk about the (false) perceptions of the math.  Cryptography deals with some really big stinking numbers, and we as human processors fail to correctly grasp these large values.  One purpose I have here is to frame some of these big numbers as something we can begin to fathom.  Without the key itself, breaking a modern cipher is so unlikely that it should be treated as impossible in decision making, and we should focus our attention elsewhere.

Not Just Numbers, Big F’n Numbers

When talking about large numbers and improbable events, it’s natural to refer to the lottery, the “game” that has been politely called a “tax on the mathematically challenged.”  At first I thought people simply may not know the odds, because surely if they knew the chance of winning the Powerball jackpot is 1 in 195,249,054 they wouldn’t donate to the cause.  But that’s not the case, because those odds are clearly posted.  I think it’s more that people can’t picture what 195 million looks like.  We are incapable of wrapping our heads around what that number signifies and how unlikely pulling 1 out of 195 million truly is.  Most people just hear “it’s possible” and fail to comprehend the (lack of) probability.
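For the curious, here’s a quick sketch of where that 1 in 195,249,054 figure comes from, assuming the Powerball format of the time (5 white balls drawn from 59, plus 1 red Powerball from 39 — the format is my assumption here, not something spelled out above):

```python
from math import comb

# Assuming the 2010-era Powerball format: 5 white balls from a drum of 59,
# plus 1 red Powerball from a drum of 39.
white_combos = comb(59, 5)          # 5,006,386 ways to pick the white balls
tickets      = white_combos * 39    # times 39 possible red balls

print(f"1 in {tickets:,}")          # 1 in 195,249,054 -- the posted jackpot odds
```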

There is a better chance of getting struck by lightning… twice, and there are plenty of other relative comparisons.  What if people knew they had a better chance of finding a four-leaf clover on their first attempt than of winning the lottery?  What if I said they’d have a better chance of finding two four-leaf clovers on their first two attempts?  I wonder if people would still shell out a dollar for two chances at finding two four-leaf clovers in a field as an early-retirement plan.

Now what if I start talking about cryptography and change those odds to something like 1 in 340,282,366,920,938,463,463,374,607,431,768,211,456?  Because those are the odds of picking the winning AES 128-bit lottery ticket.  If we can’t fathom 195 million, how can we possibly put that number in context?  That number is 39 digits long!
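There’s no mystery to where that figure comes from: a 128-bit key has 2^128 possible values, so a blind guess is a 1-in-2^128 shot.  A couple of lines of Python show it:

```python
# An AES-128 key is 128 bits, so there are 2**128 possible keys.
keyspace = 2 ** 128

print(f"{keyspace:,}")       # 340,282,366,920,938,463,463,374,607,431,768,211,456
print(len(str(keyspace)))    # 39 digits long
```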

In an attempt to put this into perspective, let’s assume we had access to the Ohio Supercomputer Center and the massive supercomputer there.  It’s capable of 75 teraflops (that’s 75 trillion operations per second).  Now let’s pretend we were able to start it counting (1, 2, 3, and so on) at the birth of the universe (estimated at 13.7 billion years ago), so that exactly one second after the big bang it had already counted to 75,000,000,000,000.  Where would it be today?

86,400 seconds in a day * 366 days a year * 13.7 billion years * 75 teraflops =

32,492,016,000,000,000,000,000,000,000,000

That number is only 32 digits long, not even close.  Keep in mind this process would just count through the possible key values; it would take quite a bit more time to actually test each candidate key.  I don’t even want to compute the odds in terms of lightning strikes.  Is it enough to say it’s, ah, um… really stinking improbable to guess, or even to brute force with the processing power we have today?
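If you want to check my arithmetic, here’s the same back-of-the-envelope calculation as a short Python sketch, reusing the figures above (75 teraflops, a generous 366-day year, 13.7 billion years) and comparing the count against the AES-128 keyspace:

```python
# Back-of-the-envelope: a 75-teraflop machine counting one value per
# operation since the (estimated) birth of the universe.
seconds_per_year = 86_400 * 366             # seconds/day * days/year (generous 366)
years            = 13_700_000_000           # ~13.7 billion years
rate             = 75 * 10**12              # 75 trillion counts per second

counted  = seconds_per_year * years * rate  # 32,492,016,000,000,000,000,000,000,000,000
keyspace = 2 ** 128

print(f"counted so far:       {counted:,}")
print(f"fraction of keyspace: {counted / keyspace:.2e}")   # roughly 9.5e-08
```

After 13.7 billion years of counting, the machine has covered roughly one ten-millionth of the keyspace.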

There is always a secret

This is one of the main reasons that managing keys, along with the other security controls, is much more important than all that math stuff.  It’s much more likely that an adversary will simply try to get a copy of a key, or otherwise work around the cryptography, rather than gain access to the Ohio supercomputer.  As cryptographer Whitfield Diffie said:

“If breaking into Web sites, stealing identities or subverting critical infrastructure required breaking AES or elliptic-curve cryptosystems, we would not be complaining about cybersecurity.”

The lines between the various data protection controls eventually blur together, differentiated only by their varying degrees of obfuscation and assumptions.  We’ve got the math, and it’s so improbable to break it that we can treat it as impossible.  It’s the other parts of our systems that we have to focus on, like key management.  But we have to realize that encryption is built on key management, and key management faces the same problems as our other security controls, since it is built on the exact same foundation.  There is too much mystery and misperception around that relatively simple concept.

But it’s not just encryption that we have misperceptions about; other technologies that promise to do good, like tokenization, also fall prey to this problem.  All we’re doing is shifting the secret around within our existing framework of controls.  With encryption, we shift the secret to the key, and subsequently to the access control around the key.  If there is nervousness about exposing the key itself, we can stand up some type of service to encrypt and decrypt, but then we’re just shifting the secret to the access control on that service.  Likewise with a technology like tokenization: we shift the secret to the token server, and subsequently to the access control on the tokenization service.  The only real difference between these is our perception of them.