I have two geek-loves, information risk analysis and cryptography. In many ways the two are polar opposites, because cryptography is, at its core, binary math. Data is either enciphered and deciphered correctly or it’s not. Signatures either pass or fail verification. Cryptography never makes use of Monte Carlo simulations and certainly never has “medium” outcomes.
But let’s be honest, that is theoretical cryptography. In the real world, cryptography is drenched in uncertainties, because the problem isn’t the math. The problem is that we are implementing this math on the same foundation as the rest of our fallible security controls. Because of this shaky foundation, there is no binary pass/fail cryptography in the real world… it’s all about understanding the risks within the cryptosystems.
But let me back up and talk about the (false) perceptions of the math. Cryptography deals with some really big stinking numbers, and we as human processors fail to correctly grasp these large values. One purpose I have here is to frame some of these big numbers into something we can begin to fathom. Without the key itself, breaking a modern cipher is so unlikely that, for decision-making purposes, it should be considered impossible, and we should focus our attention elsewhere.
Not Just Numbers, Big F’n Numbers
When talking about large numbers and improbable events, it’s natural to refer to the lottery, the “game” that has been politely called a “tax on the mathematically challenged.” At first I thought people simply didn’t know the odds, because surely if they knew the chances of winning the Powerball jackpot are 1 in 195,249,054 they wouldn’t donate to the cause. But that’s not the case, because those odds are clearly posted. I think it’s more that people can’t understand what 195 million looks like. They are incapable of wrapping their heads around what that number signifies and how unlikely pulling 1 out of 195 million truly is. Most people just hear “it’s possible” and fail to comprehend the (lack of) probability.
There is a better chance of getting struck by lightning… twice, and there are plenty of other relative comparisons. What if people knew that they would have a better chance of finding a four-leaf clover on their first attempt than of winning the lottery? What if I said they’d have a better chance of finding two four-leaf clovers on their first two attempts? I wonder if people would still shell out a dollar for two chances at finding two four-leaf clovers in a field for an early-retirement reward.
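For what it’s worth, that 1-in-195-million figure falls straight out of the combinatorics. Here is a quick sanity check in Python, assuming the Powerball rules in effect at the time (match 5 of 59 white balls plus 1 of 39 red balls):

```python
from math import comb

# Sanity check on the quoted Powerball odds, assuming the era's rules:
# match 5 of 59 white balls plus 1 of 39 red balls.
white_combinations = comb(59, 5)          # ways to pick the five white balls
jackpot_odds = white_combinations * 39    # times the 39 possible red balls

print(f"1 in {jackpot_odds:,}")           # 1 in 195,249,054
```

Same number as the posted odds, and it still doesn’t help anyone intuit how small that chance really is.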
Now what if I start talking about cryptography and change those odds to something like 1 in 340,282,366,920,938,463,463,374,607,431,768,211,456? Because those are the odds of picking the winning AES 128-bit lottery ticket (2 to the 128th power). If we can’t fathom 195 million, how can we possibly think of that number in context? That number is 39 digits long!
In an attempt to put this into perspective, let’s assume we had access to the Ohio Supercomputer Center and its massive supercomputer, capable of 75 teraflops (that’s 75 trillion operations per second). Now let’s pretend that we were able to start it counting (1, 2, 3, etc.) at the birth of the universe (estimated at 13.7 billion years ago), so that exactly 1 second after the big bang it had already counted to 75,000,000,000,000. Where would we be today?
86,400 seconds in a day × 365 days a year × 13.7 billion years × 75 trillion counts per second ≈ 32,403,240,000,000,000,000,000,000,000,000
That number is 32 digits long: not even close. Keep in mind also that this process would just count through the possible key values; actually testing each candidate key would take quite a bit more time. I don’t even want to compute the probability in lightning strikes. Is it enough to say it’s, ah, um… really stinking improbable to guess, or even to brute force with the processing power we have today?
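If you want to check the arithmetic yourself, a few lines of Python reproduce the thought experiment (using 365-day years and treating each operation as one count, both simplifications):

```python
# Rough check of the counting-since-the-big-bang thought experiment,
# using 365-day years as a simplification.
SECONDS_PER_YEAR = 86_400 * 365
AGE_OF_UNIVERSE_YEARS = 13_700_000_000        # ~13.7 billion years
COUNTS_PER_SECOND = 75 * 10**12               # 75 teraflops, one count per op

total_counted = SECONDS_PER_YEAR * AGE_OF_UNIVERSE_YEARS * COUNTS_PER_SECOND
aes_keyspace = 2**128                         # possible AES 128-bit keys

print(len(str(total_counted)))                # 32 digits
print(len(str(aes_keyspace)))                 # 39 digits
print(aes_keyspace // total_counted)          # ~10.5 million: how many more
                                              # universe lifetimes of counting
                                              # remain before the keyspace ends
```

In other words, after counting nonstop since the big bang, the supercomputer would need roughly ten million more universe lifetimes just to finish counting the keys, never mind testing them.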
There is always a secret
This is one of the main reasons that managing keys, along with the other security controls, is much more important than all that math stuff. It’s much more likely that an adversary will simply try to get a copy of a key, or otherwise work around the cryptography, than gain access to the Ohio supercomputer. As cryptographer Whitfield Diffie said:
“If breaking into Web sites, stealing identities or subverting critical infrastructure required breaking AES or elliptic-curve cryptosystems, we would not be complaining about cybersecurity.”
The lines between the various data protection controls eventually blur together, differentiated only by their varying degrees of obfuscation and assumptions. We’ve got the math, and it’s so improbable to break that we can think of it as impossible. It’s the other parts of our systems that we have to focus on, like key management. But we have to realize that encryption is built on key management, and key management faces the same problems as our other security controls since it is built on the same exact foundation. There is too much mystery and misperception around that relatively simple concept.
But it’s not just encryption that we hold misperceptions about; other technologies that promise to do good, like tokenization, also fall prey to this problem. All we’re doing is shifting around the secret within our existing framework of controls. With encryption we shift the secret to the key, and subsequently shift it to the access control around the key. If there is nervousness about exposing the key itself, we can supply some type of service to encrypt/decrypt, but then we’re just shifting the secret to the access control on the service. Likewise with a technology like tokenization: we shift the secret to the token server, and subsequently the secret gets shifted to the access control on the tokenization service. The only real difference between those is our perception of them.
I read a post titled “Things that Cannot be Modelled Should not be Modelled” [sic], and to say I’m irritated is an understatement. The post itself is riddled with inconsistencies and leaps of logic, but there is an underlying concept that I struggle with: this notion of “why try?” The idea that we’ve somehow failed, so we should pack up and head home. I cannot understand this mentality. If there is one thing that history teaches over and over and over, it’s that humans are capable of amazing and astonishing feats and progress, many of which appeared to go against the prevailing concepts of logic or nature at the time.
Precisely Predicting versus Playing the Odds
Nobody believes that modeling risk is an attempt to predict an exact future; it’s about probability and laying odds. As Peter L. Bernstein points out in “Against the Gods”, for most of history the idea of laying odds on a roll of the dice was considered ridiculous. The gods controlled our fate, so of course attempting to forecast the throw of dice (to quantify risk) was absurd. I’m sure the first few folks who understood odds profited nicely. The concept of probability itself is relatively young in our history, and attempting to quantify and model risk is younger still. Of course we’re going to make mistakes. And history shows us that people throughout time have considered such mistakes, and acts of progress, an affront to the laws of “nature”. I read that post by Jacek Marczyk in that light.
Fact is, every crazy pursuit by humankind has had critics and for good reason. Just focusing on the field of medicine can yield thousands of insanely beneficial and insanely insane acts of science. Books like “Elephants on Acid” show us that people do a lot of weird things and every once in a while, truly amazing things occur. We have the right to question, it’s our duty to challenge, but it is never an option to say don’t try. The whole point is we don’t give up. We plow forward and once we’ve reached a dead-end, we take a step back and try to move forward in another direction.
But back to the article. The article lists things like “human nature” and “feelings” as “impossible to model”. I’m sure fields like Behavioral Economics and most every part of psychology would argue against that. Just reading through books like “Predictably Irrational” or “How We Know What Isn’t So”, we can see that human nature and even human feelings have been modeled, and benefit has been found in that modeling. How prices get set and how advertising campaigns are shaped have both benefitted from attempts at modeling human behavior and feelings. I’d also like to point out that the road to modeling those things is probably filled with a few setbacks and failures, but that doesn’t make it not worth the effort.
Enough preamble; let me get to my point. Being able to quantify or model risk is not a prize unto itself. The purpose of studying, researching and modeling risk is not simply to reach some pinnacle of holy-riskness. The pursuit of risk analysis exists for one simple reason: to support the decision process. Let’s be honest here, those decisions are being made whether or not formal analysis is performed. The true benefit of risk management is making better, more informed (and hopefully profitable) decisions. Saying that we should not attempt to model or quantify risk is like saying we should not try to make better decisions.
Going back to a flight example: when the Wright brothers made their first flight, did folks say, “that flight didn’t make it from New York to Chicago, so shut this whole flight concept down”? Perhaps they did, but those people would now be labeled naysayers, simpletons, perhaps even idiots.
One of the first things taught to furniture makers is to build a model first. Building models allows them to play out different ideas and see problems before they build. It allows them to try out different decisions before they actually make a decision. Even experienced designers wouldn’t fathom squaring up their first piece of wood without a model to educate themselves on the world they are about to construct. Hundreds of years of furniture makers have learned that the cost of not modeling outweighs the investment to model an idea. Models exist to support decisions; they attempt to answer questions around “what if.”
Without models of weather, we wouldn’t have any indication of storms or severe weather short of looking at the sky. We would not be able to fly planes accurately, nor navigate them, without models. We cannot argue that attempting to model our world to support the decision process is a waste of time. History has taught us that the only true failure is not trying in the first place.
To say that we shouldn’t try something based on some perception of the “laws of nature” or pure semantics is insulting. Now if someone wants to argue that our current set of risk models are broken, then bring that on. Step right up and let’s discuss those findings, let’s talk about better decisions and let’s alter the models. Perhaps we even toss our current models aside and start from scratch, but saying that we shouldn’t model at all is self-defeating and creates nothing but noise distracting from the real work. Not attempting to improve our decision process is not an option, attempting new approaches or identifying alterations is not only an option, it should be expected.
Jack Freund inspired this post with his “Executives are Not Stupid” post. I have climbed out of that geek-trap of thinking that decision makers are idiots. I’ve learned that, to Jack’s point, they generally are not idiots and, on the contrary, are usually more skilled and successful at decision making than the average person. But the science of making decisions is not a solo effort. I want to break down the decision process and point out where different roles may fit in. If someone in a technical role feels that decisions are poor, they should be aware of where they fit into the decision process and what influence they had (or didn’t have) on the decision. Once they realize that there is a process, however informal, they may begin to influence change by inserting themselves at the appropriate step, or even by challenging decision makers on their process and helping them improve. Whether we know it or not, the goal of both business leaders and security wonks is to make good decisions.
Step 1: Establish a Frame
The first step is critical and often skipped, because people aren’t even aware that every decision has a frame. This initial step is to identify and communicate the context of the decision among those involved. Are we talking about DLP because data is leaking like a sieve? Is there a regulatory concern? Or is DLP “necessary” to give someone the illusion of progress? All of those warrant DLP, but depending on the context we may have three completely different decisions.
Here’s an example: I was recently brought into a project to encrypt data in a database (which had me questioning the frame right away). I derailed the project completely by asking what problem they were trying to solve. It turns out the project was initiated because the data was not currently encrypted (seriously). The initial frame made the mistake of treating encryption as a binary option: encrypted or not. By going back and establishing the context, we were able to better define the goal of the project and evaluate many other options that were not limited to encryption. The initial frame limited our options in the decision, and from a technical role I was able to influence the decision process by modifying the frame.
Step 2: Gather Data and Options
This is squarely where risk analysis lies. By discovering the probabilities of events and their related losses, we can drive out a few options with associated costs and benefits. Other aspects of security programs have input in this step, such as metrics, breach reports, vulnerability scans, audit findings, asset valuations, and even expert opinion and intuition. It is the job of security professionals to provide the options they see and the data points that support those options. Not doing so can starve the decision maker of relevant information and produce decisions with even more uncertainty than they already carry. It is the job of the decision maker to seek out expertise and ask for the data points thought to be relevant, to be sure all options are on the table. One interesting thing decision makers can do is start asking for levels of confidence in data points; that little extra question provides some really interesting context for the data being gathered. (Forethought here as well: attempt to prepare for step 4 as options are discussed.)
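To make “probabilities of events and related losses” concrete, here is a deliberately toy Monte Carlo sketch of an annualized loss estimate. The 10% event frequency and the loss range are made-up illustrative numbers, not guidance for any real analysis:

```python
import random

# Toy Monte Carlo sketch of an annualized loss estimate. The 10% event
# frequency and the 50k-500k loss range are made-up illustrative numbers.
TRIALS = 100_000
ANNUAL_EVENT_PROBABILITY = 0.10
LOSS_LOW, LOSS_HIGH = 50_000, 500_000

random.seed(1)  # fixed seed so the sketch is repeatable
total_loss = 0.0
for _ in range(TRIALS):
    # In each simulated year, the event either happens or it doesn't;
    # when it does, draw a loss from the assumed range.
    if random.random() < ANNUAL_EVENT_PROBABILITY:
        total_loss += random.uniform(LOSS_LOW, LOSS_HIGH)

expected_annual_loss = total_loss / TRIALS
print(round(expected_annual_loss))  # roughly 0.10 * 275,000 = 27,500
```

A real model would use calibrated estimates and distributions with more defensible shapes, but even this sketch turns “it could be bad” into a number that can be weighed against the cost of a control.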
Step 3: Decide
I know this should be obvious, but the decision must be made. One thing I haven’t called out yet is all the biases and fallacies we as people are susceptible to. The best way to combat them is to become intimately familiar with them. Understand the difference between the availability bias and confirmation bias. Understand what the sunk-cost fallacy is. Be aware of them so they can be identified in daily activities (and they come up daily), because once you can identify them in reality you can account for them in your decision process.
Step 4: Feedback
Just because the decision was made doesn’t mean the decision process is over. This fourth step is probably the most elusive, and I struggle to find it in security. Once a decision is made, it is extremely beneficial to find out whether it was a good decision or had undesirable outcomes. We may go back and re-evaluate previous systems, but rarely is that done with previous decisions in mind, let alone with feedback to the decision makers. How can we know our DLP installation is meeting our goals? Was it the right decision to go with vendor A over B? Take a moment and think of feedback in recent decisions. Get much? If feedback was received, it was probably all negative, but be sure to take it: not all failures are due to bad luck. Sometimes we stink, and owning that makes for better decisions in the future.
My point here is that decisions are complex, yet decision making is a skill that can be learned and improved over time. There are many moving parts, and each component in the decision process can cause unique problems if glossed over or skipped altogether. Keep in mind that these steps are scalable too. I’ve found myself standing downtown running through these steps in my head to decide where to grab lunch. I’ve also thought through these steps to forecast and plan huge projects. The amount of time and effort in each step should be proportional to the girth of the decision.
Back to Jack’s point: if you are a technical person who thinks some executive decision qualifies them as a card-carrying member of idiots-R-us, take some time and think through what you know of their decision process. Were you able to provide quality data to them? What other data points did they consider? Do you have an opportunity to gather and provide feedback on that decision down the road? Because like it or not, security decision making is a team effort, and executives making routinely poor technical decisions may just need more active involvement and input from the technical staff.