Are we too simplistic in how we think about risk?

Yesterday I had a fascinating meeting where we discussed a number of theoretical concepts, including how we think about risk. Risk, of course, should be the driver in everything we do in information security, and risk management should be the discipline that guides us.

The problem with risk is that it is a very nebulous concept. Humans, and those of the management persuasion in particular, need more detail to make decisions. Consequently, we have methods for quantifying risk, such as the annualized loss expectancy (ALE) formula:

ALE = SLE*ARO

where SLE is the Single Loss Expectancy, or the cost of a single loss event, and ARO is the Annualized Rate of Occurrence, or the expected number of loss events in a year. Together, they give us the ALE: a dollar cost per year for a given risk.
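
To make the arithmetic concrete, here is a small sketch in Python; the figures are invented purely for illustration. A loss event that costs $10,000 to recover from and is expected to happen twice a year carries an ALE of $20,000.

    # Hypothetical figures, for illustration only
    sle = 10_000       # Single Loss Expectancy: cost of one loss event, in dollars
    aro = 2            # Annualized Rate of Occurrence: expected loss events per year
    ale = sle * aro    # Annualized Loss Expectancy
    print(ale)         # 20000 dollars per year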

The problem with thinking about risk solely in terms of the ALE is that it is far too simplistic. In another article I am working on, I develop the concept that any mitigation we implement modifies the item we are securing. In other words, we also need to consider the impact of mitigating a risk item, not just the risk itself.

There are two ways mitigation measures impact us. The first is the cost to implement the mitigation itself. Ideally that cost should be certain, so let us call the cost Cm.

The second way the mitigation impacts us is through its side effects. For example, if you require anti-virus software on all computers, those computers may slow down, impacting productivity. There is only a chance that this will happen, so we also need a probability factor. Let us call the side effect Sm and its probability Ps.

Putting all that together, we get a risk equation that looks like this:

Risk = SLE*ARO - [Cm + Sm*Ps]

This takes into account the cost of actually doing something about the risk. It says nothing, of course, about how we develop the measurements, nor about what is acceptable and what is not. Those items, as they say, are topics for further research.
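
As a minimal sketch of how the pieces fit together, here is the modified formula in Python, continuing the hypothetical numbers from above; the mitigation figures are likewise invented for illustration.

    # Hypothetical figures, for illustration only
    sle = 10_000    # cost of a single loss event, in dollars
    aro = 2         # expected loss events per year
    cm = 5_000      # cost to implement the mitigation
    sm = 8_000      # cost of the mitigation's side effect (e.g. lost productivity)
    ps = 0.5        # probability that the side effect actually occurs

    ale = sle * aro                 # unmitigated loss expectancy: 20000
    risk = ale - (cm + sm * ps)     # expected loss net of mitigation costs: 11000.0
    print(risk)

    # Whether 11000 dollars per year is acceptable is still a separate
    # policy decision; the formula only produces the number.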

Comments

  • Anonymous
    May 09, 2006
    Another complexity to consider is the cost if a "once in a century" event happens in your first year of operation.
    Should your business shut down completely as a result of an unlikely, but expensive, disaster or security breach in its first year?
  • Anonymous
    May 09, 2006
    Personally I think risk management is very, very important, but as soon as you start to calculate you make it too easy on yourself and ultimately lose.

    It is better to do some rule-of-thumb weighting. It is however important to document your decisions.
  • Anonymous
    May 10, 2006
    Hey Jesper,

    I like your modifications to the risk formula, but I still think we are stuck with the problem that a quantitative risk analysis approach is rather difficult given the unreliability and inaccuracy of the supporting data used to fill in the variables.

    It's already difficult to calculate the SLE within most organizations, and the same goes for the ARO. Rarely do we find valid statistics we can apply for ARO, since most of it isn't measured correctly. Measuring the actual cost/value of assets is never any easier. And when you add performance metrics covering the side effects of the safeguards, I fear that the results of the formula move us farther away from a real understanding of the risks.

    I look forward to future research in this area. If you really come across a method to reduce the problems with filling in the variables in even a modified quantitative risk analysis formula, it would go a long way toward weighing risks more efficiently for business. Keep me posted.
  • Anonymous
    May 10, 2006
    Jesper: I think things like the cost of the security improvement over x years should also be taken into consideration, because a simple cost calculation may show that mitigating this risk (the cost to implement) is higher than the expected ALE, yet over two years the same small improvement may turn out to be profitable. Of course, there is always the possibility of a badly assigned ARO or Ps, which BTW are very hard to calculate correctly :|

    Alun: I'd say that the things you mentioned should be considered annually, with the probability factor lowered, because some things happen only once every 10*e^x years (like a sudden earthquake with its epicentre right under your data center). If it happens in the first year, though, you had bad luck - sorry ;-)
  • Anonymous
    May 10, 2006
    Jesper,

    You probably don't remember me. You kindly signed my copy of "Protecting Your Windows Network from Perimeter to Data" at the end of a seminar you presented with Steve Lamb (who is a good friend of mine now!) at IT Forum back in Nov 2005.

    At the time I was a Security consultant with Lloyds TSB, now with Citigroup.

    I must say how much I enjoyed reading your and SR's book. I have read it from cover to cover, and thought its candour was spot on. It's not often a security book has me laughing out loud on a train!

    I must say, as I started reading through the book, many of your attitudes and lines of reasoning are ones I sincerely believe in. In fact at one point I was wondering if I was going to read anything that challenged my views on Microsoft security!

    The further I read into the book, the more suggestions I came across that I hadn't considered. Items such as the futility of renaming administrator accounts (RID 500) and the vital need to work from Policy to Standards to Operating Procedures were very useful.

    From a generic point of view, the book has reminded me that security is very much a 'tailored solution', and that 'best practice' by its very nature weakens our security if everyone applies it without considering exactly which threats we are attempting to mitigate!

    Once again, great book. Please feel free to pass these comments on to Steve Riley, someone else for whom I have a great deal of respect (but no email address!).

    Best Regards,

    Paul
  • Anonymous
    May 10, 2006
    Alun, the risk management formula in itself does not cover the decision making, only the data production that leads to the decision making. In the end, even the modified formula would tell you that you should only accept those risks that meet some cost threshold. The problem is that the formula says nothing about what the cost threshold is. Your risk management philosophy would have to do that; specifically the part about how risk averse you are. If you have that in place already, which may be a big if, then you would have a policy that says "If the expected loss, minus the costs and effect of mitigation, exceeds X then we mitigate." The problem is what X is, and that is something only the risk owner can decide.
  • Anonymous
    May 15, 2006
    The main problem I have with mathematical calculations of risk is that many of the risks have a binary ARO. Take the risk of not patching an Internet-facing server, for instance. You can't reasonably say the risk of server compromise is once every 2 or 3 years, even though it may be 2 or 3 years before the server is compromised. Instead, the server will run uncompromised for a while, and then be trashed over and over again until you do something about it.

    Therefore, the risk "calculation" becomes a matter of patching now before the first compromise. Or after.