Measuring Risk Objectively?

In order to manage the complexity of life and the accompanying uncertainties, we build models. Models are by their very nature reductions; that is, we throw out a certain amount of information. A historian writing a history of Frankfurt, Germany is not going to concern himself with spots on the floor of the Rathaus in 1888 (unless he is a post-modern reductionist).

Risk is itself an abstraction; it is certainly not real. Being the victim of a specific risk, however, is real enough. A more interesting question is whether risk is objective or subjective. How we measure matters. It may impress to show on a slide that the mail gateway anti-virus blocked ten million attempts in the last year, but that number matters little when the consequences of a single failure can end the business.
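The slide-metric problem can be made concrete with a rough expected-loss sketch. All the numbers below are hypothetical, chosen only to show how a rare catastrophic miss can dwarf the impressive-sounding block count:

```python
# Hypothetical figures only: the point is the comparison, not the values.

blocked_per_year = 10_000_000      # attempts the gateway stopped (the slide number)
p_miss = 1e-6                      # assumed chance any single attempt slips through
avg_loss_if_missed = 5_000         # assumed typical loss per routine infection ($)

# Expected annual loss from routine misses
routine_loss = blocked_per_year * p_miss * avg_loss_if_missed

p_catastrophic = 0.01              # assumed annual chance of one business-ending miss
catastrophic_loss = 50_000_000     # assumed loss that ends the business ($)
tail_loss = p_catastrophic * catastrophic_loss

print(routine_loss)  # 50000.0
print(tail_loss)     # 500000.0  -- the single tail event dominates
```

Under these (made-up) assumptions, the tail risk carries ten times the expected loss of everything the ten-million block count represents, which is why the headline metric says so little.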

The U.S. legal scholar Cass Sunstein, who coined the term “libertarian paternalism,” has commented on how small risks can become distorted in the public mind and amplified (normally via mass media) to the point that they influence public policy. He uses the terms “availability cascade” (from the availability bias) and “probability neglect” to describe the basis of the process. The same thing happens in any organization where one bad experience leads to ridiculous changes in policy. In the US, think of Love Canal or Times Beach.

So when we model a certain risk, the model is often driven by emotion or prejudice, and key elements are included or excluded accordingly. It may take years to identify the errors. I could be wrong, but I do not think risk can be measured objectively, even with panels of experts, since they are subject to the same problems as the lumpenproletariat they feel superior to: bias, group-think, emotional amplification, poor statistical reasoning, faulty priors, and so on. Because of this, I agree with Paul Slovic: risk is subjective.

Data, Knowledge and Risk Part I

One of the many ways a centralized IAM initiative lowers your risk is by forcing many different departments to reduce undocumented courses of action to machine-interpretable decisions. For example, prior to the IAM system, an action may have been to take a phone call and act on it, or perhaps “go ask x.” Knowledge that resides outside the system, but that the system requires, must be logically organized and input into the system.

Sometimes you find a situation where the logic used cannot easily be reduced to a truth test because the information required is missing, unknown or inaccurate. When this happens, decisions need to be made about whether it is worth fixing or filling in the information. This is where things get interesting, where IT needs meet political realities and you hear customers say, “Let’s let that one go” or “The CIO will never take that back to the business.”

Returning to risk reduction: by automating courses of action or standard operating procedures and reducing them to machine-understandable logic, we gain the full set of data manipulation tools that allows us to properly track, control and secure the processes based on well-established principles, thereby lowering our risks. Next I will look at missing information in a little more depth.
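A minimal sketch of what “reducing a course of action to a truth test” might look like. All names and attributes here are hypothetical; the interesting part is that when a required attribute is missing, the rule cannot evaluate at all, and that is precisely the “fix or fill in the information” decision point described above:

```python
# Hypothetical access rule: the old process was "go ask x"; the IAM
# version is an explicit truth test over user attributes.

def grant_access(user: dict, resource: str) -> str:
    # Attributes the rule needs; if any are absent, the logic cannot
    # be reduced to a truth test and a human must decide.
    required = ("department", "employment_status")
    missing = [a for a in required if user.get(a) is None]
    if missing:
        # Don't guess: surface the data gap instead of silently denying.
        return "refer: missing " + ", ".join(missing)

    # The truth test itself, made explicit and auditable.
    if (user["employment_status"] == "active"
            and user["department"] == "finance"
            and resource == "ledger"):
        return "grant"
    return "deny"

print(grant_access({"department": "finance", "employment_status": "active"}, "ledger"))
# grant
print(grant_access({"department": "finance"}, "ledger"))
# refer: missing employment_status
```

The design choice worth noting is the three-valued outcome (grant / deny / refer): collapsing “we don’t know” into “deny” hides exactly the missing-information problem the post is about.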