Rackspace & Cloud Hosting

While researching hosting providers, the search results returned an article, "8 Reasons I Hate Rackspace with the Fiery Passion of a Thousand Suns":

This article confirmed my suspicions about Rackspace based on other things I had heard.  The funny part is that Google Ad Services dropped a banner ad for Rackspace onto an article that is a rant against the company. I wonder how many click-through conversions they get?

Measuring Risk Objectively?

In order to manage the complexity of life and its accompanying uncertainties, we build models.  Models by their very nature are reductions; that is, we throw out a certain amount of information.  A historian writing a history of Frankfurt, Germany, is not going to concern himself with spots on the floor of the Rathaus in 1888 (unless he is a post-modern reductionist).

Risk is itself an abstraction; it is certainly not real.  Being the victim of a specific risk, however, is real enough.  A more interesting question is whether risk is objective or subjective.  How we measure matters.  It may impress to show on a slide that the mail gateway anti-virus blocked ten million attempts in the last year, but it matters little when the consequences of a single failure can end the business.
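The gap between an impressive slide metric and the risk that actually matters can be made concrete with a back-of-the-envelope calculation. A minimal sketch (every number here is an invented assumption, not data from any real gateway):

```python
# Illustrative only: made-up numbers showing why a count of blocked
# attempts says little about the failure mode that can end a business.

blocked_attempts = 10_000_000      # the number the slide brags about
loss_per_blocked_attempt = 50      # assumed avoided cleanup cost per block, USD

p_catastrophic_breach = 0.01       # assumed annual chance of the one that gets through
loss_catastrophic = 50_000_000     # assumed loss that ends the business, USD

avoided_loss = blocked_attempts * loss_per_blocked_attempt
expected_tail_loss = p_catastrophic_breach * loss_catastrophic

print(f"Avoided routine loss: ${avoided_loss:,}")
print(f"Expected tail loss:   ${expected_tail_loss:,.0f}")

# Even this expected-value framing understates the problem: a 1% chance
# of a $50M loss is not interchangeable with a certain $500K cost when
# a single occurrence is fatal to the firm.
```

The point of the sketch is not the specific numbers but the shape of the comparison: averaging over frequent, cheap events tells you nothing about a rare event from which there is no recovery.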

The U.S. legal scholar Cass Sunstein, who coined the term "libertarian paternalism," has commented on how small risks can become distorted in the mind of the public and amplified (normally via mass media) to the point that they influence public policy.  He uses the terms "availability cascade" (from the availability bias) and "probability neglect" to describe the basis for the entire process. The exact same thing happens in any organization where one bad experience leads to ridiculous changes in policy.  In the U.S., think of Love Canal or Times Beach.

So when we model a certain risk, the modeling is often driven by emotion or prejudice, and key elements are included or excluded accordingly.  It may take years to identify the errors.  I could be wrong, but I do not think that risk can be measured objectively even with panels of experts, since they are subject to the same problems as the lumpenproletariat they feel superior to: bias, group-think, emotional amplification, poor statistical reasoning, priors, etc. Because of this I agree with Paul Slovic: risk is subjective.

SAML Sets Sail

According to Dave Kearns's post (H/T Matt Pollicove & Lance Peterman), it makes little sense to continue developing for SAML; that is, it is headed down the legacy path.

… Craig stood up at the podium and announced to the world: “SAML is dead.”

This was off the chart because, well, SAML (Security Assertion Markup Language) is at the heart of most of Ping Identity’s products. And Ping Identity was our host. Predictably, Ping employee tweets immediately sought to reassure their followers that SAML was alive and well. What they neglected to take into account, though, was context.

Context is important for Identity Services as I’ve said over and over again for at least 10 years (see “Identity and privacy: it’s all about context”). But context is also important for understanding the spoken word.

While Mr. Kearns has been saying context is important for ten years, the rest of educated civilization has known about it since at least Aristotle, and formally since the medieval period, and has ignored it all the same.  When people respond emotionally to a claim like "SAML is dead," it is because the claim is having its intended effect.  Context-less shock-value remarks are designed to excite.  Sound bites become tools of mass-media perception management.  Opponents are taken out of context intentionally; when strong emotions kick in, we stop reasoning.  There is always someone declaring one thing or another dead that is not.  Nietzsche declared God dead, which caused a lot of furor.

Along those lines, Kearns notes the following in his article.

Most of the other analysts agreed with Craig (as did many of the attendees, especially those who were in his audience.) Some pointed out that other, seemingly dead, authentication protocols (such as IBM’s RACF and Top-Secret) were still used by many as were programs written in COBOL.

But far from being an argument against Burton’s pronouncement these are actually supporting evidence for his claim that SAML is dead. Because RACF and COBOL are also “dead,” at least in the sense Craig meant.

Good point, and it pays to remember that technology does not disappear from the earth; no technology is ever really dead.  Can you still purchase Windows for Workgroups, a typewriter, or a stone ax, or tan a hide with brains?  The question is rhetorical.  Pick up a Sears & Roebuck catalog from the late eighteen hundreds and you will find every item listed still available from someone.

So when people say a technology is dead, they really mean it has moved closer to obsolescence.  All technologies, whether original, re-invented, rediscovered, or misused out of ignorance (XML for data management), follow the S-curve evolutionary path.  This has been generalized from observation across many complex systems.
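The S-curve in question is conventionally modeled as a logistic function: slow uptake, rapid growth, then saturation. A minimal sketch with made-up parameters:

```python
import math

def logistic(t, ceiling=1.0, growth_rate=1.0, midpoint=0.0):
    """Logistic S-curve of technology adoption over time t.

    ceiling     -- maximum adoption level ever reached
    growth_rate -- steepness of the middle of the curve
    midpoint    -- time of fastest growth (the inflection point)
    """
    return ceiling / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# A technology declared "dead" is typically one past its inflection
# point: growth has flattened, but adoption never drops to zero.
for t in (-4, 0, 4, 8):
    print(f"t={t:+d}  adoption={logistic(t):.3f}")
```

Note that the curve flattens but never returns to zero, which matches the observation above: obsolete technologies linger indefinitely at low adoption rather than vanishing.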

Finally, it doesn't surprise me that SAML is on the way out; in fact, I am just surprised it was used at all.  Anything we wish to represent in a computerized database requires that we build a conceptual model, discarding items as we go.  Sometimes we start with simple models and add layers of complexity as we go; other times we start with really complex models and add confusion as we go.  In both cases conceptual modeling is subjective; it is in the "eye of the beholder," as the cliché goes.  To do this process well, it is essential that we begin with a good definition of terms to remove ambiguity, so that our model is internally consistent and used consistently.  Whenever the meaning of terms changes in a way that is not a simple extension, our "model is dead," so to speak, and we are really starting a new conceptual model.  This can happen when the process or system we are modeling changes in an observable way, when our understanding of the process changes, or when a large vendor needs to sell a new solution into which it poured a lot of money and which does not fit into the old model.  When this happens, industry standards groups are formed, or, even better, the government is co-opted into making it law so it can resist innovation and all efforts to improve.

Once the conceptual model is built, we need to capture as much meaning as possible in the computer and structure that data so we can manipulate it, with constraints acting as meta-data.  Typically we do this with a database.  Once the data is stored, we will periodically need to exchange it, which means we only need to know what it is we are passing (the data) and what it means (the conceptual model).  It does not follow that one must use XML to accomplish the foregoing, and since XML is hierarchical we have to parse a lot of paths to get to the data, which is not particularly easy for large specifications.  Therefore, it comes as no surprise that SAML is on the way out.
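The "parse a lot of paths" complaint can be illustrated with a toy fragment. This is a drastically simplified, invented structure, not a real SAML assertion (real ones add namespaces, signatures, and several more levels of nesting), but it shows the hierarchical walk needed to reach even one data item:

```python
import xml.etree.ElementTree as ET

# Invented, simplified assertion for illustration only.
doc = """
<Assertion>
  <AttributeStatement>
    <Attribute Name="email">
      <AttributeValue>alice@example.com</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
"""

root = ET.fromstring(doc)

# Extracting a single value means navigating a path through the tree;
# in a full specification these paths multiply quickly.
value = root.find(
    "./AttributeStatement/Attribute[@Name='email']/AttributeValue"
).text
print(value)  # alice@example.com
```

In a flat, relational representation the same datum would be a single column lookup; the path traversal is overhead imposed purely by the hierarchical encoding.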

IdM Business Case Financial Forecasting

I think business cases are important, but it is not necessary or even beneficial to be overly precise and granular in the quantitative sections, except as a form of organizational signaling (look how thorough I am).  If you compared the pro formas I did as a 28-year-old with the ones I do today, you would have seen a lot more rigor back then.  That rigor was a complete waste of time.

Last year I did a strategic engagement for a large multinational.  The whole experience was, shall we say, less than rewarding.  When things go wrong you must ultimately blame yourself, for one simple reason: if you are responsible for all the good that happens to you, then you are responsible for all the bad.

Part of this engagement was to develop a business case, which I never quite got to, due to all the time wasted in hand-wringing over PowerPoint slides (which seemed to be the required mode of management communication within the company) and waiting for feedback.  Regardless, I was handed a template for a business case (which looked like it had been developed for manufacturing, not IT), and it read as if it had been written by a finance nerd in an MBA program.  I did find out who wrote it for them (a large multinational consulting firm), and the firm was apparently being paid by the bit.  It had an extraordinary amount of granularity, and completing it would require a team to collect all the inputs.  In short, it was a complete waste of time.

Why would I say that?  Because bottom-up detail does not equal forecast accuracy.  When you develop a financial model based solely on internal estimates, it is going to be way off.  The only way to get any degree of accuracy would be to have your CIO call three other CIOs at companies whose size and complexity are a rough match and ask them how long it took them to deploy and how much it cost.  That beats the vast majority of internal estimates.  The hardest hurdle to get over is IT's confidence in its ability to do a better forecast.  Will they?  Maybe.  Statistics indicate they will not.
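The call-three-CIOs approach is essentially reference-class forecasting: anchor on the distribution of comparable past outcomes rather than on an internal bottom-up estimate. A minimal sketch with invented figures:

```python
from statistics import mean

# Invented figures: what three comparable companies actually spent
# on similar IdM deployments (cost in $K, duration in months).
reference_class = [
    {"cost": 1800, "months": 18},
    {"cost": 2400, "months": 26},
    {"cost": 2100, "months": 22},
]

# A typical optimistic inside-view estimate, also invented.
internal_estimate = {"cost": 900, "months": 9}

outside_cost = mean(p["cost"] for p in reference_class)
outside_months = mean(p["months"] for p in reference_class)

print(f"Outside view: ${outside_cost:,.0f}K over {outside_months:.0f} months")
print(f"Inside view:  ${internal_estimate['cost']:,}K "
      f"over {internal_estimate['months']} months")

# The gap between the two views is the planning-fallacy premium you
# avoid by anchoring on what actually happened to comparable firms.
```

The method is crude by design: a simple average of a handful of real outcomes carries more information than pages of granular internal line items, because the reference class already prices in all the surprises the internal model omits.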

If the outside option is not available, use your company's own experience with IT projects of equal complexity.  You will save a lot of time and effort and perhaps improve your accuracy.