SAML Sets Sail

According to Dave Kearns' post (H/T Matt Pollicove & Lance Peterman), it makes little sense to continue developing for SAML; that is, it is headed down the legacy path.

… Craig stood up at the podium and announced to the world: “SAML is dead.”

This was off the chart because, well, SAML (Security Assertion Markup Language) is at the heart of most of Ping Identity’s products. And Ping Identity was our host. Predictably, Ping employee tweets immediately sought to reassure their followers that SAML was alive and well. What they neglected to take into account, though, was context.

Context is important for Identity Services as I’ve said over and over again for at least 10 years (see “Identity and privacy: it’s all about context”). But context is also important for understanding the spoken word.

While Mr. Kearns has been saying context is important for ten years, the rest of educated civilization has known about it since at least Aristotle, and formally since the medieval period, and keeps ignoring it anyway. When people respond emotionally to a claim like "SAML is dead," it's because the claim is having its intended effect. Context-less shock-value remarks are designed to excite. Sound bites become tools of mass-media perception management. Opponents are taken out of context intentionally; when strong emotions kick in, we stop reasoning. There is always someone declaring one thing or another dead that is not. Nietzsche declared God dead, which caused a lot of furor.

Along those lines, Kearns notes the following in his article.

Most of the other analysts agreed with Craig (as did many of the attendees, especially those who were in his audience.) Some pointed out that other, seemingly dead, authentication protocols (such as IBM’s RACF and Top-Secret) were still used by many as were programs written in COBOL.

But far from being an argument against Burton’s pronouncement these are actually supporting evidence for his claim that SAML is dead. Because RACF and COBOL are also “dead,” at least in the sense Craig meant.

Good point, and it pays to remember that technology does not disappear from the earth; no technology is ever really dead. Can you still purchase Windows for Workgroups, a typewriter, or a stone ax, or tan a hide with brains? The question is rhetorical. Pick up a Sears & Roebuck catalog from the late eighteen hundreds and you will find every item listed still available from someone.

So when people say a technology is dead, they really mean it has moved closer to obsolescence. All technologies, whether original, re-invented, rediscovered, or misused out of ignorance (XML for data management), follow the S-curve evolutionary path. This has been generalized from observation across many complex systems.

Finally, it doesn't surprise me that SAML is on the way out; in fact, I am surprised it was used at all. Anything we wish to represent in a computerized database requires that we build a conceptual model, discarding items as we go. Sometimes we start with simple models and add layers of complexity as we go; other times we start with really complex models and add confusion as we go. In both cases conceptual modeling is subjective, in the "eye of the beholder" as the cliché goes. To do this process well, it is essential that we begin with a good definition of terms to remove ambiguity, so that our model is internally consistent and used consistently. Whenever the meaning of terms changes in a way that is not a simple extension, our "model is dead," so to speak, and we are really starting a new conceptual model. This can happen when the process or system we are modeling changes in an observable way, when our understanding of the process changes, or when a large vendor needs to sell a new solution into which it has poured a lot of money and which does not fit the old model. When this happens, industry standards groups are formed, or, even better, the government is co-opted into making it law so it can resist innovation and all efforts to improve.

Once the conceptual model is built, we need to capture as much meaning as possible in the computer and structure the data so we can manipulate it, with constraints acting as metadata. Typically we do this with a database. Once the data is stored, we will periodically need to exchange it, which means we only need to know what we are passing (the data) and what it means (the conceptual model). It does not follow that one must use XML to accomplish the foregoing, and since XML is hierarchical we have to parse a lot of paths to get to the data, which is not particularly easy for large specifications. Therefore, it comes as no surprise that SAML is on the way out.
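To make the path-parsing point concrete, here is a minimal sketch. The XML below is a made-up, SAML-flavored fragment (not a real assertion), used only to contrast digging through a nested, namespace-qualified tree with the flat "agree on what we pass and what it means" alternative.

```python
# A toy comparison: extracting one value from nested, namespace-qualified XML
# versus reading the same fact from a flat structure. The document and the
# namespace URI are invented for illustration.
import xml.etree.ElementTree as ET

doc = """
<Response xmlns:saml="urn:example:assertion">
  <saml:Assertion>
    <saml:AttributeStatement>
      <saml:Attribute Name="email">
        <saml:AttributeValue>alice@example.com</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</Response>
"""

ns = {"saml": "urn:example:assertion"}
root = ET.fromstring(doc)

# Hierarchical: walk a long path just to reach one value.
value = root.find(
    "saml:Assertion/saml:AttributeStatement/"
    "saml:Attribute[@Name='email']/saml:AttributeValue",
    ns,
)
print(value.text)  # alice@example.com

# Flat: if both sides already share the conceptual model, the same fact
# is simply a key-value pair.
attributes = {"email": "alice@example.com"}
print(attributes["email"])
```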

IdM Business Case Financial Forecasting

I think business cases are important, but it is not necessary or even beneficial to be overly precise and granular in the quantitative sections, except as a form of organizational signaling (look how thorough I am). If you compared the pro formas I did as a 28-year-old with the ones I do today, you would see a lot more rigor back then. That rigor was a complete waste of time.

Last year I did a strategic engagement for a large multinational. The whole experience was, shall we say, less than rewarding. When things go wrong you must ultimately blame yourself, for one simple reason: if you are responsible for all the good that happens to you, then you are responsible for all the bad.

Part of this engagement was to develop a business case, which I never quite got to, due to all the time wasted in hand-wringing over PowerPoint slides (which seemed to be the required mode of management communication within the company) and waiting for feedback. Regardless, I was handed a template for a business case (which looked like it had been developed for manufacturing, not IT), and it read as if it had been written by a finance nerd in an MBA program. I did find out who wrote it for them (a large multinational consulting firm), and the firm was apparently being paid by the bit. It had an extraordinary amount of granularity, and completing it would require a team to collect all the inputs. In short, it was a complete waste of time.

Why would I say that? Because bottom-up detail does not equal forecast accuracy. When you develop a financial model based solely on internal estimates, it is going to be way off. The only way to get any degree of accuracy would be to have your CIO call three other CIOs at companies whose size and complexity are a rough match and ask them how long it took to deploy and how much it cost. That beats the vast majority of internal estimates. The hardest hurdle to get over is IT's confidence in its ability to do a better forecast. Will they? Maybe. Statistics indicate they will not.

If the outside option is not available, then use your company's own experience with IT projects of equal complexity. You will save a lot of time and effort, and perhaps improve your accuracy.
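A minimal sketch of what that outside-view estimate looks like in practice; the numbers below are invented purely for illustration.

```python
# Reference-class style estimate: use the actuals from a few comparable
# deployments (invented figures) rather than summing dozens of internal
# bottom-up line items.
from statistics import median

# (months to deploy, total cost in $M) reported by comparable organizations
comparables = [
    (14, 2.1),  # peer company A
    (18, 3.0),  # peer company B
    (11, 1.6),  # peer company C
    (16, 2.4),  # our own last project of similar complexity
]

durations = [months for months, _ in comparables]
costs = [cost for _, cost in comparables]

print(f"Duration estimate: {median(durations)} months "
      f"(range {min(durations)}-{max(durations)})")
print(f"Cost estimate: ${median(costs):.1f}M "
      f"(range ${min(costs):.1f}M-${max(costs):.1f}M)")
```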

Blind to Collapse

There is a very good article on the fall of the Roman Empire by Ugo Bardi (h/t Vox Dei). He applies system dynamics to explain the collapse.

Let’s start from the beginning…with the people who were contemporary to the collapse, the Romans themselves. Did they understand what was happening to them? This is a very important point: if a society, intended as its government, can understand that collapse is coming, can they do something to avoid it? It is relevant to our own situation, today.

In business, people use the myth of the boiled frog to explain our inability to see and adapt to the deleterious effects of change. And while there are those who unflinchingly pursue the truth, they may only be recognized as such in the post-collapse analysis. Decline is inevitable: the venal and the power-hungry will eventually seize control ("it's for your own good," or "for the children"), the virtue of a culture will be replaced by hedonistic calculus, technological sophistication reaches its zenith while education reaches its nadir, and everyone tries to saw off a limb while the tree falls. "I got mine." Yes, you did.

Regardless of how much knowledge we accumulate, no matter how many collapsed civilizations, technological failures, or business cases we study, there will always be a new generation who, as Russell Kirk described, "are like flies of the summer," caring little for what went before them and nothing for what comes after, who are curious only, and I mean that in the medieval sense. The more complex a system becomes, the higher the cost to maintain the status quo. Eventually complexity reaches the point where the problems become insurmountable, and from what I have seen, the more centralized the decision-making authority, the faster the demise.

Kubla Khan is Always With Us

Samuel Taylor Coleridge penned his famous poem Kubla Khan, or a Vision in a Dream, under the influence of a few grains of opium taken for dysentery. One can only wonder what the fifth Great Khan himself was under when he ordered the building of 4,000 ships in a year for the second invasion of Japan. Perhaps he was only drunk on power. Nevertheless, it too was a catastrophic failure, in which nearly everyone perished in a typhoon.

The Japanese myth had it that it was a magical wind that did in Kubla Khan's fleet. Modern archaeology tells a slightly different story. The order from Genghis Khan's grandson led to shoddy craftsmanship and the use of flat-bottomed river vessels to meet the artificial deadline. When placed under the duress of a typhoon, a statistical outlier, the vessels lacked the design, and therefore the resilience, to withstand the storm.

Again and again people plan based on best-case scenarios, ignoring the outliers whose impact is catastrophic. Completion dates are imposed based on the perception of what timeline is acceptable to the boss, or on blind bottom-up, task-by-task time estimates. This carries on today, whether it is ambitious government, ambitious business, or ambitious IAM. We hear repeated stories of hard-nosed leaders saying, "I told them I wanted it yesterday and they made it happen." While these stories appear regularly in the press, the stories we don't hear (unless the magnitude is large) are the numerous small failures where "I wanted it yesterday" is a loser. I assure you these outnumber the success stories, but no one is out there bragging about that: "Hey everyone, boy did we lose money this week," or "I would like to congratulate the team for missing every deadline I imposed on them."

It is no different than the gambler bragging about his winnings while staying strangely silent on his losses. As Nassim Taleb has said, "We don't learn that we don't learn."

Kindle Fire Will Not Buy Us Much

There is a post over at the Volokh Conspiracy in which the author, Stewart Baker, believes that Kindle Fire users will ultimately be more secure because Amazon is acting as a big HTTP proxy, and that running everything through Amazon's cloud will reduce the risk of endpoint compromise. Instead of relying on your own ability to protect your device, Amazon will do it for you, the assumption being that they are more knowledgeable than you in information-security matters. I am not nearly as excited, for the following reasons:

1.  Without physical security there is no security. If one has physical access to a device, it is quite possible to subvert it for nefarious purposes. Amazon cannot control that, and if it is done well they will not be able to detect it. Ask Apple about all those jailbroken iPhones. Additionally, many exploits rely on social engineering. All the hardware in the world cannot stop you from making an error.

2.  A big filtering proxy in the cloud is just another filtering proxy. At some point its pattern-recognition systems will produce false positives often enough that users will work overtime to get around them; that, or Amazon will have to loosen up the filtering. One commenter correctly pointed out that AOL tried this before.

3.  The Amazon Fire will have its own browser, which will have its own flaws and security problems. That is inescapable. Roll your own browser and you have to do your own code audits too; every change introduces new risks and possible regression errors.

4.  Risk will be more non-linear. We may have fewer security problems, but the impact of any one of them will be far more severe. A heavily defended system is always complex, and when a complex system goes down the consequences are frequently catastrophic.

5.  Security is expensive. The more complex the system, the higher the cost to defend it. Security is frequently one of the first areas relaxed when costs creep up. This is normally followed by a security failure of some kind, the firing of the security personnel, an increase in security spending on technological solutions, and a return to the beginning. Think of this as the security-personnel scapegoating life cycle. Amazon will not be immune.

In the end, the entire Amazon Kindle Fire ecosystem is just another system that must be defended, with the same kinds of problems as any other system. I do not share the author's optimism.

IT Architectural Principles?

[Image: dictionary definition of "principle"]

If you survey the information on the web concerning IT architectural principles, you mostly find descriptions like this. This is pretty consistent with what others have published, whether IBM, Gartner, Forrester, et al.

After some explanations, they go on to list a set of rules that should apply to the deployment of IT. The perspective here is really policy-based. As policy, they are simply constraints on what is permissible and/or a listing of "best practices." I believe this approach can be subsumed by a broader category, one with a results focus. IT's sole purpose, like that of any tool, is to act as a productivity multiplier, to make the organization more efficient. The role of the architect is to make decisions which, once made, are not so easily reversed. This semi-permanent aspect of decision making is why architects should be experienced practitioners who are well versed in computer science fundamentals.

Drawing on the work of Darrell Mann and others, IT Architectural Principles with a results focus can be split into two categories, analysis and design; the first no one enjoys doing, the second everyone does. See the diagram below.

[Diagram: architectural principles]

In the category of analysis, the principles – as defined by The Open Group – become just another series of constraints that influence our design (sometimes to the detriment of the organization when a broader context is not considered). With the exception of the last item, the listing is straightforward in meaning. What is meant by "sticking points" are those areas sometimes called engineering tradeoffs.

Regarding the design principles, these are derived from millennia of trial and error: modularity allows the architect to encapsulate complexity and increase his solution choices, flexibility allows us to reverse decisions and adapt to change, and resilience allows us to withstand the shock of disastrous events.
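To put modularity and flexibility in concrete terms, here is a minimal, hypothetical sketch; the interface and class names are invented for illustration and are not taken from any particular product.

```python
# The "which identity store?" decision is encapsulated behind an interface,
# so the choice stays reversible instead of leaking into every caller.
from abc import ABC, abstractmethod


class IdentityStore(ABC):
    """The stable seam: callers depend on this, not on a vendor product."""

    @abstractmethod
    def lookup(self, user_id: str) -> dict:
        ...


class LdapStore(IdentityStore):
    def lookup(self, user_id: str) -> dict:
        # Real code would query an LDAP directory here.
        return {"id": user_id, "source": "ldap"}


class DatabaseStore(IdentityStore):
    def lookup(self, user_id: str) -> dict:
        # Real code would run a SQL query here.
        return {"id": user_id, "source": "database"}


def provision_account(store: IdentityStore, user_id: str) -> None:
    # This function never changes when the store implementation is swapped.
    record = store.lookup(user_id)
    print(f"provisioning {record['id']} from {record['source']}")


provision_account(LdapStore(), "jdoe")      # today's decision
provision_account(DatabaseStore(), "jdoe")  # tomorrow's reversal
```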

I believe that following these principles, versus thinking only about organizational rules, policies, and constraints, permits us to produce more innovative designs, increasing efficiency in the organization and fulfilling the proper role of architecture.

XACML? No Thanks


That there are no new problems seems widely understood (save by the child and the naïf), but people rarely bother to understand the historical solutions to those problems; that is to say, we focus almost exclusively on the facts of the problem without ever bothering to look at the principles or rules that may already be understood. This kind of reflective thinking, along with analysis of principles derived from the experience of our predecessors, whether extant or having suffered debitum naturae, exacts a large cognitive cost. "Math is hard," the philosopher Barbie once observed, as is all real analysis.

What we frequently do, because it exacts a low cognitive cost, is simply allow things to move in the direction dictated by the promoter with the large megaphone, prattle on mindlessly like a child, ignore what has gone before, ignore what theory there is, and prefer the clustering of like-minded people, even if this is nothing more than a coterie of idiot enthusiasts. It is easier to sit on the bandwagon collecting money with all the other simpletons than to go against the flow and think for yourself.

Nothing embodies this more than the widespread use of XML for things for which it is poorly suited, especially data management. In its early stages there were vigorous arguments against adopting it, but logic and reason are no match for fads backed by large corporations motivated by "innovation" and quarterly results.

In proposing to use XML as the common "language" of security policy, the authors of the specification write the following:

“XML is a natural choice as the basis for the common security-policy language, due to the ease with which its syntax and semantics can be extended to accommodate the unique requirements of this application, and the widespread support that it enjoys from all the main platform and tool vendors.”

This is specious reasoning, if it can be called reasoning at all. Can anyone show me a text-based format that can't be extended to accommodate the requirements of an application? In the second half of that sentence they note that XML has widespread "support." Socialism had widespread support among the intelligentsia, but it doesn't work well either. To exchange data we only need to agree on what to pass and what it means. All real meaning exists in the hemispheres of the brain. Since logic ignores context, the meaning must be documented so we are not left to speculate. If that view, that concept, is missing, we are stuck with speculation. Anyone who has tried reading uncommented code, or has peered into a database without knowing the conceptual model, knows this well. Nearly all the early claims of XML's benefits (especially about meaning and tags) have been abandoned, and we are left with these two: everybody does it, and I can make it do anything.

A while back there was a question posted in a LinkedIn group titled "Is Role Based Access Control a dead end and Rule Based Access the future?" Inevitably, several people said the answer to the problem is XACML. I don't think so. What drives the problems with role design versus using rules are really fundamental philosophical questions of categorization and classification (distinctly different concepts) and of how we manage complexity. To say the solution is adopting yet another complex XML standard is laughable. It really shows that one does not understand the fundamental nature of the problem. Maybe XML is the way to go, but I doubt there was much reflective thinking before they started writing. My best guess is that XACML will be as widely adopted as SPML, and that it will most likely spawn efforts like this for the same reasons.
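For what it is worth, here is a minimal, hypothetical sketch of the point: a rule-based decision is just attributes plus predicates, and nothing about it requires a heavyweight XML vocabulary. The rules and attribute names below are invented for illustration; the hard part remains agreeing on the categories, not the serialization.

```python
# A toy rule-based access check: a policy is an ordered list of rules, and a
# rule is a predicate over attributes of the subject, resource, and action.
RULES = [
    # (description, predicate, effect)
    ("admins may do anything",
     lambda subj, res, act: "admin" in subj["roles"], "permit"),
    ("owners may read their own records",
     lambda subj, res, act: act == "read" and res["owner"] == subj["id"], "permit"),
]


def decide(subject: dict, resource: dict, action: str) -> str:
    """Return the effect of the first matching rule; deny otherwise."""
    for description, predicate, effect in RULES:
        if predicate(subject, resource, action):
            return effect
    return "deny"


alice = {"id": "alice", "roles": ["employee"]}
record = {"owner": "alice", "type": "payroll"}

print(decide(alice, record, "read"))    # permit (owner reading her own record)
print(decide(alice, record, "delete"))  # deny (no matching rule)
```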

NetWeaver Identity Management 7.1 Implementation Challenges

Challenge 1:  Self Service is Not Intuitive for Unsophisticated Users
Companies deploying NetWeaver Identity Management will find that the self-service interface requires training for the least technical employees. Clicking a self-service task to request a privilege or business role brings up a standard WebDynPro interface showing two search boxes: the one on the left is for searching for what they want (Available), and the one on the right shows what has already been assigned. Experience has shown that this interface can confuse unsophisticated users. Companies will want to make judicious use of access controls to limit what choices are presented to the self-service client. This requires that logic be established in advance, based upon some set of which the user is a member. Additionally, companies will want to train employees in advance of deployment to reduce help-desk calls.

Challenge 2:  Fragmented Documentation
The documentation for accomplishing end-to-end workflows is scattered across many different documents, and there are few scenario-based, use-case "how-to" documents. Companies deploying NetWeaver IdM 7.1 will want to allow sufficient time for their deployment team to work with the product and gain a full understanding before undertaking a deployment. Alternatively, companies can bring in outside experts to assist and to train personnel.

Challenge 3:  Limitations in the Staging Environment
NetWeaver IdM 7.1 uses an XML export file to move configuration from Development to Test and from Test to Production. The file is created within the Identity Center by selecting export, using a built-in utility. Many settings are not carried over between environments; for example, repositories, event agents, and provisioning/deprovisioning tasks on privileges must be set up manually. There are also some bugs; for example, complex linking between tasks is sometimes broken during import. These limitations can be mitigated with manual adjustments, but the process is lengthy.

Challenge 4:   Job Customization Frequently Requires Custom JavaScript
Under NetWeaver IdM 7.1, the imported "SAP Provisioning Framework" has greatly simplified system deployment; however, some simple functions, for example e-mail notifications, still must be implemented entirely in JavaScript. The same applies to any non-trivial data modification. This slows down deployment. The alternatives are to develop custom Java templates or to wait for the product to mature.

Challenge 5:  Few Useful Reports Available in Default Installation

Most of the default reports lack the simplicity of easily showing standard audit information like "who did what to whom and when." Although extensive audit information is stored in the database, it is not always easy to extract without extensive SQL queries. The documentation does not clearly explain the complex relationships between the data in the tables. There are no shortcuts available; careful analysis of the underlying tables and proper query writing must be done.

NB: Since I am on a project and can't go to TechEd, watch Matt Pollicove's blog for updates on whether these challenges are being addressed.