SP2 patch peeves

One would think that SP2 Patch 1 for Identity Center and SP2 Patches 1 & 2 for Virtual Directory Server (released in early July) would be cumulative, given that the patch is about the same size as the original SP2 IdC install; sadly, this is not the case. Today, if you were planning to set up an up-to-date IdC installation, you would have to follow these steps:

  1. Install SP2 (Identity center and Virtual Directory Server)
  2. Install SP2 Patch 1 – For your database (IDM70IC02P_1-20003100)
  3. Install SP2 Patch 2 – For Identity Center (IDM70IC02P_2-20003100)
  4. Install SP2 Patch 1 – For Virtual Directory Server (IDM70VDS01P_1-20003260)
  5. Install SP2 Patch 2 – For Virtual Directory Server (IDM70VDS02P_1-20003260)

Setting up the installation using the above-mentioned steps leaves no room for mistakes. If you deviate from this sequence, you may be left with a system that looks fine on the outside but does not function as well on the inside.

I really wish they would release cumulatively patched versions of the product that offered the option either to upgrade an existing installation or to set up a fresh one. That would drastically simplify the install process, and I think everyone would prefer it to a series of interdependent, bloated patches that cannot be used without the original SP2 install.

Connecting HCM to Virtual Directory Server

While working with HCM and Virtual Directory Server with the latest patches this past week, I ran into a fairly annoying problem.  To receive data from SAP HCM, you run a simple wizard, select the template, and then save all your choices as an XML file.  Loading this file defines your server.  What I noticed is that once the server is in place, making any change through the GUI, such as changing a username or password, or adding a feature and then removing it, breaks the configuration every time.  Either I am doing something incorrectly or there is a big bug in the GUI.  Making the same changes directly in the XML file with Notepad does not break the configuration.  When I get more time, I will dig a little deeper.

Identity Management & SaaS Redux

Matt previously made some good points on his personal blog, and now I have stumbled upon Dare Obasanjo, who makes some fundamental development and economic points about SaaS.  It’s well worth reading the whole post.  Take the risks he highlights and couple them with IAM: unless you are dealing with a well-established company with access to capital that loves spending money looking for security bugs, you have a risk that will override the cost savings (if they even exist at this stage) of identity management.  I could be completely wrong; however, security risks rarely seem to stop anyone from moving ahead with “saving money.”

Hat tip to James Robertson at the Smalltalk Tidbits, Industry Rants blog.

The Crux of the Issue

I read an interesting post today by Jeff Boren, who has been very active in the recent Metadirectory debates (a good summary can be found here) amongst Matt Flynn, Ian Yip, James McGovern, Dave Kearns, Jackson Shaw, Clayton Donley and myself.

Jeff points out that:

And here is the real crux of the matter: most enterprises don’t really want an identity solution. What they want is a “spend less money, get everyone access to what they need when they need it, keep the bad guys out, keep us out of the headlines, and the CEO would really, really, like not to go to jail” solution.

While a touch cynical, I think Jeff’s post points to a central issue in IAM.  Without a long-range plan, that’s all you’re going to get out of an implementation, and I don’t think that will work in the long term under any circumstances.

Proper IAM implementations do not and cannot occur by accident.  Somewhere along the line, the C-level is going to wonder why so much equipment and so many resources are being used to maintain and track disparate IAM systems. Strategic planning is essential from the moment the LDAP schema is laid out, through HCM incorporation, to provisioning, reconciliation, GRC, and federated relationships, to ensure you’re getting more than tactical stopgaps.

Data Knowledge and Risk Part II

When we build a set of attributes whose totality indicates a person, thing, or state upon which we can take action, we need to make sure that the data is both accurate and complete.  Unfortunately, you rarely get either.  Even the most diligent will make data entry errors or find the information unavailable at the time of entry.  This problem would be manageable if there were clear rules for handling missing information; however, as a practitioner, you rarely run into such pleasant territory.

A null can mean anything: the data was deleted, the data is missing, the data is unknown, it does not apply, or the data was never entered.  By definition a null set is an empty set, so if you stare long enough you can hallucinate whatever you want into the empty space.  Given the foregoing, when we start to do joins across disparate repositories and aggregate data, it can become impossible to make logical inferences.  Ambiguity in data may be tolerable for humans because they muddle their way through, but for computers in their binary universe it’s a mess.  The null has such an adverse effect on data integrity that key attributes in SQL databases are frequently constrained so they cannot be null.

Any store of data is a store of information about the world; it is the system’s knowledge of the world.  The more uncertain that knowledge and the more inaccurate the data, the bigger the impact on productivity and decision making.  That impact increases risk, and can do so significantly and cumulatively.  It also creates a problem for basic induction, i.e., instance confirmation.  This is not a new or original observation; how long have companies been working on master data management?  As I mentioned in Part I, IAM initiatives force changes: critical data gets verified, corrected, or filled in, and risk-reduction policies are enforced.  That has tangential benefits to the company, benefits easily missed.
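The join problem above can be sketched in a few lines. This is a hypothetical illustration (the record fields are invented, not from any real repository): a None in a join key can mean deleted, missing, unknown, or not applicable, so two None values should never be treated as a match.

```python
# Hypothetical records from two disparate repositories.
hr_records = [
    {"emp_id": "1001", "dept": "Finance"},
    {"emp_id": None,   "dept": "IT"},        # key was never entered
]
ldap_records = [
    {"emp_id": "1001", "uid": "jdoe"},
    {"emp_id": None,   "uid": "orphan1"},    # key is unknown
]

def join_on_emp_id(left, right):
    """Inner join that refuses to match on None: None == None tells us nothing."""
    return [
        {**l, **r}
        for l in left
        for r in right
        if l["emp_id"] is not None and l["emp_id"] == r["emp_id"]
    ]

joined = join_on_emp_id(hr_records, ldap_records)
# Only the record with a real key survives. A naive equality test would also
# have "matched" the two unrelated None rows, producing a false inference.
```

A SQL NOT NULL constraint on the key attribute is the database-side version of the same guard: it pushes the "what does empty mean?" question back to the point of entry, where a human can still answer it.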

Data, Knowledge and Risk Part I

One of the many ways a centralized IAM initiative lowers your risk is by forcing many different departments to reduce undocumented courses of action into machine-interpretable decisions.  For example, prior to the IAM system, the procedure may have been to get a phone call and take an action, or perhaps to “go ask X.”  Knowledge that resides outside the system, but that the system requires, must be logically organized and input into the system.  Sometimes you find a situation where the logic used cannot easily be reduced to a truth test because the information required is missing, unknown, or inaccurate.  When this happens, decisions need to be made about whether it is worth fixing or filling in the information.  This is where things get interesting, where IT needs meet political realities and you hear customers say, “Let’s let that one go” or “The CIO will never take that back to the business.” Returning to risk reduction: by automating courses of action or standard operating procedures and reducing them to machine-understandable logic, we gain the full set of data manipulation tools that allows us to properly track, control, and secure the processes based on well-established principles, thereby lowering our risks. Next I will look at missing information in a little more depth.
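As a minimal sketch of what "reducing a procedure to a truth test" means, consider a hypothetical access rule (the field names and the rule itself are invented for illustration). The interesting part is the third outcome: when required information is missing, the rule cannot yet be reduced to True or False, which is exactly the point where the political conversation starts.

```python
def may_grant_access(user):
    """Reduce an undocumented 'go ask X' rule to a truth test.

    Returns True or False when the rule can be evaluated, and None when
    required information is missing and a human decision is still needed.
    """
    cost_center = user.get("cost_center")
    manager_approved = user.get("manager_approved")
    if cost_center is None or manager_approved is None:
        return None  # cannot reduce to a truth value yet: data is missing
    # Hypothetical rule: approved by manager AND in a cost center under "42".
    return manager_approved and cost_center.startswith("42")

print(may_grant_access({"cost_center": "4210", "manager_approved": True}))  # True
print(may_grant_access({"cost_center": "4210"}))                            # None
```

Every None that comes back is a case where the organization must either fix the data or accept that the decision stays outside the system.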


Runtime Engine Mismatch

In NW IdM SP2, a little notice pops up telling you that the Windows runtime engine is deprecated and not to use it unless you have no choice, viz., you must run a function specific to VBScript.  I noticed today, when I loaded the “Reconcile Roles and Privileges” job from the wizard (which must be run when you alter roles or privileges), that the default engine is Java but the script was written in VBScript syntax.  Since the script relies on an internal function of the DSE, your choices are obvious: switch the engine back to Windows or rewrite the script in JavaScript.  When doing a new install, you may want to pay attention to that mismatch.  I don’t know how common or frequent it may be, but an inspection of the runtime engine, the script language, and the script’s icon will let you know quickly.

ASUG Webinar on NW IdM

Frank Buchholz and Torgeir Pedersen gave a presentation today on GRC and CUA.  Three statements interested me.  Frank said, “Step by step we’ll move IdM to the AS Java. Starting with IdM 7.1 the user interface will be generated by AS Java.”  This makes it a little clearer when that is going to happen.  CUA will be completely gone over the next seven years. See below for the third point.

Architectural Changes and NW IdM

If a large software vendor makes fundamental changes to an application, it can negatively impact one’s provisioning.  Depending on the solution used, one may have to wait for the vendor to support the change natively or write one’s own code.  One of the benefits of using NetWeaver Identity Manager is its flexibility; it’s really an IAM toolkit.  The downside of this is having to do more configuration.  The upside can be seen in the following illustration.  When Exchange 2007 was released, the RUS process was history; it was necessary to use the console or cmdlets to create mailboxes.  Those who had already migrated to Exchange 2007 last year were forced to roll their own Rube Goldberg device while waiting for their IAM tool to natively support E2K7.  In NW IdM it was an easy process, since you can natively execute a shell command or script.  This is one of the features I really like about the product: it allows enterprises to adapt quickly to unplanned changes.
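As a sketch of the "shell out to a cmdlet" approach, here is roughly what a provisioning pass would need to assemble and execute. `Enable-Mailbox` with `-Identity` and `-Database` is the real Exchange 2007 cmdlet; the account name, server, and database path below are hypothetical, and how you wire the command into your IAM tool is product-specific.

```python
import subprocess  # only needed for the actual run on the Exchange server


def build_enable_mailbox_cmd(identity, database):
    """Build the command line a provisioning pass would shell out to."""
    ps = f"Enable-Mailbox -Identity '{identity}' -Database '{database}'"
    return ["powershell.exe", "-NoProfile", "-Command", ps]


cmd = build_enable_mailbox_cmd(
    "CORP\\jdoe",                                        # hypothetical account
    "MBXSRV01\\First Storage Group\\Mailbox Database",   # hypothetical database path
)
# On a host with the Exchange Management Shell available, this would be executed as:
#   subprocess.run(cmd, check=True)
```

The point is not the wrapper itself but that a tool which can run arbitrary shell commands lets you bridge the gap the day the vendor changes its API, rather than the day your IAM vendor ships a connector.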

Update 2008/07/10: Torgeir said on a webinar today that SAP will be providing a native framework/connector for E2K7.