Importing the SAP Provisioning Framework

One of the main reasons organizations choose SAP NetWeaver Identity Management is its integration with other SAP modules.  The main way this is accomplished is through the SAP Provisioning Framework, which comes bundled with the product.

There are a couple of challenges to accessing the Framework.  The first is simply loading it: the Framework exists as an import file which needs to be located.  By default the import file exists in “C:\Program Files\SAP\IdM\Identity Center\Templates\Identity Center\SAP Provisioning framework\SAP Provisioning Framework_Folder.mcc”

Now that we know where the Framework is located, we can load it via the import/export function in the NW IDM MMC console. However, when loading the Framework you might get the following error message: “Could not import global script ’67/custom_generateHRID’ to identity center”. I could not find any setting in the import/export function that would prevent the script from being processed.

After some research and poking around, I remembered that the SAP Provisioning Framework_Folder.mcc file is actually XML.  So I searched it for the phrase “custom_generateHRID” and found exactly one reference (highlight added):

         <SCRIPTLASTCHANGE>2007-10-04 12:52:52.7</SCRIPTLASTCHANGE>
So being the intrepid guy that I am, I deleted the highlighted line and tried the import again.  It worked like a charm!  Not sure what to take away from this, but I’m glad I solved the problem.  Has anyone else seen this problem and solved it a different way?
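The fix above amounts to deleting the single line that references the problem script before running the import. A minimal Python sketch of that clean-up is below; the file path and the marker string come straight from the post, but this is an illustrative workaround, not an SAP-supported procedure. Back up the .mcc file first, and note that blindly removing lines assumes, as in my case, that the reference sits on its own line.

```python
def strip_script_reference(text: str, marker: str = "custom_generateHRID") -> str:
    """Return the file contents with any line mentioning `marker` removed."""
    kept = [line for line in text.splitlines(keepends=True) if marker not in line]
    return "".join(kept)

# Example usage (run only after backing up the original .mcc file):
# MCC_PATH = r"C:\Program Files\SAP\IdM\Identity Center\Templates" \
#            r"\Identity Center\SAP Provisioning framework" \
#            r"\SAP Provisioning Framework_Folder.mcc"
# with open(MCC_PATH, encoding="utf-8") as f:
#     cleaned = strip_script_reference(f.read())
# with open(MCC_PATH + ".cleaned", "w", encoding="utf-8") as f:
#     f.write(cleaned)
```

Writing the result to a separate `.cleaned` file keeps the original intact in case the import still fails for another reason.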

Late to the Party and Nothing New

There is an old cliche that goes, “The mediocre are always at their best,” and that certainly applies to Microsoft’s announcement surrounding ILM “2”, now officially Forefront Identity Manager 2010 (FIM).  Brad Turner thought it should be Forefront Identity Lifecycle Manager.  I submit Forefront Lifecycle Identity Management System Year 2010 would have been even better.  Somehow the extra words and the clunky sound of saying it out loud are an apt aural simile for the technologies they crashed together to create it.  The product is heralded on the web site with these words: “Identity Management is about to get a lot easier (sic)”.  The clear advantage of this product is that it comes from Microsoft, whatever you may wish to infer from that.  They are trumpeting the self-service capabilities, which I believe have diminishing returns; there is a point at which self service becomes a time sink for busy people.  The creation of workflows appears to be greatly improved and easier to do.  There is little improvement in the back end, and underneath it all is Microsoft’s metadirectory.  In the end you have to wait until 2010 to be underwhelmed.  For many it will be good enough.

In the same vein is an article at SC Magazine by one Mark Wilcox, principal product manager at Oracle.  Mr. Wilcox re-discovers, independently no doubt, the advantages of using virtual directory servers for mergers and acquisitions.  Not to be unduly hard on Mr. Wilcox, but that was well established by at least 2004, and you could argue earlier.  At least they can prove the usefulness of their own product as they attempt to swallow the Sun.  My favourite line from the article is this sentence in the opening paragraph: “M&A delivers its promise through the successful integration of two organizations and their business processes.”  That strikes me as an odd statement, because you can successfully integrate two companies and never turn a profit.  Only returning money to the shareholders fulfills the promise.

Process Re-engineering and Identity Management

Ash of Ash’s Identity Management Rantings has a post on process re-engineering principles in which he makes this statement: “It’s important to focus primary re-engineering efforts on areas that can positively impact identity data.”  Emphasis is in the original post.  I will agree with that statement with a qualification: impacting identity data makes sense if, by doing so, we lower transaction costs, reduce risk or increase corporate productivity.  If I make something spit out a result twice as fast as yesterday, the process may only be generating excess capacity or getting to a wait state faster.  The change makes sense if at the same time I can eliminate the position of a highly paid administrator, re-purpose them, or replace them with a lower-cost person.

Later in the comments, Ash states this,

I think there is a misconception that optimization=automation, which shouldn’t be the primary goal of optimization in an identity project. The primary objective in my opinion is data integrity, while automation is a nice side-effect.

I am going to assume he is using data integrity in this sentence to mean that the data is consistent throughout, accessible when I need it, and correct.  Data integrity can impact both productivity and risk reduction, but it cannot reduce transaction costs unless our current data is a completely useless mess.  Typically clean-up is done with an automated tool of some kind once we have a standard to compare against.  It seems to me that automation can impact all three while also increasing data integrity.  Therefore automation is not a side effect but a critical component.  I am assuming that we are automating an efficient process and not some Rube Goldberg machine.  If I have misunderstood anything, I invite Ash to correct me and clarify.

IDM in need of a fundamental change

Currently within the marketplace, apart from SAP’s Netweaver Identity Management, there are IDM offerings from many of the other big names, namely:

and a couple of smaller players too:

just to name a few.  But in this day and age, with software in all other spaces evolving so rapidly and becoming ever easier and more intuitive to use, IDM suites don’t seem to be improving in this area.  The majority of IDM products have very dated and tiresome interfaces.  A truly revolutionary IDM product would offer the following:

  • an easy to use, intuitive interface
  • a powerful set of development tools
  • functionality encompassed within modules to make development, administration, customization and upgrades much easier
  • a large toolbox of commonly used functionality modules to enable rapid development and code re-use
  • a strong set of compliance tools
  • transparency when it comes to system monitoring and debugging
  • role and identity based access management
  • tight and easy integration with external systems
  • open source and open standards; enabling community development which fosters rapid creativity

IDM implementations would be far more useful, and far easier to deliver, if the products addressed the features above.

Identity Theft and Enterprise Identity Management

When I tell people that I work in the Identity Management field, the first comment is usually something like, “Wow, identity theft, cool stuff.  What should I be doing to protect myself?” Sometimes I’ll try to explain about user life cycle provisioning, role management, meta directories, compliance, audit and the other technologies and concepts that I consider for Enterprise customers, but for some reason that’s not terribly interesting to most people.

However, I recently started thinking about it. What are we really doing here? At the very core, it’s all about defining and ensuring the enterprise user’s identity within the organization. So in essence, when you think about it, we are concerned with the idea of identity theft: making sure the right individuals are logging in with the right credentials, and then making sure that those individuals do not have more access than they are entitled to.

So maybe now my new answer will be: “Yeah, kind of, except that I deal with preventing identity theft at the corporate level, reducing fraud and eliminating risk.”  Then I can go on and explain about user life cycle provisioning, role management…

MSKEYs and MSKEYVALUEs, Views and Tables

It does not take long before the NW IDM Administrator starts peeking under the hood to see what sits in the database. This is not a bad thing since there is a lot of good information that one can gain here, especially in the way of analytics and audit data. Understanding the database structure is also important in terms of developing and troubleshooting access rules and other Identity Store SELECT criteria. This posting will talk about a couple of key characteristics of the NW IDM Identity Store database. One of the great things about NW IDM is that there are virtually no differences between the Oracle and MS SQL database schemas so this article works for both environments.

This article will be one of several in a series covering the NW IDM Database structure.  We would be very interested in hearing about other areas of the back end that should be discussed.

When looking at these tables one of the first things that people notice is that there is seemingly no direct reference to the MSKEYVALUE attribute and that there is a value for something called an MSKEY. Consider this extract from a sample Identity Store regarding the user Johnny Scooter:

mskey  attr_id  aValue                      SearchValue
187    2        JScooter                    JSCOOTER
187    4        Johnny Scooter              JOHNNY SCOOTER
187    36       Johnny                      JOHNNY
187    37       732 123 4567                732 123 4567
187    39       Scooter                     SCOOTER
187    40       Information Systems         INFORMATION SYSTEMS
187    41       732 765 4321                732 765 4321
187    42       Systems Engineer            SYSTEMS ENGINEER
187    49       Johnny.Scooter@nwidm.local  JOHNNY.SCOOTER@NWIDM.LOCAL

This is a snapshot from the MXI_VALUES table listing four columns: mskey, attr_id, aValue and SearchValue. (For a complete listing of the NW IDM Identity Store schema, take a look at this document.) aValue shows the attribute’s value in the form it was entered into the system, while SearchValue holds the same value in a globally consistent format for easy searching. So going back to our question from above: where is the MSKEYVALUE? If we look in MXI_ATTRIBUTES we’ll find:

Attr_ID  AttrName
2        MSKEYVALUE
49       AD_DN

This answers the first question of how to match up those attr_id values: there is a direct link between the two tables via the attr_id column. It also answers the second and more important question: an Attr_ID value of “2” corresponds to MSKEYVALUE. So what is going on here? To begin, let us look at an excerpt about our friend Johnny Scooter from the view MXIV_SENTRIES, which is a separate, friendlier representation of MXI_VALUES since it shows attribute names rather than numeric references.

mskey  AttrName             aValue        SearchValue
187    MX_PHONE_ADDITIONAL  732 123 4567  732 123 4567
187    MX_PHONE_PRIMARY     732 765 4321  732 765 4321

Basically, there are two identifiers in use by NW IDM. The first is the more publicly known MSKEYVALUE. This unique identifier is exposed via the Workflow User Interface and is easily seen in MXIV_SENTRIES; in MXI_VALUES, however, references to MSKEYVALUE are harder to root out, as we need to go through the attributes table. We also see the MSKEY field, which appears as an identifier in both the MXIV_SENTRIES view and the MXI_VALUES table. This tells us that NW IDM uses MSKEY as its internal unique identifier. Why is this? The most basic reason is that MSKEYVALUE is subject to change: if it were used as a foreign key link between tables, referential integrity issues would soon arise. Even though we have only spoken about user identities, all entries in the Identity Store (users, roles, privileges, entry types) have both MSKEYs and MSKEYVALUEs.

So to wrap up, what can we conclude from all of this?

  • MSKEYVALUE is the unique identifier used by NW IDM at the workflow layer and is changeable
  • MSKEY is the internal unique identifier and is not changeable. It is also used as a foreign key link between various NW IDM tables
  • MXI_VALUES is an NW IDM table holding information about all of the objects in NW IDM
  • MXIV_SENTRIES is an NW IDM view that holds a “friendlier” representation of MXI_VALUES

A Need for Standards

I came across an interesting eWeek blog entry in which Michael Vizard makes some valid points about the lack of standards in Identity Management, in particular that there is no real standard for creating physical means of proving identity. While a comprehensive framework makes sense for physical provisioning and Access Management, I have some concerns.  If we have a published framework for creating Access Management tokens, that makes it that much easier to compromise those standards.

Mitigating this concern is the fact that there are several ways to ensure the validity of the issued token.  The FIPS standard cited in the blog entry makes heavy use of PKI technologies.  I would assume other hashed attributes would be a part of the token as well.

My other primary concern is that the examples Vizard cites are both governmental in nature.  It would make much more sense to me if a private sector standard were cited as well.

It will be interesting to see how this develops in both the public and private sectors.