IdM Business Case Financial Forecasting

I think business cases are important, but it is not necessary or even beneficial to be overly precise and granular in the quantitative sections except as a form of organizational signaling (look how thorough I am).  If you compared the pro formas I did as a 28-year-old with the ones I do today, you would see a lot more rigor back then.  That rigor was a complete waste of time.

Last year I did a strategic engagement for a large multinational.  The whole experience was, shall we say, less than rewarding.  When things go wrong you must ultimately blame yourself, for one simple reason: if you are responsible for all the good that happens to you, then you are responsible for all the bad.

Part of this engagement was to develop a business case, which I never quite got to, due to all the time wasted hand-wringing over PowerPoint slides (which seemed to be the required mode of management communication within the company) and waiting for feedback.  Regardless, I was handed a template for a business case (which looked like it had been developed for manufacturing, not IT), and it read like it was written by a finance nerd in an MBA program.  I did find out who wrote it for them (a large multinational consulting firm), and the firm was apparently being paid by the bit.  It had an extraordinary amount of granularity, and completing it would require a team just to collect all the inputs.  In short, it was a complete waste of time.

Why would I say that?  Because bottom-up detail does not equal forecast accuracy.  When you develop a financial model based solely on internal estimates, it is going to be way off.  The only way to get any degree of accuracy would be to have your CIO call three other CIOs at companies whose size and complexity are a rough match and ask them how long it took to deploy and how much it cost.  That beats the vast majority of internal estimates.  The hardest hurdle to get over is IT’s confidence in its ability to do a better forecast.  Will they?  Maybe.  Statistics indicate they will not.

If the outside option is not available, then use your company’s own experience with IT projects of equal complexity.  You will save a lot of time and effort and perhaps improve your accuracy.
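
To make the arithmetic of that “outside view” concrete, here is a minimal sketch in Python.  Every number in it is invented purely for illustration; the point is only that the median of a few comparable, real deployments is a better baseline than a hopeful bottom-up model.

from statistics import median

# What three roughly comparable companies actually experienced (made-up numbers).
peer_deployments = [
    {"months": 14, "cost_musd": 2.1},
    {"months": 20, "cost_musd": 2.9},
    {"months": 30, "cost_musd": 4.2},
]

# A bottom-up internal estimate (also made up, and typically optimistic).
internal_estimate = {"months": 9, "cost_musd": 1.5}

# The "outside view": anchor on the middle of what comparable organizations actually did.
outside_view = {
    "months": median(p["months"] for p in peer_deployments),
    "cost_musd": median(p["cost_musd"] for p in peer_deployments),
}

print("Outside-view forecast:", outside_view)      # {'months': 20, 'cost_musd': 2.9}
print("Internal estimate:    ", internal_estimate)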

Building a Business Case for Identity & Access Management

When I worked for a large corporation I was frequently tasked with building a business case without a budget; that is, I wasn’t able to hire any consultants to assist me.  In some cases deadlines were relatively short, so it was fairly difficult to get it completed.  After the Internet came around, more than once I was saved by people willing to share business cases they had developed.  Therefore I have uploaded an economic impact model that comprises two documents, an Excel spreadsheet and a Word document, which should cover the basic needs of most users.  I have other, more sophisticated models besides this one (for example, a business process and knowledge management re-engineering model that compares the economics of the current state versus the future state), but for most people this should suffice to get you started.  If you find it useful, just leave me a comment.

It can be downloaded here.

Update 20120801:  Fixed the link again

How Many Problems with Persistent Data Does a Unique Identifier Solve?

The answer is zero.  A unique identifier adds nothing to any logical problem we have with our data.  Let’s see why this is true.  I have two sets of data from different systems, which represent information or attributes about a real-world user.  Those data elements are indistinguishable from each other.  Perhaps they are first name, last name, city and state.  They are identical as far as I can tell.  If I add a unique identifier, do I know anything more about them?  I just know they are no longer identical, and yet they may be, in the real world, the same person.  By adding a unique identifier I may have made a distinction which is false.  Its impact will only be deleterious, never beneficial.  The unique identifier becomes ornamental.  Metaphorically, it is like placing a medallion around the neck of one of the famous twins and still not knowing whether it’s Tweedledee or Tweedledum.  At least in that case I could rename them to something like Dee and Notdee, which would be meaningful to an observer.  In the foregoing example, however, we are already dealing with a representation of an entity, and the identifier adds nothing.

Now let’s add several more attributes, for example, title and department.  If I can now easily distinguish whether they are the same person or not, I have accomplished my goal, and I still have not added a unique identifier.  The smallest subset of elements that distinguishes one set from another is a suitable key if the data is in a database, and I still haven’t added a unique identifier.  So then how are unique identifiers useful?  They are useful within a context in which we are programmatically creating many closely similar but not identical objects whose existence is ephemeral.  When we are combining data from many different contexts, they solve nothing; they are just another attribute.
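
A small sketch of the same point in Python; the attribute values are invented, and the tuple “key” at the end just stands in for whatever minimal attribute subset your own data supports.

import uuid

# Two records about (possibly) the same real-world person, from two systems.
a = {"first": "Jane", "last": "Doe", "city": "Austin", "state": "TX"}
b = {"first": "Jane", "last": "Doe", "city": "Austin", "state": "TX"}
print(a == b)   # True: indistinguishable, yet we still don't know if they are one person.

# Add a unique identifier to each.  The records now compare as different,
# but we know nothing more about whether they refer to the same person.
a["id"] = str(uuid.uuid4())
b["id"] = str(uuid.uuid4())
print(a == b)   # False: a distinction that may be false in the real world.

# Add more real attributes instead.  Attributes that differ between the records
# are what actually let us tell them apart.
a.update({"title": "Controller", "department": "Finance"})
b.update({"title": "Nurse", "department": "Oncology"})

def natural_key(rec):
    """A candidate natural key built from real attributes (here title and department do the distinguishing)."""
    return (rec["first"], rec["last"], rec["city"], rec["state"],
            rec["title"], rec["department"])

print(natural_key(a) == natural_key(b))   # False, and no unique identifier was needed.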

The other side of the article

It’s seldom that I publish more than one blog post on a single piece, but Mark Diodati’s article “Changing times for identity management” (login required) spoke of two main themes that I felt needed to be discussed.  In an article on IdM Thoughtplace, I looked into some issues of what constitutes “New School” IdM.

In this piece, I’d like to comment on a couple of points that Mark makes that I particularly agree with.

First off, Mark mentions that thorough analysis and review of IdM offerings is essential.  The selection team/steering committee needs to remember that no IdM product exists in a vacuum.  Testing against ERP, enterprise LDAP/AD and other key systems is essential, and involving a pilot group is key as well.  I’d go a step beyond what Mark specifies by adding that your pilot group needs to be multi-disciplinary.  Just IT or Help Desk folks won’t cut it here.  Make sure there are some HR and ERP users along with other “typical” users in your organization.  You’ll need to do a little more hand-holding and training earlier than you’d like, but you’ll get better responses and metrics in return.

I’m also in agreement that you should review all offerings and available features/upgrades from your current infrastructure.  That “buried treasure” could be the key to keeping your infrastructure secure and compliant.  Also, find every way possible to use and reuse your current infrastructure; it can pay off in the long run.

It’s a tough economy out there, but that does not mean you should stop your review of IdM improvements.  Use the current time for evaluation and planning.  Bring some vendors in for a PoC to make sure their offerings fit into your current infrastructure.  The best place to start looking is right in your server rooms and data centers.  Go to it!

Importing the SAP Provisioning Framework

One of the main reasons to go with SAP NetWeaver Identity Management is its integration with other SAP modules.  The main way this is done is through something called the SAP Provisioning Framework, which comes bundled with the product.

There are a couple of challenges to accessing the framework.  The first is how to load it.  The Framework exists as an import file which needs to be located; by default it can be found at “C:\Program Files\SAP\IdM\Identity Center\Templates\Identity Center\SAP Provisioning framework\SAP Provisioning Framework_Folder.mcc”

Now that we know where the Framework is located, we can load it via import/export from the NW IdM MMC console.  However, when loading the Framework you might get the following error message: “Could not import global script ’67/custom_generateHRID’ to identity center”.  I could not find any setting in import/export that allowed me to prevent the script from being processed.

After some research and poking around, I remembered that the SAP Provisioning Framework_Folder.mcc file is actually XML.  So I went through and searched on the phrase “custom_generateHRID” and found exactly one reference, in the <SCRIPTNAME> element of the block below:

      <GLOBALSCRIPT>
         <SCRIPTREVISIONNUMBER/>
         <SCRIPTLASTCHANGE>2007-10-04 12:52:52.7</SCRIPTLASTCHANGE>
         <SCRIPTLANGUAGE>JScript</SCRIPTLANGUAGE>
         <SCRIPTID>67</SCRIPTID>
         <SCRIPTDEFINITION>{B64}Ly8gTWFpbiBmdW5jdGlvbjogY3VzdG9tX2dlbmVyYXRlSFJJRA0KDQpmdW5jdGlvbiBjdXN0b21fZ2VuZXJhdGVIUklEKFBhcil7DQoJcmV0dXJuICIiOw0KfQ0K</SCRIPTDEFINITION>
         <SCRIPTLOCKDATE/>
         <SCRIPTHASH>a2b6834ea85aff0bae2559222d861c78</SCRIPTHASH>
         <SCRIPTDESCRIPTION/>
         <SCRIPTNAME>custom_generateHRID</SCRIPTNAME>
         <SCRIPTLOCKSTATE>0</SCRIPTLOCKSTATE>
      </GLOBALSCRIPT>

So, being the intrepid guy that I am, I deleted that line and tried the import again.  It worked like a charm!  Not sure what to take away from this, but I’m glad I solved the problem.  Has anyone else seen this problem and solved it a different way?
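
If you would rather script that edit than poke at the file by hand, here is a minimal sketch in Python.  It is an assumption-laden alternative to what I actually did: it treats the .mcc file as plain XML (which it appeared to be), removes the entire offending <GLOBALSCRIPT> element rather than a single line, and uses a made-up file path, so work on a backup copy.

import xml.etree.ElementTree as ET

# Illustrative path; point this at a backup copy of your own .mcc file.
MCC_FILE = r"C:\Temp\SAP Provisioning Framework_Folder.mcc"

tree = ET.parse(MCC_FILE)
root = tree.getroot()

# ElementTree can only remove an element via its parent, so walk every element
# and drop any <GLOBALSCRIPT> child whose <SCRIPTNAME> is custom_generateHRID.
for parent in list(root.iter()):
    for child in list(parent):
        if (child.tag == "GLOBALSCRIPT"
                and (child.findtext("SCRIPTNAME") or "").strip() == "custom_generateHRID"):
            parent.remove(child)

tree.write(MCC_FILE, encoding="utf-8", xml_declaration=True)

Incidentally, the Base64 in <SCRIPTDEFINITION> appears to decode to a stub JScript function that simply returns an empty string, which may be why nothing obvious breaks once the entry is gone.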

VDS Logging

The astute observer will notice that the most recent releases of SAP’s NetWeaver Virtual Directory Server are missing the logging control buttons.  There is a very good reason for this seemingly missing functionality.  Much like NetWeaver Identity Management, VDS is also merging into NetWeaver, specifically into NetWeaver’s logging framework.  This means there is no need for VDS to offer internal logging.

However, VDS also offers the ability to run in a “Standalone mode,” which allows VDS to run independently of NetWeaver.  If you plan on running in this mode, you’ll need to make the following configuration tweak in order to access the logs:

Update the file standalonelog.prop, which can be found in the Configurations folder.  If you do not have this file, information on creating it can be found in the SAP NetWeaver Identity Management Operations Guide, which is available on SDN.  The file is a basic text file that sets the log level and the desired location of the log file.

Once this file is configured, it needs to be placed in the Work Area folder (typically underneath the Configurations folder).  Note that creating this file will not bring the buttons back; it will only write the logs to the path specified in standalonelog.prop.

From what I understand the internal log viewer will be back in the next Service Pack for VDS.  It will be good to have it back.

Whitepaper – Creating a multi-step workflow for a NetWeaver IdM 7.0 Workflow task

I’ve written a whitepaper that describes how to create a multi-step workflow for a NetWeaver IdM 7.0 Workflow task, using a modal dialog window and JavaScript.  The hope is to improve the overall usability of IdM 7.0’s workflow tasks.

Link to whitepaper

Project Scope and Sustainability

(This post was written by Matt Pollicove)

One thing I’ve noticed when talking to people about Identity Management projects involves how to determine the project’s overall scope.  “How do I scope this?” they will say to me.  Now that’s kind of tough to answer right off the cuff, especially when considering Pollicove’s Law of Provisioning, which basically says there’s no guarantee that companies in the same vertical will work the same way.

However, I think there are some best practices that can be worked with when considering implementing an Identity Management solution:

1. Make sure you have executive sponsorship.  C-level support is going to be important in balancing the needs of your stakeholders and their budget dollars.

2. Make sure you have a good plan of what you want your Identity Management solution to cover.  An essential part of this is conducting a thorough assessment.  Document everything, diagram existing processes, then take them apart and put them back together the way they should be, then do it again.  This should be done with a combination of internal and external resources.  Internal resources know how current systems are configured and interact.  External resources will offer an impartial assessment of how these systems can interact more efficiently, and will also be helpful in determining which Identity Management products will work best in your infrastructure.

3. When building your plan, know where you’re starting from.  What will be used as your authoritative store?  How will it be built?

4. Where are you going to?  What will you provision to?  What will you control access to?

5. How are you going from 3 to 4?  How will you engineer your changes?  What will the phases of your project consist of?

Item 4 is probably the most important part of the process.  Many a project has suffered due to overreaching phase objectives. Carefully define what you want to achieve in each phase. 

Data cleansing and analysis is almost always your first phase.  If you don’t have clean data, you won’t have a clean project; a minimal sketch of what that first pass can look like follows the list below.  Future phases can deal with:

· Creating an Authoritative Store
· Provisioning to essential systems
· Password management
· Role management
· Provisioning to secondary systems
· Etc.
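
As promised above, here is a minimal sketch in Python of what that first cleansing pass can look like.  The records and field names are made up for illustration; real projects will add fuzzier matching, but the shape is the same: normalize, group on a candidate key, and review whatever collides.

from collections import defaultdict

# Illustrative records only; the field names stand in for whatever your sources hold.
records = [
    {"first": " Jane", "last": "DOE",   "email": "JDoe@Example.com"},
    {"first": "Jane",  "last": "Doe",   "email": "jdoe@example.com "},
    {"first": "John",  "last": "Smith", "email": "jsmith@example.com"},
]

def normalize(rec):
    """Trim whitespace and lower-case the fields used for matching."""
    return {k: v.strip().lower() for k, v in rec.items()}

# Group records by a simple match key; any group with more than one entry is a
# candidate duplicate that needs a human (or a better rule) to resolve.
groups = defaultdict(list)
for rec in map(normalize, records):
    groups[(rec["first"], rec["last"], rec["email"])].append(rec)

duplicates = {key: recs for key, recs in groups.items() if len(recs) > 1}
print(duplicates)   # the two "Jane Doe" rows collapse onto one key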

So the big question is: what order does this happen in, and how long will it take?  I always suggest going after the “low-hanging fruit” first, that is, the easiest objectives that will show the biggest net gain.  As part of number 1 above, think about solving automation needs and compliance needs, and about reducing the cost of password management requests to the help desk.

How long will it take?  As long as it has to.  This is going to be a major project that will affect many systems and departments.  Take it slow and easy.  Test it thoroughly and make sure there’s a good knowledge management/training initiative to let your users know what’s happening and how everyone will benefit.  It’s never good if your users equate an Identity Management initiative with a foul-tasting medicine.  This includes your stakeholders.