Useful NW IDM Functions

Recently I needed to do two types of lookups for the prototype of a user creation script.  I was able to do both lookups rather easily using the uSelect function.  This is a function that is often overlooked, but one that really lets a NetWeaver IDM developer get the most out of the Identity Store.

First off, I needed to do a lookup against a “blacklist” of illegal IDs. I created a small table in the Identity Store database, granted it the proper rights, and then executed the following JavaScript code:

// First -- check if AlphaPart is in the blacklist.
// If EvalBlackList > 0, the item was found in the blacklist and the ID is no good.
var BlackListQuery = "SELECT COUNT(*) FROM _Blacklist WHERE (BannedWord = '" + AlphaPart + "')";
var EvalBlackList = parseInt(UserFunc.uSelect(BlackListQuery));
// If the current AlphaPart is in the blacklist...
if (EvalBlackList != 0) {
    // ...the ID conflicts with a banned word -- handle the rejection here.
}

Basically, a query string needed to be constructed and then evaluated by uSelect.  The easiest way to check for a conflict is a SELECT COUNT query of the AlphaPart value against the blacklist table.  If the result is greater than 0, we have a conflict.  I then needed to do another query to make sure that the new MSKEYVALUE did not conflict with existing values.  The same technique is used, except this time we are going up against a view in the Identity Manager database.
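One caveat with this technique: the string concatenation breaks (and invites SQL injection) if the value ever contains an apostrophe. Below is a small defensive sketch; the helper name is my own invention, not part of the NW IDM API, and the uSelect call is shown only in a comment:

```javascript
// Hypothetical helper -- not a built-in NW IDM function.
// Doubles any embedded single quotes so the value can sit safely
// inside the SQL string literal, then builds the same COUNT query.
function buildBlacklistQuery(alphaPart) {
    var escaped = alphaPart.replace(/'/g, "''");
    return "SELECT COUNT(*) FROM _Blacklist WHERE (BannedWord = '" + escaped + "')";
}

// Inside the pass script you would then evaluate it as before:
// var EvalBlackList = parseInt(UserFunc.uSelect(buildBlacklistQuery(AlphaPart)));
```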

// Check if this value already exists as an MSKEYVALUE
// NOTE: THE IDENTITY STORE VALUE MUST BE HARD CODED! NW IDM will not allow an IDS lookup via uGetIDStore outside of action tasks.
var IDSNumb = "1";
var MSKEYQuery = "SELECT COUNT(MSKEY) FROM MXIV_SENTRIES WHERE (IS_ID = " + IDSNumb + ") AND (AttrName = 'MSKEYVALUE') AND (SearchValue LIKE '" + NewKey + "')";
var EvalMSKEY = parseInt(UserFunc.uSelect(MSKEYQuery));

This query was a little different, as it involved a few more elements.  First off, we need to hard code the Identity Store value.  There seems to be an issue with uGetIDStore when it is used outside of an action task.  I will be following up on this with folks at SAP.

// If the current NewKey is already in the Identity Store...
if (EvalMSKEY != 0) {
    // ...the proposed MSKEYVALUE conflicts with an existing entry -- handle it here.
}

The next interesting thing to note is that the query runs off of the MXIV_SENTRIES view. One might think that MXIV_ENTRIES would be the logical place to start, but it links into a lot of other tables, particularly audit tables, which would slow the script down to an unacceptable degree.
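The same query-builder idea applies to the MSKEYVALUE check. The sketch below (the helper name is mine, not an IDM function) just assembles the MXIV_SENTRIES query shown above, with the Identity Store ID still hard coded by the caller:

```javascript
// Hypothetical helper wrapping the MXIV_SENTRIES lookup.
// idStoreId must still be hard coded by the caller, per the note above.
function buildMskeyQuery(idStoreId, newKey) {
    return "SELECT COUNT(MSKEY) FROM MXIV_SENTRIES" +
           " WHERE (IS_ID = " + idStoreId + ")" +
           " AND (AttrName = 'MSKEYVALUE')" +
           " AND (SearchValue LIKE '" + newKey + "')";
}

// var EvalMSKEY = parseInt(UserFunc.uSelect(buildMskeyQuery("1", NewKey)));
```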

I hope these techniques are useful to other NW IDM scripters out there.  My thanks to those engineers both in and out of SECUDE Global Consulting who helped out on this.  I’d be interested in seeing other people’s views on this issue and other scripting issues.

Issues with setting an AD password through Identity Center

The problem:
Setting the password for an Active Directory account through the NetWeaver Identity Center.

Possible Solutions:

Solution #1: IdM template job to set the AD password using VB script.
Issue 1.1: The most frequent error, and by far the most vague, was “Error: Object doesn’t support this property or method”. It never told us anything more than that; we never understood what the object was or what property or method it didn’t support.

Issue 1.2: The other error we encountered was “Error: Failed to set password on user. Error no: -2147016654.”; this normally occurred only when running the script in TEST mode. Details about the error number pointed to it possibly being due to insufficient privileges.

Solution #2: A custom VB script to reset the AD password, called from a generic pass within Identity Center; we used Richard L. Mueller’s script, which can be found on his website.

Issue 2.1: This script worked perfectly from the command prompt on the APP server, but failed when we tried running it through a generic pass within the Identity Center. It failed with the same error mentioned in Issue 1.1.

Solution #3: Using a shell execute pass, run a ‘dsmod’ command to reset the AD password.

Issue 3.1: Like the script used in the previous solution, it worked perfectly fine from a command prompt, outside the Identity Center, but it failed to set the AD password when executed through a shell execute pass within the Identity Center. There was no real error in the job log; the job apparently ran successfully, but it probably just failed within the external command prompt with a ‘dsmod’ error.
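For reference, the command we were issuing had the usual dsmod shape. Building the command line in script form at least makes it easy to log exactly what the shell execute pass is about to run; the function name and the DN below are purely illustrative:

```javascript
// Builds the dsmod command line for setting an AD user's password:
//   dsmod user "<UserDN>" -pwd "<NewPassword>" -mustchpwd no
// Illustrative helper only -- not part of any IDM or Windows API.
function buildDsmodCommand(userDn, newPassword) {
    return 'dsmod user "' + userDn + '" -pwd "' + newPassword + '" -mustchpwd no';
}
```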

Solution #4: Use the command prompt ‘runas’ command to run the ‘dsmod’ command as the domain admin.

Issue 4.1: Again, this worked perfectly on the system, outside the Identity Center, but failed when executed through a shell execute pass within the Identity Center.

Solution #5: At this point I thought the direct nature of executing the scripts or the dsmod command might be what was denying me the necessary privileges. Thinking about it, being logged into the system as a domain admin, anything I run against the domain should ideally work, but this was apparently not the case. So I tried executing the script and the dsmod command in a more indirect way: a batch file that in turn executed a VBScript file created on the fly, containing the ‘dsmod’ command run via ‘runas’ through a WScript shell, all from an internal script within the Identity Center. Yes, I know; that’s a terribly long and inefficient way of accomplishing the task, but I was pushed to the limits of frustration.

Issue 5.1: Even after all that nonsense, it ran perfectly on the system, but failed to actually set the password from within the Identity Center. Again, the job ran successfully, but just failed within the external command prompt with a ‘dsmod’ error.

All this while, the common theme was that every solution worked on the base system but failed when run from within the Identity Center. Why? The security context is apparently different; even enforcing my admin privileges through the Identity Center, it still failed.

The only solution that would possibly work at this point is to create a new dispatcher, used solely for setting AD passwords, and use it with the existing set-password template job (with one minor change). Once the dispatcher is installed as a service, change the service’s logon credentials from the local system account to your domain admin credentials, and run your password set job with this new dispatcher instead. Before running it, edit the pwdNext script to remove the line that changes the AD password. Not removing this line will result in an error; we need to simply SET the password and stop there, rather than going on to change it as well. This should now, hopefully, allow you to successfully set AD passwords. To fine-tune this solution a little more, create a service account with only the ability to set passwords and use those credentials for the new dispatcher service.

In an ideal environment you wouldn’t have to waste all this time; the original template job should work well as is. But if you have to deal with an environment as picky as ours, and none of the previous five solutions work, this is one solution that may.

Useful SAP NetWeaver Identity Manager Documentation

Here’s a link to a great document on the Identity Manager Schema.

I’m looking forward to seeing more of this type of documentation from the NW IDM Development team.  This is a great listing of Entry Type definitions, attribute listings and ABAP mappings.

I’m looking forward to also seeing a document that discusses useful stored procedures and table descriptions as well, but this is a great start. 



Lighthouse Anyone?

SAP (the company) has started an initiative called ‘Lighthouse’ to actively market pilot projects for NW IdM 7. Customers who are interested will receive ‘special’ ramp-up benefits and are asked to share lessons learned and pain points during the implementation of the product. Might be a good opportunity for some to bash on consultants like us … No, seriously, anyone interested? Leave a comment if you are.

Coda: Unstructured Data

As promised, this is the last time I will ever write about the oxymoron “unstructured data”; I already feel like a Harpy in the Wood of the Self-Murderers, except I am torturing those who have committed intellectual suicide.  Some of you reading this may find this post harsh, but as Richard Bandler observed, you only need to feel insulted if it applies to you.

Google “unstructured data” and it returns 700,000 hits.  Wikipedia starts with this definition:

“Unstructured data (or unstructured information) refers to masses of (usually) computerized information which do either not have a data structure or one that is not easily readable by a machine.”

The hyperlink to data structure says this:  “In computer science, a data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data.”

I couldn’t make this up if I wanted to; so according to the Wikipedia author(s), the only things that would qualify as unstructured data are a pseudorandom number generator, books, magazines, and printed documents.  I can’t imagine this is what he had in mind when defining the term.  He most likely used unstructured knowledge to write it, that and a Crayola.

Those with whom I have argued over this term tend to respond in three ways.  Some, when shown the denotative nonsense of it, fall back to its connotative meaning: “I agree with you; it is an oxymoron, but the term is useful to distinguish between data in databases and all the other data.”  A utilitarian argument for a term whose descriptive power is based on a shared hallucination of meaning.  Apparently once you have gathered a sufficiently large number of idiot enthusiasts, you have a powerful semantic swarm for persuading the me-too masses.  Others, less bright or perhaps less intellectually honest, will argue that when you put the words together they take on new meaning, like mixing colors I suppose, or the German compound noun and LSD.  Now you really can see the meaning.  Finally, there are the invincibly ignorant, whom no argument ever conceived could convince otherwise.

What has this term contributed to the advancement of computer science, to knowledge, or to decision making?  Anyone?  As far as I can tell it has contributed only to marketing, to pseudo-intellectual posturing, to the pretense of knowledge, to the selling of quackery and, of course, as always, to a PhD thesis.

Risk Management and Information III

I closed part II of this post with a series of assumptions in order to move the discussion forward, and ended by asking the rhetorical question of what should be done to protect and control information.  In some cases the answer is: very little.  This may seem like apostasy from someone whose career has been in information security, but stringent controls on information can be just as damaging as too few.  The strictest policies I have ever seen on email and web usage were at a local government bureaucracy, not exactly a hotbed of original thinking.

So in the management of information we have paradox, a sort of Laffer curve of information security.  Paradox is a well-known characteristic of complex systems and multivalued logic.  Slow is fast, low risk is high risk, total knowledge is uncertainty, etc.  Think of the ‘wall’ between intelligence and law enforcement assets that existed prior to 9/11 here in the US.  When we encounter paradox it provides an opportunity for original thinking, between periods of grinding our teeth.  How long did Zeno’s paradoxes stand before they were solved by Newton and Leibniz with the creation of calculus*?

Information security wouldn’t be necessary if people could be trusted.  The only reason to spend money on security is to control, contain, or stop the damage of malicious people.  Information stored in database management systems is easy to control; other information, in the form of instant messaging, spreadsheets, emails, and word processing documents, is far more difficult.  There are content management solutions, digital rights solutions, IM gateways and the like.  How often do you find them utilized?  Frequently they are used piecemeal, a knee-jerk response to a consistent minor problem.  Despite this massive army of technical solutions, you rarely find a cohesive, integrated deployment of the technologies in the corporate landscape.  From this result we can draw one of the following conclusions: the risk is so low it makes no sense to spend the money; we are aware of the risk and impact but don’t believe it will happen; or we are uninformed, that is, ignorant of the risk.  It is difficult to convince someone to take measures against a risk that materializes only once every 10 years.  Anti-virus software wasn’t deployed in a lot of companies until email viruses took down their systems with regularity.  If I give you a paper cut every day, you won’t wait long to take action against me; but if I beat you in the parking lot with an ax handle once every ten years, you will be far more lax in your defensive posture, despite the enormous difference in damage.

When we have information properly classified by the information owners, it is necessary to think through who the other stakeholders are, and speak to them, before we start the lock-down process.  Who may find this information useful and valuable besides the group that created it?  What is their perspective?  What of others who may wish to acquire access to this information ad hoc because they believe they can add something?  Soliciting these other views allows us to make reasonable judgments about the sharing and use of information.  Once the foregoing is done, we need to consider whether the annual expected loss and probability we are dealing with is a regular paper cut or a body-damaging beating.  Here is the point we have reached:

  • External and internal risks identified across the value chain
  • Risks prioritized
  • Risk processes aligned with goal setting
  • Proper task organization and structure in place
  • Information classified
  • Annual expected loss and probabilities estimated
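For the last bullet, the conventional arithmetic is the Annualized Loss Expectancy: the loss from a single event times how often the event occurs per year. A minimal sketch, with figures invented purely to mirror the paper-cut versus ax-handle comparison:

```javascript
// Annualized Loss Expectancy (ALE):
//   single loss x (events per period / years in period).
// All figures below are illustrative, not real estimates.
function annualizedLossExpectancy(singleLoss, eventsPerPeriod, periodYears) {
    return singleLoss * eventsPerPeriod / periodYears;
}

// The daily "paper cut": tiny impact, ~250 working days a year.
var paperCut = annualizedLossExpectancy(50, 250, 1);      // 12500
// The "ax handle": huge impact, once every ten years.
var axHandle = annualizedLossExpectancy(500000, 1, 10);   // 50000
```

On these toy numbers the constant trickle actually costs less per year than the rare beating, which is exactly why the rare, severe risk deserves the defensive posture it rarely gets.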

Now access controls can be placed on the information.  If high-value information is not already stored in database management systems, create the conceptual models and create the database; this gives integrity and tighter controls.  Other forms of data may be stored and secured by application, whether Exchange offline databases, content management systems, and the like.  Identity management workflows can be set up to handle access requests.

Areas of beneficial research would be: intelligent agents (perhaps based on the work of Stephen Thaler) for permitting temporary access using stored individual models of existing permission grants; modeling of the dynamic system, looking for equilibrium between unrestricted and locked-down information; and better methods for creating and reducing knowledge into conceptual models that can be stored in and queried against truly relational database management systems.

There is quite a lot of work listed in these three parts that just isn’t being done in a disciplined way, or in many cases, done at all.  The majority of this work does not require heavy investment in new technologies, but it does require business process realignment.  Marco, I hope I answered your question.

* Philosophically they may still be unresolved.

Risk Management and Information II

I really wanted to write this sooner, but I am currently on a project.  In my previous post I raised some questions for Marco concerning three points of his post (see Risk Management and Information).  He responded, addressing each one.  Concerning my criticism of “unstructured data”, he chose to accept the term in its connotative usage.  I will make one final post on “unstructured data”, and that is the last I will say about it.

Marco goes on to reference a paper he and fellow researchers have published, and more than anything else, after I read it on the plane, the context for his post was clear and any misunderstanding was eliminated.  If one has an interest in Identity Analytics, it is worth reading.  They look at using mathematical modeling to provide guidance and predict the impact of policy choices, enabling better decision making.  At the end of his post, he asks me the following:

It would be of some interest to the readers of this blog if this statement could be elaborated (specifically in the space of IdM and information management) along with providing some recommendations/input/directions (hopefully beyond having to hire a consulting company).

I will attempt to answer that question while staying clear of methodology; there are obvious constraints in my current position.  Personally, if I were independent I would publish the entire thing, for one very simple reason: ideas are easy, execution of ideas difficult.  Twelve people would read it and the majority would fail to implement it properly.  This is the way the world works.  Great script a movie does not make.

Before one looks at information in all its forms, what is the purpose of risk management?  From my perspective, it is taking the knowledge one has about how the world works and translating that into prudent decision making, in the hope that success outweighs failure.  In business, as in life in general, there is nothing more important than proper decision making.  The entrepreneur and the executive in a large corporation will both make decisions with less than perfect knowledge, some good, some bad.

So in order to make prudent choices and decisions, businesses need an understanding of both their exogenous and endogenous risks across the entire value chain.  The determination of risks neither precedes nor succeeds the setting of business goals, but rather is temporally concomitant with goal setting.  Business goals are set with feedback from an existing dynamic environment, and as the environment evolves, the risks evolve, and the identification of changes in those risks should (but frequently doesn’t) act as a negative feedback loop on activity.  The distributions of risks themselves can be broadly placed into two domains: those that exhibit a Gaussian or normal distribution and those that are scale-free or follow a power-law distribution.  It is not always easy to know which one is being confronted; upper and lower bounds could be based on insufficient sample sizes.  Errors in decision making, even small errors, can have devastating consequences in scale-free networks; just ask the former employees of Bear Stearns.

Businesses need access to knowledge that will allow them to innovate, create and make prudent decisions.  Some of this knowledge confers a competitive advantage and some of it does not.  As I said in my previous post, the first order of business is to classify the information we have; if we have not determined the relative importance of this information, we do not know what we need to protect.  One could easily find oneself like a mad reductionist historian, satisfied to study the stains on the library wall while genuine knowledge gathers dust on the shelves.  One must confront the problem of scarcity, which concentrates efforts on protecting only the priority areas.  It is not possible to mitigate every risk to the enterprise.

The arrival of specialized information security practitioners in many corporations came with the advent of the internet.  The corporate fortress gave way to a walled city.  In many companies, information security has nothing to do with explicit risk management; its effect is to lower broader risks in a piecemeal fashion.  Many infosec personnel just watch the border or set toothless policy.  It wasn’t until legislation forced changes that many companies developed real processes; companies not impacted by legislation continue with sloppy practices.  I see it all the time.

Given the foregoing, before one looks at sophisticated controls on information, it should be obvious that there is a lot that can be done better.  Assume, for the sake of discussion, that the corporation has identified its external and internal risks across the value chain, its risk processes are aligned with goal setting, it has proper task organization, and it has a structure that permits enterprise risk management.  What should one do to protect and control its information?

Given the length of this post, I will continue in my next one.

2008-09-16 – edited the opening to clarify some ambiguities.

A Welcome thing to see

I was most pleased to have this brief article cross my vision today. As important as it is to have organizations develop IdM strategies, I believe it’s even more important that organizations develop ways of discussing how IdM should be used throughout an industry.

There is a certain economy of scale that can be achieved when people in a given industry come together to solve common problems, hence the success of Burton Catalyst, RSA, DIDW and other large security/identity conferences and presentations.  Since, to my knowledge, there are no CIdOs out there, we’ll just have to keep it at the CIO level for now.

Risk Management and Information

Marco Casassa Mont has a blog post titled Risk Management for Unstructured Data in Enterprises in which he states that he has been exploring “the implications of ‘unstructured data'”.  Before we address the larger questions in his post, let’s make proper distinctions.  Unstructured data does not exist, by definition.  What people mean by this oxymoron is data that uses different data structures.  It is obvious that applications for employment, SMS messages, e-mail, instant messaging, and non-disclosure agreements are all organized differently, but organized nonetheless; there is nothing unstructured about them.  Their form follows their logical function.  I believe the term gained popularity because it gave the analysts something new to write about, something new to tack on to “at the end of the day”, “that being said” or “it remains to be seen.”  I’m not sure who coined the phrase originally, but they probably thought it a profound insight.

Regardless of its marketing label or its form, we have what we had at the beginning: information.  Information that needs to be managed, stored, moved, shared, controlled and contained.  The principles for handling risks to information are well established, and they begin with data classification.  I have seen many intelligent people in large companies chase after the ephemera of “unstructured data” while lacking a data classification program.  Without classifying our information, without placing a relative value on it, we cannot properly prioritize or manage the risks.

Marco goes on to discuss his perception of current approaches and then states this:

I believe this is not enough. Solutions are required to help organizations (and decision makers) to: (1) fully understand the nature of the problem, based on their specific context and environment; (2) have a picture of their overall risk exposure; (3) make informed decisions on which approaches to follow, explain and predict the consequences and define appropriate policies; (4) explore trade-offs.

If I understood him correctly, his perception of current approaches is narrow in scope and his list of required solutions incomplete.  Managing risks to information is a defined problem domain, whether that data is stored on hard drives or in filing cabinets, whether it is sent via fiber or courier.  It is also one dimension among others where risks need to be controlled.

Marco closes his post with this question:

So far I have found no comprehensive approach/solution providing these features. Is anybody aware of any?

I’m not quite sure what he means by approach/solution, or whether he means in the public domain, but assuming he means any comprehensive methodology for implementing enterprise-wide risk management: we do, and it covers more than identity and information security.  There are practices in a similar vein at other consulting firms.  In closing, if I have miscategorized anything in his post, I encourage him to correct me.

New Whitepaper

The Identity and Access Management team here at SECUDE Global Consulting has created a new white paper titled “Strategies for Creating an Authoritative Store”.  The paper is currently hosted on the Business Trends Quarterly site and will soon be on our corporate website as well.

From the white paper:

Creation of an Authoritative Store is a key component of an Identity Management Infrastructure. The Authoritative Store can be created using a number of different strategies. The determination of the best strategy is by a thorough analysis of sources, database resources, available data synchronization tools and the IAM tools in use by the organization.

In the meantime, if you would like to read the paper, please email me at (matthew dot pollicove (at) secude-consulting dot com) or register (it’s free!) on BTQ’s website, where you can get more information on BI, GRC and other important technology areas.

I will post an update as soon as the paper is available on the corporate website.