Overlooked Risk in Middle Tier M&A?

If you have ever been part of a big public-company merger, the merger most likely included an audit and review of the IT assets, principally those that provide the accounting and reporting.  Post-merger, and before the two companies are interconnected, there is also a review of security policies to determine risks and gaps that could lead to compromise.  If there is a large difference in policy, interconnection can be delayed until the security differences are corrected and verified.  This behavior is prudent.  A data compromise can damage a company’s carefully guarded reputation and lead to significant losses; beyond lost sales, it can also drive the stock price down.

Private equity firms that buy and sell companies in the middle tier are strongly focused on the financial health of the company they are purchasing.  Certainly financial health indicates a well-run company.  Hours are spent structuring the deal and ensuring they know what they are acquiring; no one wants to be defrauded.  From a seller’s perspective, the goals are a high asking price and zero encumbrances.

From what I have seen, both buy side and sell side are paying little attention to either information security or physical security risks, even though middle-tier companies tend to have fewer resources and are more likely to have major security gaps, whether within their facilities or their network infrastructure.  Consider a scenario where you are buying or selling a company that has already been compromised, and the hackers are quietly lying in wait, collecting additional access credentials and elevating privileges.  Over time they will be able to exfiltrate all of the intellectual property.  Where the hacking is being done by a state actor, that property will be shared with domestic competitors.  If this is a platform company that has been built up over several years, this amounts to a staggering loss of value.  The buyer is accumulating exposure the way someone who sells naked options without holding the underlying asset accumulates exposure.  The same can be said for the supply chain, where downstream providers of services connected into the network increase the size and diversity of the threat landscape.  A compromise within this system, if not properly secured, could bring down years of work and destroy any equity built.  Any time one is purchasing or selling a company, one should take security exposure seriously and hire the teams necessary to do a thorough review.

I frequently hear people say that a business-ending compromise is a rare event.  How rare or improbable an event is matters less than the consequences of it occurring.  You cannot zero out risk, of course, but you should follow what works.  If one is not already doing this, I recommend the list below.  It applies to domestic acquisitions within first-world countries; cross-border buys add challenges of their own (e.g., FCPA exposure), but the list still applies at the macro level.

  1. A thorough review and harmonization of security policies.
  2. Reciprocal audit agreements with third-party suppliers.
  3. A thorough review of security controls.
  4. A network vulnerability assessment covering both internal networks and boundaries (see the sketch below).
  5. A penetration test, physical and digital.
  6. A review of patch management processes.
  7. A review of identity management practices and access control.
  8. A code audit of custom mission-critical applications.
  9. An up-to-date threat model.
  10. A physical security audit.
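
As a flavor of what item 4 involves at its crudest, here is a minimal Python sketch that checks whether hosts on the acquired network answer on sensitive ports before interconnection.  The hostnames and ports are hypothetical, and a real assessment would use dedicated scanning tools; this only illustrates the kind of boundary question being asked.

```python
# Crude reachability check across a candidate network boundary.
# Hosts and ports below are hypothetical examples, not real targets.
import socket

TARGETS = {
    "erp.acquired.example": [22, 443, 3389],
    "files.acquired.example": [139, 445],
}

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, ports in TARGETS.items():
    exposed = [p for p in ports if is_open(host, p)]
    if exposed:
        print(f"{host}: answering on {exposed} -- review before interconnection")
```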

Measuring Risk Objectively?

In order to manage the complexity of life and its accompanying uncertainties, we build models.  Models by their very nature are reductions; that is, we throw out a certain amount of information.  A historian writing a history of Frankfurt, Germany is not going to concern himself with spots on the floor of the Rathaus in 1888 (unless he is a post-modern reductionist).

Risk is itself an abstraction; it is certainly not real.  Being the victim of a specific risk, however, is real enough.  A more interesting question is whether risk is objective or subjective.  How we measure matters.  It may impress to show on a slide that the mail gateway anti-virus blocked ten million attempts in the last year, but it matters little when the consequences of a single failure can end the business.
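
To make that concrete, here is a back-of-the-envelope Python sketch with entirely hypothetical numbers: two risks with the same expected annual loss, only one of which the business can survive.  A slide that reports only event counts or expected loss cannot tell them apart.

```python
# Two hypothetical risks with identical expected annual loss.
p_nuisance, loss_nuisance = 0.99, 10_000        # near-certain, small (e.g. spam cleanup)
p_breach,   loss_breach   = 0.001, 9_900_000    # rare, catastrophic breach

print(p_nuisance * loss_nuisance)   # 9900.0 -- expected loss
print(p_breach * loss_breach)       # 9900.0 -- identical on a slide

equity = 5_000_000                  # hypothetical firm equity
print(loss_nuisance > equity)       # False: survivable every single time
print(loss_breach > equity)         # True: one occurrence ends the business
```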

The U.S. legal scholar Cass Sunstein, who coined the term “libertarian paternalism,” has commented on how small risks can become distorted in the public mind and amplified (normally via mass media) to the point that they influence public policy.  He uses the terms “availability cascade” (from the availability bias) and “probability neglect” to describe the basis for the entire process.  The same thing happens in any organization where one bad experience leads to ridiculous changes in policy.  In the US, think Love Canal or Times Beach.

So when we model a certain risk, the model is often driven by emotion or prejudice, and key elements are included or excluded accordingly.  It may take years to identify the errors.  I could be wrong, but I do not think risk can be measured objectively even with panels of experts, since they are subject to the same problems as the lumpenproletariat they feel superior to: bias, group-think, emotional amplification, poor statistical reasoning, priors, and so on.  Because of this, I agree with Paul Slovic: risk is subjective.

Blind to Collapse

There is a very good article on the fall of the Roman Empire by Ugo Bardi (h/t Vox Dei).  He applies system dynamics to explain the collapse.

Let’s start from the beginning…with the people who were contemporary to the collapse, the Romans themselves. Did they understand what was happening to them? This is a very important point: if a society, intended as its government, can understand that collapse is coming, can they do something to avoid it? It is relevant to our own situation, today.

In business, people use the myth of the boiled frog to explain our inability to see and adapt to the deleterious effects of change.  And while there are those who unflinchingly pursue the truth, they may be recognized as such only in the post-collapse analysis.  Decline is inevitable; the venal and the power-hungry will eventually seize control (it’s for your own good, or for the children), the virtue of a culture will be replaced by hedonistic calculus, technological sophistication reaches its zenith while education reaches its nadir, and everyone tries to saw off a limb while the tree falls.  “I got mine.”  Yes, you did.

Regardless of how much knowledge we accumulate, no matter how many collapsed civilizations, technological failures, or business cases we study, there will always be a new generation who, as Russell Kirk described, “are like flies of the summer,” caring little for what went before them and nothing for what comes after, who are curious only, and I mean that in the medieval sense.  The more complex a system becomes, the higher the cost of maintaining the status quo.  Eventually complexity reaches the point where the problems become insurmountable, and from what I have seen, the more centralized the decision-making authority, the faster the demise.

Kindle Fire Will Not Buy Us Much

There is a post over at the Volokh Conspiracy where the author, Stewart Baker, argues that Kindle Fire users will ultimately be more secure because Amazon is acting as a big HTTP proxy: by running everything through Amazon’s cloud, it will reduce the risk of endpoint compromise.  Instead of relying on your own ability to protect your device, Amazon will do it for you, on the assumption that they are more knowledgeable than you in information security matters.  I am not nearly as excited, for the following reasons:

1.  Without physical security there is no security.  If someone has physical access to a device, it is quite possible to subvert it for nefarious purposes.  Amazon cannot control that, and if it is done well they will not be able to detect it.  Ask Apple about all those jailbroken iPhones.  Additionally, many exploits rely on social engineering, and all the hardware in the world cannot stop you from making an error.

2.  A big filtering proxy in the cloud is just another filtering proxy.  At some point its pattern-recognition systems will produce false positives often enough that users will work overtime to get around them; that, or Amazon will have to loosen up the filtering.  One commenter correctly pointed out that AOL tried this before.
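
Some hedged base-rate arithmetic (all rates invented for illustration) shows why: when genuinely malicious traffic is rare, even an accurate filter blocks mostly legitimate requests, which is exactly what trains users to route around it.

```python
# Hypothetical base-rate arithmetic for a filtering proxy.
malicious_rate = 1e-4   # assume 1 in 10,000 requests is actually malicious
true_positive = 0.99    # assumed detection rate on malicious requests
false_positive = 0.01   # assumed rate of benign requests wrongly blocked

p_flagged = malicious_rate * true_positive + (1 - malicious_rate) * false_positive
p_malicious_given_flag = (malicious_rate * true_positive) / p_flagged
print(f"{p_malicious_given_flag:.1%} of blocked requests are actually malicious")
# ~1.0% -- the other ~99% are legitimate requests users will fight to get past
```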

3.  The Kindle Fire will have its own browser, which will have its own flaws and security problems.  That is inescapable.  Roll your own browser and you have to do your own code audits too; every change introduces new risks and possible regression errors.

4.  Risk will be more non-linear.  We may have fewer security problems, but the impact of any one of them will be far more severe.  A heavily defended system is always complex, and when a complex system goes down, the consequences are frequently catastrophic.

5.  Security is expensive.  The more complex the system, the higher the cost to defend it.  Security is frequently one of the first areas relaxed when costs are creeping up.  This is normally followed by a security failure of some kind, the firing of the security personnel, an increase in security spending on technological solutions, and a return to the beginning.  Think of this as the security-personnel scapegoating life cycle.  Amazon will not be immune.

In the end, the entire Amazon Kindle Fire ecosystem is just another system requiring defense, with the same kinds of problems as other systems.  I do not share the author’s optimism.

XACML? No Thanks

That there are no new problems seems widely understood (save by the child and the naïf), but rarely do people bother to understand the historical solutions to those problems; that is to say, we focus almost exclusively on the facts of the problem without ever looking at the principles or rules that may already be understood.  This kind of reflective thinking, along with analysis of principles derived from the experience of our predecessors, whether extant or having paid their debitum naturae, extracts a large cognitive cost.  “Math is hard,” the philosopher Barbie once observed, as is all real analysis.

What we frequently do, because it extracts a low cognitive cost, is simply allow things to move in the direction dictated by the promoter with the largest megaphone, to prattle on mindlessly like a child, to ignore what has gone before, to ignore what theory there is, and to prefer the clustering of like-minded people, even if this is nothing more than a coterie of idiot enthusiasts.  It is easier to sit on the bandwagon collecting money with all the other simpletons than to go against the flow and think for yourself.

Nothing embodies this more than the widespread use of XML for things for which it is poorly suited, especially data management.  In its early stages there were vigorous arguments against adopting it, but logic and reason are no match for fads backed by large corporations motivated by “innovation” and quarterly results.

In proposing to use XML as the common “language” of security policy, the authors of the XACML specification write the following:

“XML is a natural choice as the basis for the common security-policy language, due to the ease with which its syntax and semantics can be extended to accommodate the unique requirements of this application, and the widespread support that it enjoys from all the main platform and tool vendors.”

This is specious reasoning, if it can be called reasoning at all.  Can anyone show me a text-based format that cannot be extended to accommodate the requirements of an application?  In the second half of that sentence they note that XML has widespread “support.”  Socialism had widespread support among the intelligentsia, but it doesn’t work well either.  To exchange data we only need to agree on what to pass and what it means.  All real meaning exists in the hemispheres of the brain.  Since logic ignores context, the meaning must be documented so we are not left to speculate.  If that view, that concept, is missing, we are stuck with speculation.  Anyone who has tried reading uncommented code, or peered into a database without knowing the conceptual model, knows this well.  Nearly all the early claims of XML’s benefits (especially about meaning and tags) have been abandoned, and we are left with these two: everybody does it, and I can make it do anything.
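
To see how little work the “extensible syntax” argument does, here is a toy rule (invented for illustration, not actual XACML) expressed in both XML and JSON and parsed with the Python standard library.  Both formats extend trivially; neither tells a machine what “permit” or “auditor” mean.  That agreement still happens out of band.

```python
# The same invented access rule in XML and JSON; both parse, neither
# carries meaning -- that must be documented and agreed upon separately.
import json
import xml.etree.ElementTree as ET

xml_rule = """<rule effect="permit">
  <subject role="auditor"/>
  <resource id="ledger"/>
  <action name="read"/>
</rule>"""

json_rule = ('{"effect": "permit", "subject": {"role": "auditor"}, '
             '"resource": {"id": "ledger"}, "action": {"name": "read"}}')

elem = ET.fromstring(xml_rule)
doc = json.loads(json_rule)
assert elem.get("effect") == doc["effect"] == "permit"
# Nothing in either parse tree tells a machine what "permit" means.
```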

A while back a question was posted on a LinkedIn group titled “Is Role Based Access Control a dead end and Rule Based Access the future?”  Inevitably, several people said the answer to the problem is XACML.  I don’t think so.  What drives the problems with role design versus using rules are really fundamental philosophical questions of categorization and classification (distinctly different concepts) and how we manage complexity; a sketch of the distinction follows.  To say the solution is adopting yet another complex XML standard is laughable; it shows that one does not understand the fundamental nature of the problem.  Maybe XML is the way to go, but I doubt there was much reflective thinking before they started writing.  My best guess is that XACML will be as widely adopted as SPML and will most likely spawn efforts like this for the same reasons.
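
For what it’s worth, the distinction underlying the question fits in a few lines of Python; everything here (names, attributes, the invoice rule) is hypothetical.  A role-based check classifies the user into a category, and the permission follows from membership; a rule-based check evaluates attributes of the request itself.

```python
# Role-based: permission follows from category membership alone.
ROLE_PERMISSIONS = {
    "ap_clerk": {"invoice:create"},
    "ap_manager": {"invoice:create", "invoice:approve"},
}

def rbac_allows(user_roles: set[str], permission: str) -> bool:
    """Classify the user: membership in a role grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

def rule_allows(user: dict, action: str, invoice: dict) -> bool:
    """Evaluate the request: the decision depends on its attributes."""
    if action == "invoice:approve":
        return (user["department"] == "AP"
                and invoice["amount"] <= user["approval_limit"]
                and invoice["created_by"] != user["id"])  # segregation of duties
    return False

print(rbac_allows({"ap_manager"}, "invoice:approve"))       # True, regardless of context
print(rule_allows({"id": 7, "department": "AP", "approval_limit": 5000},
                  "invoice:approve",
                  {"amount": 12000, "created_by": 3}))      # False: over the limit
```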

SailPoint Overview Part II

SailPoint began their product with a governance model instead of starting with provisioning.  I think this gives the product a distinct advantage.  Rather than focusing entirely on a select group of technical employees and making their lives easier, they focused on the business first, and now they are bringing in provisioning elements.  It is much harder to bolt re-certification and role analysis onto an existing provisioning product than it is to add provisioning to a governance product.  I also like their approach to role management, which is both top-down and bottom-up.  As Gregory pointed out in this post, doing only bottom-up role mining is a mistake, since many people have access they never use; a rough sketch of that pitfall follows.  In the next couple of blog posts I will highlight some specific features of the product.
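
A minimal sketch of the pitfall, with invented data: mine roles only from what the activity logs show being used, and everything granted but dormant drops out of the model, which is precisely the access a certification needs to question.

```python
# Granted entitlements vs. observed usage (all data hypothetical).
granted = {"alice": {"gl:post", "gl:close", "vendor:create"},
           "bob":   {"gl:post"}}
used    = {"alice": {"gl:post"},   # what bottom-up mining would see in the logs
           "bob":   {"gl:post"}}

for user, entitlements in granted.items():
    dormant = entitlements - used.get(user, set())
    if dormant:
        # Bottom-up mining alone would never surface these for review.
        print(f"{user}: granted but unused {sorted(dormant)} -- certify or revoke")
```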

SailPoint IdentityIQ Quick Overview

I had the opportunity, courtesy of CTI, to train on the SailPoint IdentityIQ product.  I was impressed with its thoroughness.  It is not narrowly focused but offers the means of nailing down your application identity certifications while ensuring segregation of duties and least privilege.  The product covers the enterprise and is not just an IT ecosystem like SAP GRC.  If I have a complaint, it is that it relies on too much XML when setting up an application.  XML is nearly useless with its insistence that data be modeled as 1:N; the brain may love hierarchies, but XML, with all its tags and so little data, makes hierarchies a headache to work with.  Their developers seem to sense this too, because in some areas around web services they have moved to JSON rather than SOAP, an approach I have had my fill of.  If enterprise governance is a requirement for you, and you find yourself failing audits, be sure to check out SailPoint.  <shameless plug>Then call Matt Pollicove (who blogs here from time to time) at CTI when you need help implementing.</shameless plug>
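
As a small illustration of the 1:N complaint (data invented, not SailPoint’s actual schema): user-to-entitlement assignment is many-to-many, but an XML tree forces one side of the relation to be duplicated under the other, while a flat set of pairs states the relation directly.

```python
# A many-to-many relation forced into a hierarchy duplicates data.
xml_view = """<users>
  <user name="alice"><role>ap_clerk</role><role>gl_viewer</role></user>
  <user name="bob"><role>gl_viewer</role></user>
</users>"""

pairs = {("alice", "ap_clerk"), ("alice", "gl_viewer"), ("bob", "gl_viewer")}

print(xml_view.count("gl_viewer"))  # 2 -- the shared role repeats per user
print(len(pairs))                   # 3 -- the M:N relation stated once, flatly
```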

‘Smooth Sailing’ Fallacy in ERP Security

I just read an essay with interesting observations by Richard Rumelt in McKinsey Quarterly.  He says: “There’s been a dramatic failure in management governance. And so our basic doctrines of how we manage things are in question and need revision.

“At the heart of this failure is what I call the ‘smooth sailing’ fallacy. Back in the 1930s, the Graf Zeppelin and the Hindenburg were the largest aircraft that had ever flown. The Hindenburg was as big as the Titanic. Together these vehicles had made 620-odd successful flights when one evening the Hindenburg suddenly burst into flames and fell to the ground in New Jersey. That was May 1937.

“Years ago, I had the chance to chat with a guy who had actually flown over Europe in the Hindenburg. And he had this wistful memory that it was a wonderful ride.  He said, ‘It seemed so safe.  It was smooth, not like the bumpy rides you get in airplanes today.’ Well, the ride in the Hindenburg was smooth, until it exploded. And the risk the passengers took wasn’t related to the bumps in the ride or to its smoothness. If you had a modern econometrician on board, no matter how hard he studied those bumps and wiggles in the ride, he wouldn’t have been able to predict the disaster. The fallacy is the idea that you can predict disaster risk by looking at the bumps and wiggles in current results.

“The history of bumps and wiggles—and of GDP and prices—didn’t predict economic disaster. When people talk about Six Sigma events or tail risk or Black Swans, they’re showing that they don’t really get it. What happened to the Hindenburg that night was not a surprisingly large bump. It was a design flaw.

“To see the disaster coming, you had to have looked beyond the data about flight bumpiness—beyond the professionalism of the staff—and really think, ‘Does it make any sense to have people riding in a gondola, strapped to a giant sack of flammable hydrogen gas?’ There’s just not a data series that lets you think about that. But it’s not that hard to think about.”

If we apply this logic to SAP security, I find many SAP customers suffer from the smooth-sailing fallacy: “Well, we implemented SAP ten years back, IBM is managing the support, we have no problems! Our security incidents are insignificant.”  “Oh, we have installed SAP GRC solutions, but no one uses them!”

This smooth-sailing fallacy in IS security arises when we mistake a measure for reality.  Competent management always looks deeper than the numbers, deeper than the current measures.  Incompetent management just focuses on metrics based on past reality, and that is how we get into these troubles.  We really have to rethink the design of ERP and SAP security and how we measure it.  The lesson is fundamental: you cannot manage by just looking at the results meter.  You have to take a big-picture view of security, revisiting security protocols and metrics as conditions change.  A security policy that has sat untouched for five years is useless; you effectively have no security in place.

CEOs and CFOs will use the smooth-sailing argument: “Hey! We never had a security issue in the past two years, so why now?”

You have to show them what Rumelt said about the Hindenburg: a single design flaw can blow the whole thing out of the sky.

Steve Ballmer on Efficiency & Decision Making

There is an interview with Steve Ballmer in the International Herald Tribune in which he makes a statement in response to a question about what it’s like to be in a meeting with him, to wit:

I’ve changed that, really, in the last couple years. The mode of Microsoft meetings used to be: You come with something we haven’t seen in a slide deck or presentation. You deliver the presentation. You probably take what I will call ‘‘the long and winding road.’’ You take the listener through your path of discovery and exploration, and you arrive at a conclusion.
That’s kind of the way I used to like to do it, and the way Bill [Gates] used to kind of like to do it. And it seemed like the best way to do it, because if you went to the conclusion first, you’d get: ‘‘What about this? Have you thought about this?’’ So people naturally tried to tell you all the things that supported the decision, and then tell you the decision.
I decided that’s not what I want to do anymore. I don’t think it’s productive. I don’t think it’s efficient. I get impatient.
So most meetings nowadays, you send me the materials and I read them in advance.
And I can come in and say: ‘‘I’ve got the following four questions. Please don’t present the deck.’’ That lets us go, whether they’ve organized it that way or not, to the recommendation. And if I have questions about the long and winding road and the data and the supporting evidence, I can ask them. But it gives us greater focus.

There is a lot of missing information that I wish the interviewer had followed up on, but let’s assume a charitable reading.

What Mr. Ballmer says does not really tell us anything about efficiency, but it speaks volumes about his mind.  He states quite clearly that he is impatient and does not like the “long and winding road.”  Most likely this is because he does not learn well or efficiently sitting through a presentation.  It could also be that he is intellectually lazy, but this seems unlikely.  If he really is intellectually lazy, then Microsoft will most likely perform poorly under his leadership.

Note that he recognizes that Bill Gates took the “long and winding road.”  That should tell you something, and if we go back in history and look at great leaders, they did too: Andy Grove, Andrew Carnegie, General George Patton, and General Douglas MacArthur, to mention a few.  The ability to sit and listen with attention to detail does not mean analysis paralysis; it means understanding the situation properly, its context, and the interrelation of its elements.  It means avoiding a specious understanding.  Perhaps he is doing this, but it is not clear.

He states that he gets the information in advance; let us hope he did not mean in PowerPoint slides.  There are serious limitations to the kinds of information that can be put into slides.  The overwhelming majority of information in a slide deck is distilled and frequently lacking context; it must be communicated and explained verbally.  You wouldn’t read the table of contents of a book and draw conclusions, yet if you are reading PowerPoint, that is exactly what you are doing.  Its focus is on the presenter, not on the audience and not on the content.  There is a “sales pitch” aspect to PowerPoint that destroys neutral, fact-based information.

Now the downside of this interview and its lack of clarity: right now, somewhere in America, a mediocre manager who prides himself on efficiency is instructing his subordinates to send him a slide deck in advance, and he is drawing up his four questions, because Ballmer reads the deck in advance and asks four questions.

Finally, we will never really know whether it is more efficient.  If he had recorded all of his decisions under the “long and winding road” method and then recorded all of his decisions under the “efficient” method, we might have learned what works best for Ballmer.  We will certainly never learn what works best for everyone else, unless they start recording their own decisions.

note: updated for typo