Overlooked Risk in Middle Tier M&A?

If you have ever been part of a big public-company merger, then most likely the deal included an audit and review of the IT assets, principally those that provide the accounting and reporting. Post-merger, and before the two companies are interconnected, there is also a review of the security policies in order to determine risks and gaps that could lead to compromise. If there is a large difference in policy, the interconnection can be delayed until the security differences are corrected and verified. This behavior is prudent. A data compromise can damage a company’s carefully guarded reputation and lead to significant losses. Beyond lost sales, it can also drive the stock price down.

Private equity firms that buy and sell companies in the middle tier are strongly focused on the financial health of the company they are purchasing. Certainly financial health indicates a well-run company. Hours are spent structuring the deal and ensuring they know what they are acquiring. No one wants to be defrauded. From the seller’s perspective, they want a high asking price and zero encumbrances.

From what I have seen, both the buy side and the sell side are paying little attention to either information-security or physical-security risks. This, even though middle-tier companies tend to have fewer resources and are more likely to have major security gaps, whether within their facilities or their network infrastructure. Consider a scenario where you are either buying or selling a company that has been compromised, and the hackers are quietly lying in wait, collecting additional access credentials and elevating privileges. Over time they will be able to exfiltrate all of its intellectual property. Where the hacking is being done by a state actor, that property will be shared with the actor’s domestic competitors. If this is a platform company that has been built up over several years, this amounts to a staggering loss of value. The buyer is accumulating exposure in the same way someone who sells naked options without holding the underlying asset accumulates exposure. The same can be said for the supply chain, where downstream providers of services connected into the network increase the size and diversity of the threat landscape. A compromise within this system, if not properly secured, could bring down years of work and destroy any equity built. Any time one is purchasing or selling a company, he should take security exposure seriously and hire the teams necessary to do a thorough review.

I frequently hear people say that a business-ending compromise is a rare event. How rare or improbable an event is matters less than the consequences of it occurring. You can’t zero out risks, of course, but you should follow what works. If one is not already doing this, I recommend the list below. It applies to domestic acquisitions within first-world countries. Cross-border buys add further challenges (e.g., FCPA exposure), but the list still applies at the macro level.

  1. Conduct a thorough review and harmonization of security policies.
  2. Put reciprocal audit agreements with third-party suppliers in place.
  3. Conduct a thorough review of security controls.
  4. Conduct a network vulnerability assessment covering both internal networks and boundaries.
  5. Perform a penetration test (physical and digital).
  6. Examine patch management processes.
  7. Review identity management practices and access control.
  8. Audit the code of custom mission-critical applications.
  9. Build an up-to-date threat model.
  10. Conduct a physical security audit.
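A vulnerability assessment usually begins with service discovery: finding out which hosts expose which services before anything is judged. Below is a minimal TCP connect-scan sketch for illustration only; a real assessment would use dedicated tooling such as Nmap or a commercial scanner, the host and port choices here are assumptions, and you should only ever scan systems you are authorized to test.

```python
# Minimal TCP connect-scan sketch (illustrative only; real assessments
# use dedicated scanners).  Host and ports below are assumptions.
import socket
from contextlib import closing

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Sweep a few common service ports on a host you are authorized to scan.
    print(open_ports("127.0.0.1", [22, 80, 443, 3389]))
```

An acquirer’s assessment team would run something far more thorough across both internal networks and boundaries, but the principle is the same: enumerate exposure before judging it.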

Objective & Subjective Risk

Note: I started this response in 2012, abandoned it, and decided today to finish it.

I wrote a post in the summer of 2012 on whether risk can be measured objectively. It was not particularly rigorous; that is the byproduct of writing hastily. It was tweeted by Matt Flynn, who has his own blog on identity and access management. Following Matt’s tweet it received three separate tweets from Alex Hutton, critical of the substance. I would like to note that the broader point of the article was lost in my haste, to wit: panels of experts are subject to the same errors and weaknesses in reasoning as the rest of us, so large top-down prescriptions from experts are dangerous. When they are issued by your local government, they impact few people. When issued at a national level, they impact millions and are difficult to reverse.

These are his Tweets verbatim.

Twitter works poorly for substantive criticism, so if I have mis-categorized anything I am open to correction. At the time I responded on Twitter, but I wanted to do so more thoroughly.

Let me first note that Mr. Hutton is suspicious of the categories I used (objective/subjective) but does not hesitate to specify his own (uncertainty/intersubjectivity), without saying how he arrived at them. Categories follow from some level of reasoning; the purpose they serve is to help us make sense of reality, because experience taught us long ago that like kinds of things behave in similar fashion. Equally fundamental, we need to make sure that the words we use in a discussion have been defined to the satisfaction of both parties. In this respect, the post was sloppy. When I used the terms objective and subjective, I was thinking in terms of simple dictionary definitions: subjective meaning influenced by personal feelings, tastes, opinions, etc., and objective meaning not so influenced. What I did not mean is that when something is subjective it is devoid of reason, or that something objective is purely fact-based and devoid entirely of emotion. I am not dealing in ideal forms.

Let’s take the word “risk” next. When I say risk, I mean an exposure to danger or something undesirable that we can to a certain degree anticipate but not necessarily precisely predict or measure. When I use the term uncertainty, on the other hand, I mean something which cannot be measured.

With that out of the way, let us take his first prescription: that objective/subjective is the wrong fight. I don’t really consider them to be a “fight.” The broader point is that in the realm of civilization, so-called experts are subject to the same weaknesses as everyone else, so you get unwelcome corporate behavior like an “availability cascade,” whose impact at national scale produces more misery than the misery it attempted to avoid.

Western civilization has found it useful to examine things in objective and subjective terms for two millennia, but apparently it has been wrong. It should have been using uncertainty and intersubjectivity. If the reader is not familiar with intersubjectivity, he should start with Edmund Husserl. The concept has influenced philosophy, anthropology, sociology, psychology, and so on. Unfortunately, it comes with its own set of baggage. If we are going to use intersubjectivity, whose conception are we to use? Husserl’s? Heidegger’s? Or that of any of the other academics who have weighed in on it since the 1950s? I would add that many of these conceptions conflict with each other, so Mr. Hutton’s prescription is incoherent as stated and in need of clarification. One last point on intersubjectivity: depending on whom you ask, it is either dangerously close to solipsism or it is solipsism. I think it is the latter, and hence worthless.

In his second tweet he states that I am positing neither an actuarial nor an epistemological conundrum but rather a semantic one. This is incorrect; if the problem were semantic, one would need only to alter, define, or find a better statement of the problem and the adverse effect would go away. That is not the case. My point is that the limits of our knowledge, combined with simple errors, undue outside influence, prior experiences, emotional amplification, etc., demonstrate that risk (again defined as an exposure to danger or something undesirable) is subjective in the social domain. That does not mean that all such effort is useless, or that calling something a risk is “just your opinion.”

His final comment is: “I’d stop using the word ‘risk’ & be particular about describing the phenomenon as meta data around risk determinants.” I know he does not like the word risk used indiscriminately, because he mentioned it in a talk once, stating that the Japanese did not have a word for it in their language until they were influenced by the West. I do see one problem with this prescription, namely: are there any risk determinants in the kind of risk under consideration to begin with? Are there not cases of disaster which were obvious only in retrospect, where discussion of determinants was useless? What are the risk determinants for war? The use of the term implies a factor that decides the outcome. We do not always know that.

Are you a mechanic?

Frequently, technical contractors get calls from recruiters, and there are only two things they (and by extension their clients) are concerned with: your technical skill and your hourly rate. Buried within this two-dimensional assessment are the following assumptions:

  • You should be grateful enormous company X is offering any work in this miserable economy.
  • Get ready to adjust your rate down to work with us.
  • You have to prove your technical skill despite what is on your resume, because you just might be a liar.

The trouble with this standard script is that technical ability alone is insufficient for success in either IAM or GRC. Most IAM people are not mechanics turning digital wrenches. They are more than bit flippers. IAM requires superior social skills, the so-called “soft skills,” in order to be successful. Every job I have worked in the last ten years where they said “we just need a technical person who can get it done quickly” has been a nightmare. When I hear that now, I say no, thank you.

One of the problems is that recruiters and corporate managers tend to get ahead of themselves; they just assume that if you can prove you have the skills and the money is right, you will jump at the job. However, a technical interview should not be conducted until you have satisfied yourself that you can work with them. Business is more than just tech, and they might be liars. Just because the company is public or well known doesn’t mean it has integrity.

  • Do they pay their bills on time? If you’re an independent, you are not a bank. Never finance a billion-dollar company, or any company with access to capital.
  • Are they pleasant and professional to work with? You may need to ask around first, or ask to talk to a contractor working there now.
  • Do they have the right budget, or is this under-budgeted, slam-it-in work? If it is, you will be the scapegoat.
  • Does the project have management support, or are they trying to “fly under the radar”? Another time bomb waiting to go off.
  • Do they do what they say they are going to do? Missed meetings, multiple reschedules, consistent lateness to interviews, and slow follow-up are signs of unreliability, of the fact that they don’t think much of you, or of the petty exercise of power.

The foregoing applies if you are not desperate for work, and desperation is not being a little nervous about your bills this month. Desperation is facing foreclosure or repossession.

IAM Business Cases One Step Back

Over the years I have repeatedly heard that security people in general need to produce better business cases and better analysis (such as ROI) if they wish to increase their budgets. I have tried to do just that, with minimal results. Recently, I changed my approach, and I now believe that the single most important skill security people can learn is how to pitch their ideas. Getting past this first step is critical. It’s the domain of social dynamics and the perception management of value. What is business really about, after all?

As IAM practitioners we live in the domain of first-order predicate logic, of complex systems and mentally taxing analysis. When you become an expert in any field, things that were once difficult to understand become second nature. So when you go before those who control the budget, those who do not understand the vagaries of identity and access management as a discipline, and you come at them with cognitively fatiguing analytical business cases, it is going to be a lot easier for them to say no (legal compulsion notwithstanding) than to go through the effort of understanding. Now you may say to yourself that it is the manager’s responsibility to understand these things and make rational decisions. That is true without reservation, but we all have limits. If it is 4:00 in the afternoon and you are mentally tired, is it easier to read something about a field you understand or to start a course on statistical physics? The question is not what is more interesting but what is easier (mentally). We are all cognitive energy conservationists, so to speak.

Before I proceed any further, let me be explicit about the assumptions I am making.

  • It is cognitively less taxing to make a decision based on emotion and justify it after the fact with analytical models.
  • People have a cognitive limit to what they will pay attention to.
  • People won’t pay attention to things they find boring.
  • Highly technical discussions and complex topics are boring outside of a fairly small group.
  • This group rarely controls the budget.
  • Even if they do, they may be mentally taxed when you present your business case and find it easier to check out and say no.
  • Even if they find something completely boring, they might pay attention if the consequences of failing to do so are severe enough.

Since the foregoing is qualitative, it will never be proven empirically, and if you think any of the assumptions are false, feel free to comment. If all of the foregoing are true or mostly true, then it stands to reason that before we ever present a business case, we need to persuade first. And this is where I have consistently fallen short.

Back in February, I began to work with a boutique investment bank focused on the middle market, and as part of getting a better understanding of that business I began looking into their formal processes for winning business and building pitch decks. In the process of doing the research, I stumbled upon Oren Klaff’s book Pitch Anything. It was his book that made me realize that business cases are merely the due-diligence portion of the idea you are presenting, and that if you can’t hold the attention of the room and get them hooked, you will never get to that point. Since I made the change, my success rate has greatly increased. Before, I was getting blown out 8 out of 10 times. I have cut that in half, and in some of those cases I took a pass because I didn’t want to do business with the client.

Recycling the Same Security Advice

There is little innovation in the security industry, and the same advice shows up ad nauseam. This article at Dark Reading, What Every CFO Should Know About Security Breaches, is a recent example.

TYSONS CORNER, VA. — CEOs and CIOs have gotten religion about cybersecurity, but what about those who hold the purse strings? Experts say they need a hard lesson, too.

Three clichés in a row, and we haven’t even cleared the first paragraph. It is followed by the same kinds of advice, directed supposedly at CFOs, who we all know show up in force at security conferences to be lectured to by “experts.”

Let’s look at some of the advice our panelists hand out.  Quoting one of those Pokemon Ponemon Institute estimates with big scary numbers,

“What that says is that it pays to make the right [security] decisions from a financial perspective.”

Thank you, genius; there is almost no CFO on the planet who does not endeavor to do that. Next up, this joke:

“In some ways, security is IT’s revenge on the finance department — they say, ‘You don’t understand what we do, so we’ll spend your money however we like,” joked Kevin Mandia, CEO of security forensics firm Mandiant.

Perfect, that should get a larger budget for next year. Then this:

Michelle Schafer, vice president of the security practice at public relations firm Merritt Group, said companies should take the time to develop a breach response program — and rehearse various scenarios — before a compromise occurs.

Glad they brought you along, Michelle. No CFO has ever heard of an incident response team. One more precious nugget unearthed from the finest minds:

“It’s so easy to go off spending money on security without knowing what you’re doing,” Moodispaw said. “We’ve seen companies look at firewalls and say, ‘Hey, if one is good, then we should buy five or six.’ You need to counsel your organizations to think twice about buying the latest, hottest things and focus on what works.”

What works has been known for a long time, too. If they are buying crap they don’t need in order to entertain the infosec team, fire the manager. The odd thing about this article is the total absence of any quotes from the “CFOs” who were the target of this panel. I suppose that is because if you interviewed them, you would find an accounts-payable analyst who went to get out of the office instead of a CFO. Years ago, out of curiosity, I sat in the audience of a panel discussion aimed at CEOs. Only two hands went up of the twenty or so people there when they asked who was a CEO. One left before the panel was over, talking on his (then) status-signaling Blackberry as he walked out of the room.

There appear to be only two kinds of security articles: the recycled advice like this one and, of course, the “they just don’t get it” story. We have reached the point where we should be writing better software, so a casual user can click on any link or open any file without fear of surreptitious installation of malware. We haven’t; we won’t, because time to market and return on investment will always predominate.

“Rethinking Security Architecture” or Not

Over at Dark Reading they have a story, just in time for the end of the year, titled “Rethinking IT Security Architecture: Experts Question Wisdom Of Current ‘Layered’ Cyberdefense Strategies.” I didn’t link to it, so you will know that it has nothing to say. Instead I will just quote from the article to give you an idea of the level of thinking going on.

I really think the security industry is completely saturated at this point. I am not saying there is no talent in the industry; it’s just that people need to differentiate themselves from everyone else’s offerings, and that leads to false observations from small sample sets, the coining of new phrases for old concepts, and, of course, fads. It leads to people saying things like the following and pretending it’s profound or original.

 “The need to develop a robust security architecture framework has never been greater.”

However, 63 percent of organizations have no such framework in place, the study says. “For years, companies have been approaching security as a technical problem, usually by buying products to solve specific problems,” says Jose Granado, principal and practice leader for IT security services at Ernst & Young and one of the authors of the new report. “There hasn’t been much thought put to how those technologies will work together, or to the people and process sides of the equation.”

I have worked in information security for more than twenty years, both inside large organizations and consulting to them. I can’t remember a single place whose security team approached “security as a technical problem,” and this goes back to before companies ever plugged into the internet. More pearls of wisdom follow:

Vinnie Liu, partner and co-founder of Stach & Liu, a consulting firm that works with large enterprises on security architecture and tests companies’ defense strategies, agrees that enterprises’ historical focus on point solutions has prevented many organizations from developing a broader security strategy.

“The industry has been approaching the cybersecurity problem like the TSA has been approaching the air-security problem,” Liu says. “First the bad guys brought guns on board, so they put in metal detectors. Then somebody put a bomb in his shoe, and now we all have to take our shoes off. Then they found liquid explosives, so now we can’t bring on any liquids. It’s one problem, one solution, with no real thought to the big picture.”

The need for a broad security strategy has been well understood since Sun Tzu, right through von Clausewitz. The reason the TSA is so inept is that it is a government bureaucracy run by arrogant technocrats who are just as interested in increasing their power as they are in your safety. They respond to political pressures and newspaper headlines. The TSA, FEMA, EPA, FCC, FDC, etc.: a murderers’ row of bad decisions and pathetic responses. When they screw up, they ask for a bigger budget. Everyone else loses their job.

Continuing with more pearls of wisdom:

“The problem is that most of these tools are still signature-based, which means you’re taking a known threat and blacklisting it. So what you’re doing is essentially layering one technology with another layer of the same type of technology,” Liu says. “It’s sort of like putting on a coat, and then putting on another coat that covers the exact same parts of your body, and then wondering why you’re still cold.”

Defense in depth means exactly that, “in depth”: covering all areas. So if you have exposure, you either have no management support or you’re incompetent.

Stach & Liu recommends that rather than buying more point technology, organizations should perform a risk assessment that identifies the most sensitive areas of the business, the most likely threats, and a holistic defense strategy — an architecture of technology and processes — designed specifically to protect the business. The risk assessment, along with the definition of the business’ specific security requirements, helps identify top priorities and most likely threats, as well as key goals — such as compliance — in order to develop a comprehensive, practical defense strategy.

At this point I am wondering who this article’s intended audience is; perhaps someone who knows little about security, or someone who thinks he does but doesn’t.

“In the old days, you didn’t change your applications all that often, so you could build a positive defense,” Pao says. “You could put email on one [router] port, Internet traffic on one router port, and have a strategy for defending them through the firewall. Today, we have mobile users, changing applications, and we can’t lock down the desktop anymore. The old ‘M&M candy’ architecture with the hard outside and the soft, chewy center no longer works. It has to be a jawbreaker now — hard all the way through.”

The reason for the hard outside and soft center had more to do with limited budgets than it did with design, and I can’t imagine that has changed. Making decisions under scarcity is what we must do in every field. In security, you decide what you can protect with the money you have.

The most important piece of developing a security architecture is mapping (or, often, remapping) the organization’s business needs to its security requirements, experts say. Building a security architecture requires not only the buy-in of upper management, but their direct participation.

Guess what: they are too busy to talk to you. They have lots of other problems competing for their focus. And when you do have their focus, it’s because something went wrong, and it’s rarely their fault. It doesn’t matter whether it is their fault, because they are not taking the blame. There are exceptions, of course, when they directly interfere with security, but unless the press gets wind of it, someone else will perish.

I could select other quotes from the article but what would be the point? The entire article is just standing up and knocking down straw men with no true insight or anything approaching a rethinking of security architecture.



Vindication for Nate Silver?

This article at CNet is risible. Witness this breathless quote at the end: “Score one for the quants, especially the most famous one of them of all, a statistician who is now, unquestionably not a one term celebrity, but a political prediction machine to be taken very, very seriously.”

Given that this was an either-or decision, state-by-state coin tosses with coins weighted for past predilections would have been equally effective in forecasting the final result. The aggregation of statewide poll data will be effective until it no longer is. This is the problem with “big data”: it can’t see the missing information; it suffers from the problem of induction. When shifts occur and the model makes a big miss, the miss will garner press, and pretty soon everyone will forget the model. They will jump to the next model, and the next, ad nauseam. Who was the best economic prognosticator of the late ’90s? I guarantee he or she is not even in the top ten today. Journalists are storytellers, and most of them have no command of history. They appear to live in an eternal present, spinning theories and explanations like spiders building webs. But unlike spiders they do not eat what they spin; that is reserved for the people who believe them.
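The weighted-coin baseline is easy to make concrete. Here is a minimal Monte Carlo sketch; the state names, electoral-vote counts, and prior probabilities are entirely hypothetical assumptions for illustration, not real polling data:

```python
# Weighted-coin-per-state baseline (hypothetical priors, not real data).
import random

# state name -> (electoral votes, assumed probability candidate A carries it)
STATES = {
    "Safe A": (55, 0.95),
    "Lean A": (29, 0.70),
    "Tossup": (18, 0.50),
    "Lean B": (38, 0.30),
    "Safe B": (40, 0.05),
}

def simulate(n_trials: int = 10_000, seed: int = 1) -> float:
    """Fraction of simulated elections in which candidate A wins a majority."""
    rng = random.Random(seed)
    total = sum(ev for ev, _ in STATES.values())
    wins = 0
    for _ in range(n_trials):
        # Flip one weighted coin per state and tally A's electoral votes.
        ev_a = sum(ev for ev, p in STATES.values() if rng.random() < p)
        if ev_a * 2 > total:
            wins += 1
    return wins / n_trials

if __name__ == "__main__":
    print(f"P(candidate A wins) ~ {simulate():.2f}")
```

With lopsided priors, such a baseline looks impressively “right” whenever the future resembles the past, which is exactly the induction problem: the model has no way to see the shift it has never observed.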

RackSpace & Cloud Hosting

While researching hosting providers, the search results returned an article titled “8 Reasons I hate Rackspace with the Fiery Passion of a Thousand Suns.”

This article confirmed my suspicions about RackSpace, based on other things I had heard. The funny part is that Google Ad Services dropped a banner ad for Rackspace into an article that is a rant against the company. I wonder how many click-through conversions they get.

Measuring Risk Objectively?

In order to manage the complexity of life and its accompanying uncertainties, we build models. Models by their very nature are reductions; that is, we throw out a certain amount of information. A historian writing a history of Frankfurt, Germany is not going to concern himself with spots on the floor of the Rathaus in 1888 (unless he is a post-modern reductionist).

Risk is itself an abstraction; it is certainly not real. Being the victim of a specific risk, however, is real enough. A more interesting question is whether risk is objective or subjective. How we measure matters. It may impress to show on a slide that the mail gateway’s anti-virus blocked ten million attempts in the last year, but it matters little when the consequences of a single failure can end the business.
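That asymmetry can be made concrete with a back-of-envelope expected-loss calculation. Every figure below is an assumption chosen for illustration, not data from any real gateway:

```python
# Back-of-envelope expected loss (all figures are illustrative assumptions).
blocked_attempts = 10_000_000   # attempts the gateway stopped last year
p_single_miss = 1e-7            # assumed chance any one attempt slips through
business_ending_loss = 50e6     # assumed cost ($) of one fatal compromise

# Probability that at least one attempt gets through during the year,
# treating attempts as independent:
p_any_miss = 1 - (1 - p_single_miss) ** blocked_attempts
expected_loss = p_any_miss * business_ending_loss

print(f"P(at least one miss) = {p_any_miss:.2f}")
print(f"Expected annual loss ~ ${expected_loss:,.0f}")
```

Under these assumptions, the chance of at least one miss across ten million attempts is already around 1 − 1/e, which is why the blocked-attempt count on the slide says little about the tail that actually matters.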

The U.S. legal scholar Cass Sunstein, who coined the term “libertarian paternalism,” has commented on how small risks can become distorted in the public mind and amplified (normally via mass media) to the point that they influence public policy. He uses the terms “availability cascade” (from the availability bias) and “probability neglect” to describe the process. The exact same thing happens in any organization, where one bad experience leads to ridiculous changes in policy. In the US, think Love Canal or Times Beach.

So when we model a certain risk, the model is often driven by emotion or prejudice, and key elements are included or excluded accordingly. It may take years to identify the errors. I could be wrong, but I do not think that risk can be measured objectively, even with panels of experts, since they are subject to the same problems as the lumpenproletariat they feel superior to: bias, group-think, emotional amplification, poor statistical reasoning, priors, etc. Because of this, I agree with Paul Slovic: risk is subjective.