Business risk is usually an exercise in corporate power. But your own business assumptions are about to become a risk in their own right.

A few months ago, I wrote about risk – suggesting that the ‘discipline’ of risk management tended to focus on risks (a) which were understood and (b) for which probabilities could be estimated, and that this led to far too narrow a view of risk. In particular, it meant that companies were usually very poor at assessing their blindspots. This second post has taken longer to write than I expected, but in this one I’m going to look at the factors which mean that companies tend to remain blinded by their blindspots.

In the first post, I used a couple of 2×2 models: one by Gary Kass, adapted from the work of Andy Stirling, which assessed risk by ‘knowledge of outcome’ and ‘knowledge of probability’, and a second from Sohail Inayatullah, which explored what we know and what we don’t know. I’ve put these in here as thumbnails by way of reference (click on them to make them larger).

But this leaves an important problem which is insufficiently addressed in organisational research or futures work: an organisation may become aware of an “unknown unknown” (thereby moving it to a “known” space) but precisely because such an insight is alien to their culture and worldview, they are unable to act on it. Either it ‘does not compute’, or it is understood well enough but is inconvenient. Organisations are not rational decision-makers, any more than individuals are.

And frankly, the evidence is depressing. It’s worth spelling out some examples. There’s a compelling account in Flirting With Disaster on the pre-launch discussion about the O-rings whose failure doomed the Challenger. This was a “known unknown”, an engineering problem whose outcomes were known but whose probability was disputed. It was a “complicated” problem rather than a “complex” one, to borrow some language from Dave Snowden’s Cynefin model, which is a useful tool to think about these issues (although this is a discussion for another day).

Recipes for failure

There was political pressure for the launch to proceed, largely from within NASA, which was worried that it was losing the battle to fund the space programme. Internal voices argued strongly that the O-ring seals would crack or malfunction at low temperatures, but they were, effectively, bullied into silence.

In the still incomplete story of the Deepwater Horizon disaster, we have already seen similar claims. BP, it is alleged, put pressure on its contractors to take some shortcuts because drilling was behind schedule and every day of delay was costing it another $750,000. The combination of financial pressures and technical over-confidence is a sure recipe for failure. We saw the same combination at work in the banking crisis, with people who pointed out risks being marginalised, victimised, or fired. As James Kwak wrote,

The problem is that there is a systematic bias within these companies against certain assessments and in favor of others. That is, the guy who shouts, “Danger! Danger!” will be ignored (or fired), and the guy who says, “Everything’s fine, the model says disaster can strike only once every hundred million years” will get the promotion — because the people in charge make more money listening to the latter guy. This is why banks don’t accidentally hold too much capital. It’s why oil companies don’t accidentally take too many safety precautions. The mistakes only go one way.

Managing risk by increasing resilience

In a co-written article (downloads pdf) in the Harvard Business Review, the ‘Black Swan’ author Nassim Taleb argues that organisations need to stop trying to assess the probabilities of risks and disastrous events, and instead reduce their vulnerability to them: in other words, to increase their resilience. As the authors wrote, “Risk management, we believe, should be about lessening the impact of events we don’t understand”.

This is also the thesis of a recent paper by Bernard Lietaer, Robert Ulanowicz and others on the future of the financial system. Drawing on work on ecological systems, it argues that stable systems sit at a point of balance between resilience and efficiency. Too much resilience, and you stagnate; too much efficiency, and the system becomes brittle. It’s also worth noting from the article that the point of balance is twice as far from the ‘efficiency’ end as it is from the ‘resilience’ end; most modern corporations are the other way around. And, they argue, there’s a relatively small ‘window of viability’ either side of the point of balance.

But this requires systemic change, because organisations will only behave like this if they are incentivised to do so.  Some people argue that this is down to the Board. But the recurring sound of the large business in the early 21st century is still Chuck Prince’s notorious remark to the Financial Times, “As long as the music is playing, you’ve got to get up and dance.” And don’t even notice that you’re on the edge of a cliff.

A ‘Black Swan’ isn’t an alibi for stupidity

Certainly during the long globalising boom, and so far since the financial collapse as well, boards have mostly been engaged in supporting short-term maximising behaviour and a certain amount of agent-type rent-seeking (for example the cosy “remuneration clubs” that are a feature of modern corporate life). And on the subject of notorious remarks, BP’s Tony Hayward’s description of the Deepwater Horizon oil disaster as a ‘Black Swan’ was beyond irony. But there is a deeper point here: Taleb’s Black Swan idea has been used as cover for a lack of foresight. As Alex Pang once noted, “The idea of a Black Swan shouldn’t be an alibi for stupidity”. And as Pang observes, “The ‘nobody could have predicted’ defense … insulates leaders and experts from accountability for their failure”.

And the nature of calamitous accidents allows for such myopia. As Marc Gerstein writes of the ‘friendly fire’ which destroyed two Black Hawk helicopters over northern Iraq:

Latent conditions in technology and organisation can exist for a significant time, waiting for circumstances to line up that create a calamitous accident. … A systemic perspective is essential to shed light on the complex interplay of actions and how the loss might have been prevented.

Pricing blindspots

This is the significance of the $20 billion Gulf compensation scheme, and the subsequent government lawsuit. Unlike previous legislation, which capped liabilities arising from pollution, a plausible price has been attached to the “externalities” arising from BP’s exploration disaster. It’s also been made clear that the fund doesn’t represent a ceiling to potential liabilities. For the first time, blindspots are seriously expensive.

A $20 billion price tag is large enough for boards and managers to worry about. Some of the discussion about the financial sector, though nowhere near enough, has been about how to ensure that financial institutions can cover the costs of their failures, rather than leaving the state to step in. Capitalism has to have disincentives as well as incentives; otherwise price signals are incomplete, information is one-sided, and markets are not effective at allocating resources. So what does a Compensation Fund targeted at Wall Street or the City of London look like?

As systems become more complex, both the risks and the consequences of failure become more acute. The BP Fund creates both a precedent and a principle, and it is this: corporate risk assessments which can’t tell you what your current business assumptions ignore or preclude, and what you’d have to do to effect a recovery, are about to become a business risk in their own right.