The Failure Paradox

Human beings do not like to fail, but we learn more from failure than we learn from success.

Why do we need to fail?

We want to fail so that we can move from robustness to resilience. What that means is that we can detect failure early, recover from it quickly, and exploit our learning. Early Detection, Fast Recovery, Early Exploitation.

Yes, but why do we need to fail?

Organisations are often intolerant of failure. In fact, I’ve worked at a large organisation where I was actively banned from using the word “fail”. In those environments there is no distinction between things which are “safe to fail” and those which are “unsafe to fail”. The problem is that organisations get so worried about failing on the big, unsafe stuff that they become resistant to failure even on the small, safe to fail stuff. Add that to the failure paradox and there is a lot of incentive not to fail.

Yes, but why do we need to fail?

If everything must succeed every time, then we do two things:

  • we lower the bar on what success is – we end up aiming to “avoid failure” instead of aiming for excellence;

  • we stop taking risks – the associated cost of failure means we play it safe every time;

    • we look to find a way to avoid failure completely.

Talking about success

But surely we can shout about our success to encourage other people to work towards success too. That can’t harm anything, can it? If we only talk about success then:

  • we discourage people from taking risks by working on ideas which may fail;

  • we would be encouraged by the system to hide things which are failing;

    • individuals don’t want to be associated with failure;

  • we perpetuate the myth that success can be guaranteed if we only have the right people (experts) working in the right way (process);

    • extrinsic factors often mean success or failure is not in our own gift, so we need to be lucky too;

  • we sound less credible to our audience;

  • failure becomes a corporate disease that we need to distance ourselves from.

If we become risk averse then:

  • we miss the opportunity to back the “1000:1 outsider” idea which might just make the biggest difference to our users;

  • we lower the bar so that we always avoid failure;

  • we aim to pass audits, gates, assessments and other review processes, rather than aiming for excellence and using those checks as a learning opportunity to get there.

If we really want to succeed on everything, then we need to change our goals. We can only succeed in large, meaningful ways for our users if we are prepared to take frequent small risks and learn from them, becoming more and more successful by building on what we learned by failing.

Safe to fail

I’m a big fan of the Cynefin sense-making framework for complexity, which talks about Probe-Sense-Respond as the approach to take in the complex domain. An easy way to know if you’re in the complex domain is to ask “who has done this before?” If the answer is “no-one,” or “no-one here,” then you’re in the complex zone.

The problem is that analysis techniques cannot be used successfully to solve complex problems; we need to actually do some work on the problem to learn about it, then we can analyse the outcomes. We should design a “probe” – a safe to fail experiment – run it, and see what happens. If it works well to reduce the problem, amplify it; if it doesn’t work, dampen it down. We should also plan what the amplification and dampening methods are before we commit to the experiment.
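The probe loop described above can be sketched in code. This is a minimal illustrative sketch only – the `Probe` type, the `run`/`amplify`/`dampen` names, and the numeric success threshold are all my own hypothetical choices, not part of Cynefin itself; the point is simply that the amplify and dampen responses are wired up before the experiment runs.

```python
# A minimal sketch of a Probe-Sense-Respond loop (hypothetical names, not a
# real library). The amplify and dampen actions are planned up front, before
# we commit to running the safe to fail experiment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Probe:
    name: str
    run: Callable[[], float]      # runs the experiment, returns observed effect
    amplify: Callable[[], None]   # planned in advance: scale up what works
    dampen: Callable[[], None]    # planned in advance: wind down what doesn't

def probe_sense_respond(probe: Probe, success_threshold: float) -> bool:
    outcome = probe.run()                     # Probe: run the experiment
    succeeded = outcome >= success_threshold  # Sense: interpret the outcome
    if succeeded:
        probe.amplify()                       # Respond: amplify what works...
    else:
        probe.dampen()                        # ...dampen what doesn't
    return succeeded
```

Either branch is a fine result: a dampened probe is money spent to buy knowledge, exactly as the next paragraph describes.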

If it’s safe to fail, then we can happily take a risk on it, and if it fails, then we have spent some money to buy some knowledge, and now we understand the problem better and can run another probe or even move into analysis if that is appropriate.

If we make our safe to fail experiments small, we can do lots of them, and many of them will fail. Let me remind you of the central paradox of failure. Human beings do not like to fail, but we learn more from failure than from success.

So this is uncomfortable for people: how can we encourage them to try things which may fail, or – even worse – will probably fail? Our default position is to punish failure, either directly with performance reviews or indirectly by promoting success. If we want to get those big gains then we need to make it okay to fail. In fact, if we want to get the big gains (Early Detection, Fast Recovery, Early Exploitation) we need to actively encourage and promote failure.

Where should we fail?

Failure everywhere for no purpose would be a dumb approach. If we apply one of the late Stephen R. Covey’s 7 Habits – “begin with the end in mind” – to the failure paradox, then we need to think of it backwards: “We need to buy some more knowledge in this area, so let’s find a way to make it okay to go through the human discomfort in order to buy that knowledge.”

Many examples are out there. Spotify founder Daniel Ek says: “We aim to make mistakes faster than anyone else.” He doesn’t mean on everything, only on the high-complexity, safe to fail stuff that leads to the learning they need in order to move quickly in a highly competitive, highly complex market.

So let’s go out there and celebrate failure – as long as we do it on purpose. Let’s not get fixated on success and make people scared to fail. Let’s not fear failure; let’s plan it so we can do it properly. Probe-Sense-Respond…