Archive for the ‘Resilience’ Category

It is a common and intuitively reasonable assumption to believe that if something is explainable then it is predictable; and if it is not explainable then it is not predictable. Unfortunately this beguiling assumption is incorrect. Some things are explainable but not predictable; and others are predictable but not explainable. Do you believe me? Of course not. We are all sceptics when our intuitively obvious assumptions and conclusions are challenged! We want real and rational evidence, not rhetorical exhortation.

OK.  Explainable means that the principles that guide the process are conceptually simple. We can explain the parts in detail and we can explain how they are connected together in detail. Predictable implies that if we know the starting point in detail, and the intervention in detail, then we can predict what the outcome will be – in detail.


Let us consider an example. If we know how much we have in our bank account, and we know how much we intend to spend on that new whizzo computer, then we can predict what will be left in our bank account when the payment has been processed. Yes. This is an explainable and predictable system. It is called a linear system.


Let us consider another example. Say we have six dice, each with the numbers 1 to 6 printed on its faces, and we throw them all at the same time. Can we predict where they will land and what the final sum will be? No. We can say that the sum will be between 6 and 36, but that is all. And after we have thrown the dice we will not be able to explain, in detail, how they came to rest exactly where they did. This is an unpredictable and unexplainable system. It is called a random system.
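To make the contrast concrete, here is a minimal Python sketch (the account figures are invented for the illustration): the balance calculation gives the same answer every time, while repeated throws of six dice give a different layout and total every time – always somewhere between 6 and 36.

```python
import random

# Linear, predictable: the outcome follows exactly from the inputs.
balance = 1500.00          # what is in the account (illustrative figure)
computer_price = 899.99    # what we intend to spend (illustrative figure)
print("Predicted balance:", balance - computer_price)   # always the same answer

# Random, unpredictable: the same starting point gives a different outcome each time.
for attempt in range(3):
    throw = [random.randint(1, 6) for _ in range(6)]     # six dice, faces 1 to 6
    print("Throw", attempt + 1, "->", throw, "sum =", sum(throw))   # sum is between 6 and 36
```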


This is a picture of a conceptually simple system. It is a novelty toy and it comprises two thin sheets of glass held a few millimetres apart by some curved plastic spacers. The narrow space is filled with green coloured oil, some coarse black volcanic sand, and some fine white coral sand. That is all. It is a conceptually simple toy. I have (by some magical means) layered the sand so that the coarse black sand is at the bottom and the fine white sand is on top. It is a stable arrangement – and explainable. I then tipped the toy on its side – I rotated it through 90 degrees. It is a simple intervention – and explainable.

My intervention has converted a stable system to an unstable one and I confidently predict that the sand and oil will flow under the influence of gravity. There is no randomness here – I do not jiggle the toy – so the outcome should be predictable because I can explain all the parts in detail before we start;  and I can explain the process in detail; and I can explain precisely what my intervention will be. So I should be able to predict the final configuration of the sand when this simple and explainable system finally settles into a new stable state again. Yes?

Well, I cannot. I can make some educated guesses – some plausible projections. But the only way to find out precisely what will happen is by doing the experiment and observing what actually happens.

This is what happened.

The final, stable configuration of the coarse black and fine white sand has a strange beauty in the way the layers are re-arranged. The result is not random – it has structure. And with the benefit of hindsight I feel I can work backwards and understand how it might have come about. It is explainable in retrospect but I could not predict it in prospect – even with a detailed knowledge of the starting point and the process.

This is called a non-linear system. Explainable in concept but difficult to predict in practice. The weather is another example of a non-linear system – explainable in terms of the physics but not precisely predictable. How reliable are our long range weather forecasts – or the short range ones for that matter?

Non-linear systems exhibit complex and unpredictable  behaviour – even though they may be simple in concept and uncomplicated in construction.  Randomness is usually present in real systems but it is not the cause of the complex behaviour, and making our systems more complicated seems likely to result in more unpredictable behaviour – not less.
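The sand toy itself is hard to simulate, but a much simpler stand-in makes the same point: the logistic map, a single deterministic rule with no randomness at all (this is my illustrative substitute, not the toy). Two starting points that differ by one part in a million soon follow completely different paths – the rule is fully explainable, yet the long-range outcome is not practically predictable.

```python
# Logistic map: x_next = r * x * (1 - x). One deterministic rule, no randomness.
def trajectory(x, r=3.9, steps=30):
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.400000)   # starting point A
b = trajectory(0.400001)   # starting point B, different by one part in a million
for step in (5, 10, 20, 29):
    print(f"step {step:2d}: A={a[step]:.4f}  B={b[step]:.4f}  gap={abs(a[step] - b[step]):.4f}")
# The two trajectories start out indistinguishable and end up completely different.
```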

If we want the behaviour of our system to be predictable and our system has non-linear parts and relationships in it – then we are forced to accept two Universal Truths.

1. That our system behaviour will only be predictable within limits (even if there is little or no randomness in it).

2. That to keep the behaviour within acceptable limits we need to be careful how we arrange the parts and how they relate to each other.

This challenge of creating a predictable-within-acceptable-limits system from non-linear parts is called resilient design.


We have a fourth option to consider: a system that has a predictable outcome but an unexplainable reason.

We make predictions two ways – by working out what will happen or by remembering what has happened before. The second method is much easier so it is the one we use most of the time: it is called re-cognition. We call it knowledge.

If we have a black box with inputs on one side and outputs on the other, and we observe that when we set the inputs to a specific configuration we always get the same output – then we have a predictable system. We cannot explain how the inputs result in the output because the inner workings are hidden. It could be very simple – or it could be fiendishly complicated – we do not know.

In this situation we have no choice but to accept the status quo – and we have to accept that to get a predictable outcome we have to follow the rules and just do what we have always done before. It is the creed of blind acceptance – “If you always do what you have always done you will always get what you always got”. It is knowledge but it is not understanding. New knowledge can only be found by trial and error. It is not wisdom, it is not design, it is not curiosity and it is not Improvement Science.
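In code, this kind of knowledge-without-understanding looks like a lookup table built purely from past observation of a black box. The sketch below is only an illustration – black_box is a hypothetical stand-in for the hidden mechanism we cannot see inside.

```python
def black_box(settings):
    # Stand-in for the hidden mechanism; in reality we cannot see inside it.
    return sum(ord(c) for c in settings) % 7

# Build "knowledge" purely by trial and observation.
observed = {}
for trial in ("AAB", "BBA", "CCC"):
    observed[trial] = black_box(trial)

# Prediction by re-cognition: reliable for what we have seen before, silent on why.
print(observed.get("AAB"))   # predictable - we have seen this input before
print(observed.get("ABC"))   # None - no previous experience, so no prediction
```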


If our systems are non-linear (which they are) and we want predictable and acceptable performance (which we do) then we must strive to understand them and then to design them to be as simple as possible (which is difficult) so that we have the greatest opportunity to improve their performance by design (which is called Improvement Science).


This is a snapshot of the evolving oil-and-sand system. Look at that weird wine-glass-shaped hole in the top section, caused by the black sand being pulled down through the gap in the spacer, running down the slope of the middle section to fill a white-sand funnel, and then slipping through the next hole onto the top of the pyramid formed by the white sand that slipped through earlier onto the top of the sliding sand in the lowest section. Did you predict that? I suspect not. Me neither. But I can explain it – with the benefit of hindsight.

So what is it that is causing this complex behaviour? It is the spacers – the physical constraints to the flow of the sand and oil. And the same is true of systems – when the process hits a constraint the behaviour suddenly changes and complex behaviour emerges. And there is more to it than even this. It is the gaps between the spacers that are creating the complex behaviour. The flow from one compartment leaks into the next and influences its behaviour, and then into the next. This is what happens in all systems – the more constraints that are added to force the behaviour into predictable channels, and the more gaps that exist in the system of constraints, the more complex and unpredictable the system behaviour becomes. Which is exactly the opposite of the intended outcome.


The lesson that this simple toy can teach us is that if we want stable and predictable (i.e. non-complex) behaviour from our complicated systems then we must design them to operate inside the constraints so that they just never quite touch them. That requires data, information, knowledge, understanding and wise design. That is called Improvement Science.


But if, in an act of desperation, we force constraints onto the system we will make the system less stable, less predictable, less safe, less productive, less enjoyable and less affordable. That is called tampering.

The term Pragmatist is a modern one – it was coined by Charles Sanders Peirce (1839-1914), a 19th century American polymath and iconoclast. In plain speak he was a tree-shaker and a dogma-breaker; someone who regarded rules created by people as an opportunity for innovation rather than a source of frustration.

A tree-shaker reframes the Three Fears that block change and improvement: the Fear of Ambiguity, the Fear of Ridicule and the Fear of Failure. A tree-shaker re-channels their emotional energy from fear into innovation and exploration. They feel the fear but they do it anyway. But how do they do it?

To understand this we first need to explore how we learn to collectively suppress change by submitting to peer-fear.

In the 1960s an experiment was done with rhesus monkeys that sheds light on a possible mechanism: the monkeys appeared to learn from each other by observing the emotional responses of other monkeys to threats. The story of the Five Monkeys and the Banana Experiment first appeared in a management textbook in 1996, but there is no evidence that this particular experiment was ever performed. With this in mind, here is a version of the story:

Five naive monkeys were offered a banana but it required climbing a ladder to get it.  Monkeys like bananas and are good at climbing. The ladder was novel. And every time any of the monkeys started to climb the ladder all the monkeys were sprayed with cold water. Monkeys do not like cold water. It was a classic conditioning experiment and after just a few iterations the monkeys stopped trying to climb the ladder to get the banana. They had learned to fear the ladder and their natural desire for the banana was suppressed by their new fear: a learned association between climbing the ladder and the unpleasant icy shower. Next the psychologists replaced one of the monkeys with a new naive monkey – who immediately started to climb the ladder to get the banana. What happened next is interesting. The other four monkeys pulled the new monkey back. They did not want to get another cold shower. After a while the new monkey learned because his fear of social rejection was greater than his desire for the banana. He stopped trying to get the banana. This cycle was repeated four more times until all the original monkeys had been replaced. None of the five remaining monkeys had any personal experience of the cold shower – but the ladder-avoiding behaviour remained and was enforced by the group, even though the original reason for shunning the ladder was unknown.

Here is the quoted reference to the experiment on which the story is based.

Stephenson, G. R. (1967). Cultural acquisition of a specific learned response among rhesus monkeys. In: Starek, D., Schneider, R., and Kuhn, H. J. (eds.), Progress in Primatology, Stuttgart: Fischer, pp. 279-288.

So it would appear that a very special type of monkey would be needed to break a culturally enforced behavioural norm. One that is curious, creative and courageous, and one that does not fear ridicule or failure. One that is immune to peer-fear.

We could extrapolate from this story and reflect on how peer pressure might impede change and improvement in the workplace. When well-intended, innocent creativity and innovation are met with the emotional ice-bath of dire warnings, criticism, ridicule and cynicism, the unconfident innovator may eventually give up trying and start to believe that improvement is impossible. Hans Christian Andersen’s short tale of The Emperor’s New Clothes is a well-known example – the one innocent child says what all the experienced adults have learned to deny. A culture of peer-fear can become self-sustaining, and this change-avoiding culture appears to be a common state of affairs in many organisations, in particular ones of an academic and bureaucratic leaning.

At the other end of the change spectrum from Bureaucracy sits Chaos. It is also resisted, but the behaviour is fuelled by a different fear – the Fear of Ambiguity. We prefer the known and the predictable. We follow ingrained habits. We prevaricate even when our rationality says we should change. We dislike the feeling of ambiguity and uncertainty because it leaves us with a sense of foreboding and dread. Change is strongly associated with confusion and we appear hard-wired to avoid it. Except that we are not. This is learned behaviour, and we learned it when we were very young. As adults we reinforce it; as adults we replicate it; and as adults we impose it on others – including our next generation. The generation that will inherit our world and who will look after us when we are old and frail. We will reap what we sow. But if we learned it and teach it, are we able to unlearn it and unteach it?

Enter the Pragmatists. They have learned to harness the Three Fears. Or rather they have unlearned their association of Fear with Change. Sometimes this unlearning came from a crisis – they were forced to change by external factors and doing nothing was not an option. Sometimes their unlearning came from inspiration – they saw someone else demonstrate that other options were possible and beneficial. Sometimes their insight came by surprise – an unexpected change of perspective exposed the hidden opportunity. A eureka moment.

Whatever the route the Pragmatist discovers a new tool: a tool labelled “Heuristics”.  A heuristic is a “rule of thumb” – an empirically derived good-enough-for-now guideline. Heuristics include some uncertainty, some ambiguity and some risk. Just enough uncertainty and ambiguity to build a flexible conceptual framework that is strong enough, resilient enough and modifiable enough to facilitate learning and improvement. And with it a pinch of risk to spice the sauce – because we all like a bit of risk.

The Improvement Scientist is a Pragmatist and a Practitioner of Heuristics – both of which can be learned.

Improvement Science is not just about removing the barriers that block improvement and building barriers to prevent deterioration – it is also about maintaining acceptable, stable and predictable performance.

In fact most of the time this is what we need our systems to do so that we can focus our attention on the areas for improvement rather than running around keeping all the plates spinning.  Improving the ability of a system to maintain itself is a worthwhile and necessary objective.

Long term stability cannot be achieved by assuming a stable context and creating a rigid solution because the World is always changing. Long term stability is achieved by creating resilient solutions that can adjust their behaviour, within limits, to their ever-changing context.

This self-adjusting behaviour of a system is called homeostasis.

The foundation for the concept of homeostasis was first proposed by Claude Bernard (1813-1878) who, unlike most of his contemporaries, believed that all living creatures were bound by the same physical laws as inanimate matter. In his words: “La fixité du milieu intérieur est la condition d’une vie libre et indépendante” (“The constancy of the internal environment is the condition for a free and independent life”).

The term homeostasis is attributed to Walter Bradford Cannon (1871-1945), who was a professor of physiology at Harvard Medical School and who popularised his theories in a book called The Wisdom of the Body (1932). Cannon described four principles of homeostasis:

  1. Constancy in an open system requires mechanisms that act to maintain this constancy.
  2. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change.
  3. The regulating system that determines the homeostatic state consists of a number of cooperating mechanisms acting simultaneously or successively.
  4. Homeostasis does not occur by chance, but is the result of organised self-government.

Homeostasis is therefore an emergent behaviour of a system and is the result of organised, cooperating, automatic mechanisms. We know this by another name – feedback control – which is passing data from one part of a system to guide the actions of another part. Any system that does not have homeostatic feedback loops as part of its design will be inherently unstable – especially in a changing environment.  And unstable means untrustworthy.
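Here is a minimal sketch of such a feedback loop in Python, assuming nothing more than a level that is pushed away from its set point by a disturbance and a corrective action proportional to the error (the gain and disturbance values are arbitrary illustrative numbers):

```python
# A toy homeostat: the level drifts upwards, the feedback loop nudges it back.
set_point = 37.0   # the desired "internal environment" (illustrative units)
level = 37.0
gain = 0.5         # how strongly the system reacts to an error (illustrative)

for step in range(10):
    level += 0.8                  # external disturbance pushing the level up
    error = set_point - level     # feedback signal: desired state minus actual state
    level += gain * error         # corrective action proportional to the error
    print(f"step {step}: level = {level:.2f}")

# Without the corrective line the level climbs by 0.8 every step;
# with it the level settles near the set point, with the small steady
# offset that is typical of purely proportional feedback.
```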

Take driving for example. Our vehicle and its trusting passengers want to get to their desired destination on time and in one piece. To achieve this we will need to keep our vehicle within the boundaries of the road – the white lines – in order to avoid “disappointment”.

As their trusted driver, our feedback loop consists of a view of the road ahead via the front windscreen; our vision connected through a working nervous system to the muscles in our arms and legs; to the steering wheel, accelerator and brakes; then to the engine, transmission, wheels and tyres; and finally to the road underneath the wheels. It is quite a complicated multi-step feedback system – but an effective one. The road can change direction, unpredictable things can happen, and we can adapt, adjust and remain in control. An inferior feedback design would be to use only the rear-view mirror and to steer by watching the white lines emerging from behind us. This design is just as complicated but it is much less effective and much less safe, because it is entirely reactive. We get no early warning of what we are approaching. So, any system that uses the output performance as the feedback loop to the input decision step is like driving with just a rear-view mirror. Complex, expensive, unstable, ineffective and unsafe.

As the number of steps in a process increases the more important the design of  the feedback stabilisation becomes – as does the number of ways we can get it wrong:  Wrong feedback signal, or from the wrong place, or to the wrong place, or at the wrong time, or with the wrong interpretation – any of which result in the wrong decision, the wrong action and the wrong outcome. Getting it right means getting all of it right all of the time – not just some of it right some of the time. We can’t leave it to chance – we have to design it to work.

Let us consider a real example: the NHS 18-week performance requirement.

The stream map shows a simple system with two parallel streams, A and B, each with two steps, 1 and 2. A typical example would be the generic referral of patients for investigations and treatment to one of a number of consultants who offer that service. The two streams do the same thing, so the first step of the system is to decide which way to direct new tasks – to Step A1 or to Step B1. The whole system is required to deliver completed tasks in less than 18 weeks (18/52) – irrespective of which stream we direct work into. What feedback data do we use to decide where to direct the next referral?

The do-nothing option is to just allocate work without using any feedback. We might do that randomly, alternately or by some other means that is independent of the system. This is called a push design and is equivalent to driving with your eyes shut, relying on hope and luck for a favourable outcome. We will know when we have got it wrong – but by then it is too late – we have crashed the system!

A more plausible option is to use the waiting time for the first step as the feedback signal – streaming work to the first step with the shortest waiting time. This makes sense because the time waiting for the first step is part of the lead time for the whole stream, so minimising this first wait feels reasonable – and it is – BUT only in one situation: when the first steps are the constraint steps in both streams [the constraint step is the one that defines the maximum stream flow]. If this condition is not met then we are heading for trouble, and the map above illustrates why. In this case Stream A is just failing the 18-week performance target, but because the waiting time for Step A1 is the shorter we would continue to load more work onto the failing stream – and literally push it over the edge. In contrast Stream B is not failing, and because the waiting time for Step B1 is the longer it is not being overloaded – it may even be underloaded. So this “plausible” feedback design can actually make the system less stable. Oops!

In our transport metaphor – this is like driving too fast at night or in fog – only being able to see what is immediately ahead – and then braking and swerving to get around corners when they “suddenly” appear and running off the road unintentionally! Dangerous and expensive.

With this new insight we might now reasonably suggest using the actual output performance to decide which way to direct new work – but this is back to driving by watching the rear-view mirror!  So what is the answer?

The solution is to design the system to use the most appropriate feedback signal to guide the streaming decision. That feedback signal needs to be forward looking, responsive, and to lead to stable and equitable performance of the whole system – and it may originate from inside the system. The diagram above holds the hint: the predicted waiting time for the second step would be a better choice. Please note that I said the predicted waiting time – which is estimated when the task leaves Step 1 and joins the back of the queue between Step 1 and Step 2. It is not the actual time the most recent task came off the queue: that is rear-view mirror gazing again.
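A toy week-by-week simulation shows the difference between these feedback designs. It is not NHS data – the capacities and arrival rate are invented, and Stream A's second step is deliberately made the constraint – but it illustrates how the choice of feedback signal changes the behaviour of exactly the kind of two-stream, two-step system described above.

```python
import random

def simulate(policy, weeks=200, arrivals_per_week=10):
    """Toy week-by-week model of two parallel streams (A and B), each with two steps.
    All figures are illustrative assumptions, not real NHS data."""
    random.seed(42)                   # reproducible "push" allocation
    cap1 = {"A": 8, "B": 8}           # Step 1 capacity: tasks completed per week
    cap2 = {"A": 4, "B": 7}           # Step 2 capacity: Stream A's second step is the constraint
    q1 = {"A": 0, "B": 0}             # tasks waiting for Step 1
    q2 = {"A": 0, "B": 0}             # tasks waiting for Step 2
    for _ in range(weeks):
        for _ in range(arrivals_per_week):
            if policy == "push":                       # no feedback at all
                chosen = random.choice("AB")
            elif policy == "first_step_wait":          # feedback: waiting time for Step 1 only
                chosen = min("AB", key=lambda s: q1[s] / cap1[s])
            else:                                      # feedback: predicted wait for the whole stream
                chosen = min("AB", key=lambda s: q1[s] / cap1[s] + q2[s] / cap2[s])
            q1[chosen] += 1
        for s in "AB":
            moved = min(cap1[s], q1[s])                # Step 1 passes work on to the Step 2 queue
            q1[s] -= moved
            q2[s] += moved
            q2[s] -= min(cap2[s], q2[s])               # Step 2 completes work; it leaves the system
    # Approximate weeks of waiting work left in each stream at the end of the run
    return {s: round(q1[s] / cap1[s] + q2[s] / cap2[s], 1) for s in "AB"}

for policy in ("push", "first_step_wait", "predicted_total_wait"):
    print(policy, "->", simulate(policy))
```

With these made-up figures the push design and the first-step-wait design both quietly pile work onto Stream A – the stream with the hidden downstream constraint – and push it far beyond the 18-week limit while Stream B idles, whereas the predicted-total-wait design keeps both streams short, stable and equitable.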

When driving we look as far ahead as we can, for what we are heading towards, and we combine that feedback with our present speed to predict how much time we have before we need to slow down, when to turn, in which direction, by how much, and for how long. With effective feedback we can behave proactively, avoid surprises, and eliminate sudden braking and swerving! Our passengers will have a more comfortable ride and are more likely to survive the journey! And the better we can do all that the faster we can travel in both comfort and safety – even on an unfamiliar road.  It may be less exciting but excitement is not our objective. On time delivery is our goal.

Excitement comes from anticipating improvement – maintaining what we have already improved is rewarding.  We need both to sustain us and to free us to focus on the improvement work! 


How do we remember the vast amount of information that we seem to be capable of?

Our brains are made up of billions of cells, most of which are actually inactive and are just there to support the active brain cells – the neurons.

Suppose that the active brain cell part is 50% and our brain has a volume of about 1.2 litres, which is 1,200 cu.cm or 1,200,000 cu.mm. We know from looking down a microscope that each neuron is roughly 20/1,000 mm x 20/1,000 mm x 20/1,000 mm, which gives a volume of 8/1,000,000 cu.mm – or 125,000 neurons for every cu.mm. The population of a medium-sized town in a grain of salt! This is a concept we can just about grasp. And with these two facts we estimate that there are in the order of 75,000,000,000 neurons in a human brain – 75 billion – about ten times the population of the whole World. Wow!

But even that huge number is less than the size of the memory on the hard disc of the computer I am writing this blog on – which has 200 gigabytes, which is 1,600 gigabits, or 1,600 billion bits. About twenty times as many memory cells as there are neurons in a human brain.
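Here is the same back-of-envelope arithmetic written out in Python, so that the assumptions (and their rough, order-of-magnitude nature) are explicit:

```python
# Back-of-envelope estimates only - every figure is a round approximation.
brain_volume_mm3 = 1.2e6                      # 1.2 litres = 1,200,000 cu.mm
active_fraction = 0.5                         # assume half the volume is neurons
neuron_side_mm = 20 / 1000                    # a neuron is roughly a 20 micrometre cube
neurons_per_mm3 = 1 / neuron_side_mm ** 3     # = 125,000 neurons per cu.mm
neurons = brain_volume_mm3 * active_fraction * neurons_per_mm3
print(f"Estimated neurons: {neurons:,.0f}")   # about 75,000,000,000 (75 billion)

hard_disc_bits = 200e9 * 8                    # 200 gigabytes = 1,600 billion bits
print(f"Bits per neuron: {hard_disc_bits / neurons:.0f}")   # roughly 20
```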

But our brains are not just for storing data – they do all the data processing too – it is an integrated processor-and-memory design completely unlike the separate processor-or-memory design of a digital computer.  Each of our brains is remarkable in its capability, adaptability, and agility – its ability to cope with change – its ability to learn and to change its behaviour while still working.  So how does our biological memory work?

Well, not like a digital computer, where the zeros and ones – the binary digits (bits) – are stored in a regular structure of memory cells – a static structural memory – a data prison. Our biological memory works in a completely different way – it is a temporal memory – it is time dependent. Our memories are not “recalled” like getting a book out of an indexed slot on a numbered shelf in a massive library; our memories are replayed like a recording or rebuilt from a recipe. Time is the critical factor, and this concept of temporal memory is a feature of all systems.

And that is not all – the temporal memory is not a library of video tapes – it is the simultaneous collective action of many parts of the system that creates the illusion of the temporal memory – we have a parallel-distributed-temporal-memory. More like a video hologram. And it means we cannot point to the “memory” part of our brains – it is distributed throughout the system – and this means that the connections between the parts are as critical a part of the design as the parts themselves. It is a tricky concept to grasp, and none of the billions of digital computers that co-inhabit this planet operate this way. They are feeble and fragile in comparison. An inferior design.

The terms distributed-temporal or systemic-memory are a bit cumbersome though, so we need a new label – let us call it a systemory. The properties of a systemory are remarkable – for example it still works when a bit of the systemory is removed. When a bit of your brain is removed you don’t “forget” a bit of your name or lose the left ear on the mental picture of your friend’s face – as would happen with a computer. A systemory is resilient to damage, which is a necessary design-for-survival. It also implies that we can build our systemory with imperfect parts and incomplete connections. In a digital computer this would not work: the localised-static or silo-memory has to be perfect, because if a single bit gets flipped or a single wire gets fractured it can render the whole computer inoperative – useless junk.

Another design-for-survival property of a systemory is that it still works even when it is being changed – it is continuously adaptable and updateable. Not so a computer – to change the operating system the computer has to be stopped, the old program overwritten by the new one, and then the new one started. In fact computers are designed to prevent programs modifying themselves – because that is a sure recipe for a critical system failure – the dreaded blue screen!

So if we map our systemory concept across from person to population and we replace neurons with people then we get an inkling of how a society can have a collective memory, a collective intelligence, a collective consciousness even – a social systemory. We might call that property the culture.  We can also see that the relationships that link the people are as critical as the people themselves and that both can be imperfect yet we get stable and reliable behaviour. We can also see that influencing the relationships between people has as much effect on the system behaviour as how the people themselves perform – because the properties of the systemory are emergent. Culture is an output not an input.

So, in the World at large, the development of global communication systems means that all 7 billion people in the global social systemory can, in principle, connect to each other and can collectively learn and change faster and faster as the technology to connect more widely and more quickly develops. The rate of culture change is no longer governed by physical constraints, such as geographic location, or temporal constraints, such as how long a letter takes to be delivered.

Perhaps the most challenging implication is that a systemory does not have a “point of control” – there is no librarian who acts as a gatekeeper to the data bank, no guard on the data prison. The concept of “control” in a systemory is different – it is global not local – and it is influence not control. The rapid development of mobile communication technology and social networking gives ample evidence – we would now rather communicate with someone familiar on the other side of the world than with a stranger standing next to us in the lunch queue. We have become tweeting and texting daemons. Our emotional relationships are more important than our geographical ones. And if enough people can connect to each other they can act in a collective, coordinated, adaptive and agile way that no command-and-control system can either command or control. The recent events in the Middle East are ample evidence of the emergent effectiveness of a social systemory.

Our insight exposes a weakness of a social systemory – it is possible to adversely affect the whole by introducing a behavioural toxin that acts at the social connection level – on the relationships between people. The behavioural toxin needs only to have a weak and apparently harmless effect, but when disseminated globally the cumulative effect creates cultural dysfunction. It is rather like the effect of alcohol and other recreational chemical substances on the brain – it causes a temporary systemory dysfunction – but one that, in an over-stressed psychological system, paradoxically results in pleasure, or rather stress release. Hence the self-reinforcing nature of the addiction.

Effective leaders are intuitively aware that their behaviour alone can be a tonic or a toxin for the whole system: organisations are in the same emotional boat as their leader.

Effective leaders use their behaviour to steer the systemory of the organisation along a path of improvement and their behaviour is the output of their personal systemory.

Leaders have to be the change that they want their organisations to achieve.