This iconic image of the London Underground shows a system map – a set of intersecting transport streams.
Each stream links a sequence of independent steps – in this case the individual stations. Each step is a system in itself – it has a set of inner streams.
For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose. In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.
In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service. This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.
This creates the requirement for a critical design ingredient: trust.
Each step needs to be able to trust the others to do their part: right-first-time and on-time. All the steps are directly or indirectly interdependent. If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.
So a critical part of people-system design is the development and the maintenance of trust-bonds.
And it does not happen by accident. It takes active effort. It requires design.
We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.
The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of a system will increase over time’. The critical word is ‘average’.
So, if we are not aware of this and we fail to pay attention to the handoffs between the steps, we will observe increasing disorder, which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’, which implies ‘check-and-reject’ and ‘check-and-correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.
Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford. This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.
To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’. The strategy is to develop a skill called ‘smart trust’.
To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.
At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’. Neither of these is what we need.
In the middle is the zone of smart trust that spans healthy scepticism through to healthy optimism. What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.
The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence. Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.
The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.
So what evidence is needed?
Safety comes first. If the system is unsafe then people come to harm, harm destroys trust, and nothing else can be built until safety is demonstrated.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos, which increases stress and the risk of harm and creates disappointment for everyone. Smooth flow is the result of careful flow design.
Third is Quality, which means ‘setting and meeting realistic expectations’. This cannot happen in an unsafe, chaotic system. Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.
Fourth is Productivity (or profitability), and it does not automatically follow from the other three, as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable. Productivity needs to be designed too. An unsafe, chaotic, low quality design is always more expensive. Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.
So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”
And when that happens improvement will take off like a rocket. That is the Speed of Trust. That is Improvement Science in Action.