Archive for the ‘6M Design’ Category

It had been some time since Bob and Leslie had chatted, so an email out of the blue was a welcome distraction from a complex data analysis task.

<Bob> Hi Leslie, great to hear from you. I was beginning to think you had lost interest in health care improvement-by-design.

<Leslie> Hi Bob, not at all.  Rather the opposite.  I’ve been very busy using everything that I’ve learned so far.  Its applications are endless, but I have hit a problem that I have been unable to solve, and it is driving me nuts!

<Bob> OK. That sounds encouraging and interesting.  Would you be able to outline this thorny problem and I will help if I can.

<Leslie> Thanks Bob.  It relates to a big issue that my organisation is stuck with – managing urgent admissions.  The problem is that very often there is no bed available, but there is no predictability to that.  It feels like a lottery; a quality and safety lottery.  The clinicians are clamouring for “more beds” but the commissioners are saying “there is no more money”.  So the focus has turned to reducing length of stay.

<Bob> OK.  A focus on length of stay sounds reasonable.  Reducing that can free up enough beds to provide the necessary space-capacity resilience to dramatically improve the service quality.  So long as you don’t then close all the “empty” beds to save money, or fall into the trap of believing that 85% average bed occupancy is the “optimum”.

<Leslie> Yes, I know.  We have explored all of these topics before.  That is not the problem.

<Bob> OK. What is the problem?

<Leslie> The problem is demonstrating objectively that the length-of-stay reduction experiments are having a beneficial impact.  The data seems to say they are, and the senior managers are trumpeting the success, but the people on the ground say they are not. We have hit a stalemate.


<Bob> Ah ha!  That old chestnut.  So, can I first ask what happens to the patients who cannot get a bed urgently?

<Leslie> Good question.  We have mapped and measured that.  What happens is the most urgent admission failures spill over to commercial service providers, who charge a fee-per-case and we have no choice but to pay it.  The Director of Finance is going mental!  The less urgent admission failures just wait in the queue-in-the-community until a bed becomes available.  They are the ones who are complaining the most, so the Director of Governance is also going mental.  The Director of Operations is caught in the cross-fire and the Chief Executive and Chair are doing their best to calm frayed tempers and to referee the increasingly toxic arguments.

<Bob> OK.  I can see why a “Reduce Length of Stay Initiative” would tick everyone’s Nice If box.  So, the data analysts are saying “the length of stay has come down since the Initiative was launched” but the teams on the ground are saying “it feels the same to us … the beds are still full and we still cannot admit patients”.

<Leslie> Yes, that is exactly it.  And everyone has come to the conclusion that demand must have increased so it is pointless to attempt to reduce length of stay because when we do that it just sucks in more work.  They are feeling increasingly helpless and hopeless.

<Bob> OK.  Well, the “chronic backlog of unmet need” issue is certainly possible, but your data will show if admissions have gone up.

<Leslie> I know, and as far as I can see they have not.

<Bob> OK.  So I’m guessing that the next explanation is that “the data is wonky”.

<Leslie> Yup.  Spot on.  So, to counter that the Information Department has embarked on a massive push on data collection and quality control and they are adamant that the data is complete and clean.

<Bob> OK.  So what is your diagnosis?

<Leslie> I don’t have one, that’s why I emailed you.  I’m stuck.


<Bob> OK.  We need a diagnosis, and that means we need to take a “history” and “examine” the process.  Can you tell me the outline of the RLoS (Reduce Length of Stay) Initiative?

<Leslie> We knew that we would need a baseline to measure from so we got the historical admission and discharge data and plotted a Diagnostic Vitals Chart®.  I have learned something from my HCSE training!  Then we planned the implementation of a visual feedback tool that would show ward staff which patients were delayed so that they could focus on “unblocking” the bottlenecks.  We then planned to measure the impact of the intervention for three months, and then we planned to compare the average length of stay before and after the RLoS Intervention with a big enough data set to give us an accurate estimate of the averages.  The data showed a very obvious improvement, a highly statistically significant one.

<Bob> OK.  It sounds like you have avoided the usual trap of just relying on subjective feedback, and now have a different problem because your objective and subjective feedback are in disagreement.

<Leslie> Yes.  And I have to say, getting stuck like this has rather dented my confidence.

<Bob> Fear not Leslie.  I said this is an “old chestnut” and I can say with 100% confidence that you already have what you need in your T4 kit bag.

<Leslie> Tee-Four?

<Bob> Sorry, a new abbreviation. It stands for “theory, techniques, tools and training”.

<Leslie> Phew!  That is very reassuring to hear, but it does not tell me what to do next.

<Bob> You are an engineer now Leslie, so you need to don the hard-hat of Improvement-by-Design.  Start with your Needs Analysis.


<Leslie> OK.  I need a trustworthy tool that will tell me if the planned intervention has had a significant impact on length of stay, for better or worse or not at all.  And I need it to tell me that quickly so I can decide what to do next.

<Bob> Good.  Now list all the things that you currently have that you feel you can trust.

<Leslie> I do actually trust that the Information team collect, store, verify and clean the raw data – they are really passionate about it.  And I do trust that the front line teams are giving accurate subjective feedback – I work with them and they are just as passionate.  And I do trust the systems engineering “T4” kit bag – it has proven itself again-and-again.

<Bob> Good, and I say that because you have everything you need to solve this, and it sounds like the data analysis part of the process is a good place to focus.

<Leslie> That was my conclusion too.  And I have looked at the process, and I can’t see a flaw. It is driving me nuts!

<Bob> OK.  Let us take a different tack.  Have you thought about designing the tool you need from scratch?

<Leslie> No. I’ve been using the ones I already have, and assume that I must be using them incorrectly, but I can’t see where I’m going wrong.

<Bob> Ah!  Then, I think it would be a good idea to run each of your tools through a verification test and check that they are fit-4-purpose in this specific context.

<Leslie> OK. That sounds like something I haven’t covered before.

<Bob> I know.  Designing verification test-rigs is part of the Level 2 training.  I think you have demonstrated that you are ready to take the next step up the HCSE learning curve.

<Leslie> Do you mean I can learn how to design and build my own tools?  Special tools for specific tasks?

<Bob> Yup.  All the techniques and tools that you are using now had to be specified, designed, built, verified, and validated. That is why you can trust them to be fit-4-purpose.

<Leslie> Wooohooo! I knew it was a good idea to give you a call.  Let’s get started.


[Postscript] And Leslie, together with the other stakeholders, went on to design the tool that they needed and to use the available data to dissolve the stalemate.  And once everyone was on the same page again they were able to work collaboratively to resolve the flow problems, and to improve the safety, flow, quality and affordability of their service.  Oh, and to know for sure that they had improved it.

One of the quickest and easiest ways to kill an improvement initiative stone dead is to label it as a “cost improvement programme” or C.I.P.

Everyone knows that the biggest single contributor to cost is salaries.

So cost reduction means head-count reduction, which means people lose their jobs and their livelihood.

Who is going to sign up to that?

It would be like turkeys voting for Xmas.

Surely there must be a better approach?

Yes. There is.


Over the last few weeks, groups of curious skeptics have experienced the immediate impact of systems engineering theory, techniques and tools in a health care context.

They experienced queues, delays and chaos evaporate in front of their eyes … and it cost nothing to achieve. No extra resources. No extra capacity. No extra cash.

Their reaction was “surprise and delight”.

But … it also exposed a problem.  An undiscussable problem.


Queues and chaos require expensive resources to manage.

We call them triagers, progress-chasers, and fire-fighters.  And when the queues and chaos evaporate then their jobs do too.

The problem is that the very people who are needed to make the change happen are the ones who become surplus-to-requirement as a result of the change.

So change does not happen.

It would be like turkeys voting for Xmas.


The way around this impasse is to anticipate the effect and to proactively plan to re-invest the resource that is released.  And to re-invest it in doing more interesting and more worthwhile jobs than queue-and-chaos management.

One opportunity for re-investment is called time-buffering which is an effective way to improve resilience to variation, especially in an unscheduled care context.

Another opportunity for re-investment is tail-gunning the chronic backlogs until they are down to a safe and sensible size.

And many complain that they do not have time to learn about improvement because they are too busy managing the current chaos.

So, another opportunity for re-investment is training – oneself first and then others.


R.I.P.    C.I.P.

The NHS appears to be descending into a frenzy of fear as the winter looms and everyone says it will be worse than the last one, and the one before that.

And with that we-are-going-to-fail mindset, it almost certainly will.

Athletes do not start a race believing that they are doomed to fail … they hold a belief that they can win the race and that they will learn and improve even if they do not. It is a win-win mindset.

But to succeed in sport requires more than just a positive attitude.

It also requires skills, training, practice and experience.

The same is true in healthcare improvement.


That is not the barrier though … the barrier is disbelief.

And that comes from not having experienced what it is like to take a system that is failing and transform it into one that is succeeding.

Logically, rationally, enjoyably and surprisingly quickly.

And, the widespread disbelief that it is possible is paradoxical because there are plenty of examples where others have done exactly that.

The disbelief seems to be “I do not believe that will work in my world and in my hands!”

And the only way to dismantle that barrier-of-disbelief is … by doing it.


How do we do that?

The emotionally safest way is in a context that is carefully designed to enable us to surface the unconscious assumptions that are the bricks in our individual Barriers of Disbelief.

And to discard the ones that do not pass a Reality Check, and keep the ones that are OK.

This Disbelief-Busting design has been proven to be effective, as evidenced by the growing number of individuals who are learning how to do it themselves, and how to inspire, teach and coach others to do it as well.


So, if you would like to flip disbelief-and-hopelessness into belief-and-hope … then the door is here.

It is always rewarding when separate but related ideas come together and go “click”.

And this week I had one of those “ah ha” moments while attempting to explain how the process of engagement works.

Many years ago I was introduced to the conscious-competence model of learning which I found really insightful.  Sometime later I renamed it as the awareness-ability model because the term competence felt too judgmental.

The idea is that when we learn we all start from a position of being unaware of our inability.

A state called blissful ignorance.

And it is only when we try to do something that we become aware of what we cannot do; which can lead to temper tantrums!

As we concentrate and practice our ability improves and we enter the zone of know how.  We become able to demonstrate what we can do, and explain how we are doing it.

The final phase comes when it becomes so habitual that we forget how we learned our skill – it has become second nature.


Some years later I was introduced to the Nerve Curve which is the emotional roller-coaster ride that accompanies change.  Any form of change.

A five-step model was described in the context of bereavement by psychiatrist Elisabeth Kübler-Ross in her 1969 book “On Death & Dying: What the Dying Have to Teach Doctors, Nurses, Clergy and their Families”.

More recently this has been extended and applied by authors such as William Bridges and John Fisher in the less emotionally traumatic contexts called transitions.

The characteristic sequence of emotions triggered by external events is:

  • shock
  • denial
  • frustration
  • blame
  • guilt
  • depression
  • acceptance
  • engagement
  • excitement.

The important messages in both of these models are that we can get stuck along the path of transition, and that we can disengage at several points, signalling to others that we have come off the track.  When we do that we exhibit behaviours such as denial, disillusionment and hostility.


More recently I was introduced to the work of the late Chris Argyris and specifically the concept of “defensive reasoning“.

The essence of the concept:  As we start to become aware of a gap between our intentions and our impact, then we feel threatened and our natural reaction is defensive.  This is the essence of the behaviour called “resistance to change”, and it is interesting to note that “smart” people are particularly adept at it.


These three concepts are clearly related in some way … but how?


As a systems engineer I am used to cyclical processes and the concepts of wavelength, amplitude, phase and offset, and I found myself looking at the Awareness-Ability cycle and asking:

“How could that cycle generate the characteristic shape of the transition curve?”

Then the Argyris idea of the gap between intent and impact popped up and triggered another question:

“What if we look at the gap between our ability and our awareness?”

So, I conducted a thought experiment and imagined myself going around the cycle – and charting my ability, awareness and emotional state along the way … and this sketch emerged. Ah ha!

When my awareness exceeded my ability I felt disheartened. That is the defensive reasoning that Chris Argyris talks about, the emotional barrier to self-improvement.


Ability – Awareness = Engagement


This suggested to me that the process of building self-engagement requires opening the ability-versus-awareness gap a little-bit-at-a-time, sensing the emotional discomfort, and then actively releasing the tension by learning a new concept, principle, technique or tool (and usually all four).

Eureka!

I wonder if the same strategy would work elsewhere?
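
As a playful footnote, here is a toy rendering of that thought experiment in code.  The curves are invented (this is not the original sketch); it simply shows how, if awareness leads and ability lags as we go around the cycle, the gap between them traces an emotional roller-coaster.

```python
import math

# Toy model of the thought experiment (invented curves, not the original
# sketch): awareness rises first and ability catches up later as we go
# around the learning cycle.
for step in range(13):
    t = step * math.pi / 6
    awareness = math.sin(t)                  # rises first ...
    ability = math.sin(t - math.pi / 3)      # ... ability lags behind
    engagement = ability - awareness         # the emotional state
    print(f"t={step:2d}  engagement={engagement:+.2f}  "
          + " " * int(10 + 10 * engagement) + "*")
```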

The first step in a design conversation is to understand the needs of the customer.

It does not matter if you are designing a new kitchen, bathroom, garden, house, widget, process, or system.  It is called a “needs analysis”.

Notice that it is not called a “wants analysis”.  They are not the same thing because there is often a gap between what we want (and do not want) and what we need (and do not need).

The same is true when we are looking to use a design-based approach to improve something that we already have.


This is especially true when we are improving services, because the needs and wants of a service tend to drift and shift continuously, and so we are in a continual state of improvement.

For design to work the “customers” and the “suppliers” need to work collaboratively to ensure that they both get what they need.

Frustration and fragmentation are the symptoms of a combative approach where a “win” for one is a “lose” for the other (NB. In absolute terms both will end up worse off than they started so both lose in the long term.)


And there is a tried-and-tested process for collaborative improvement-by-design.

One version is called “experience based co-design” (EBCD) and it was cooked up in a health care context about 20 years ago and shown to work in a few small pilot studies.

The “experience” that triggered the projects was almost always a negative one and was associated with feelings of frustration, anxiety and disappointment. So, the EBCD case studies were more focused on helping the protagonists to share their perspectives, in the belief that this would be enough to solve the problem.  And it is indeed a big step forwards.

It has a limitation though.  It assumes that the staff and patients know how to design processes so that they are fit-4-purpose, and the evidence to support that assumption is scanty.

In one pilot in mental health, the initial improvement (a fall in patient and carer complaints) was not sustained.  The reason given was that the staff who were involved in the pilot inevitably moved on, and as they did the old attitudes, beliefs and behaviours returned.


So, an improved version of EBCD is needed.  One that is based on hard evidence of what works and what does not.  One that is also focused on moving towards a future-purpose rather than just moving away from past-problems.

Let us call this improved version “Evidence-Based Co-Design“.

And we already know that by a different name:

Health Care Systems Engineering (HCSE).

“Those who cannot remember the past are condemned to repeat it”.

Aphorism by George Santayana, philosopher (1863-1952).

And the history of quality improvement (QI) is worth reflecting on, because there is massive pressure to grow QI capability in health care as a way of solving some chronic problems.

The chart below is a Google Ngram; it was generated using some phrases from the history of Quality Improvement:

TQM = the total quality management movement that grew from the work of Walter Shewhart in the 1920’s and 30’s and was “incubated” in Japan after being transplanted there by Shewhart’s student W. Edwards Deming in the 1950’s.
ISO 9001 = an international quality standard first published in 2000 that developed from the work of the British Standards Institution (BSI) in the 1970’s, via ISO 9000 which was first published in 1987.
Six Sigma = a highly statistical quality improvement / variation reduction methodology that originated in the rapidly expanding semiconductor industry in the 1980’s.

The rise-and-fall pattern is characteristic of how innovations spread; there is a long lag phase, then a short accelerating growth phase, then a variable plateau phase and then a long, decelerating decline phase.

It is called a life-cycle. It is how complex adaptive systems behave. It is how innovations spread. It is expected.

So what happened?

Did the rise of TQM lead to the rise of ISO 9000 which triggered the development of the Six Sigma methodology?

It certainly looks that way.

So why is Six Sigma “dying”?  Or is it just being replaced by something else?


This is the corresponding Ngram for “Healthcare Quality Improvement” which seems to sit on the timeline in about the same place as ISO 9001 and that suggests that it was triggered by the TQM movement. 

The Institute for Healthcare Improvement (IHI) was officially founded in 1991 by Dr Don Berwick, some years after he attended one of the Deming 4-day workshops and had an “epiphany”.

Don describes his personal experience in a recent plenary lecture (from time 01:07).  The whole lecture is worth watching because it describes the core concepts and principles that underpin QI.


So given the fact that safety and quality are still very big issues in health care – why does the Ngram above suggest that the use of the term Quality Improvement does not sustain?

Will that happen in healthcare too?

Could it be that there is more to improvement than just a focus on safety (reducing avoidable harm) and quality (improving patient experience)?

Could it be that flow and productivity are also important?

The growing angst that permeates the NHS appears to be more focused on budgets and waiting-time targets (4 hrs in A&E, 62 days for cancer, 18 weeks for scheduled care, etc.).

Mortality and Quality hardly get a mention any more, and the nationally failed waiting time targets are being quietly dropped.

Is it too politically embarrassing?

Has the NHS given up because it firmly believes that pumping in even more money is the only solution, and there isn’t any more in the tax pot?


This week another small band of brave innovators experienced, first-hand, the application of health care systems engineering (HCSE) to a very common safety, flow, quality and productivity problem …

… a chronically chaotic clinic characterized by queues and constant calls for more capacity and cash.

They discovered that the queues, delays and chaos (i.e. a low quality experience) were not caused by lack of resources; they were caused by flow design.  They were iatrogenic.  And when they applied the well-known concepts and principles of scheduling design, they saw the queues and chaos evaporate, and they measured a productivity increase of over 60%.

OMG!

Improvement science is more than just about safety and quality, it is about flow and productivity as well; because we all need all four to improve at the same time.

And yes we need all the elements of Deming’s System of Profound Knowledge (SoPK), but we need more than that.  We need to harness the knowledge of the engineers who for centuries have designed and built buildings, bridges, canals, steam engines, factories, generators, telephones, automobiles, aeroplanes, computers, rockets, satellites, space-ships and so on.

We need to revisit the legacy of the engineers like Watt, Brunel, Taylor, Gantt, Erlang, Ford, Forrester and many, many others.

Because it does appear to be possible to improve-by-design as well as to improve-by-desire.

Here is the Ngram with “Systems Engineering” (SE) added and the time line extended back to 1955.  Note the rise of SE in the 1950’s and 1960’s and note that it has sustained.

That pattern of adoption only happens when something is proven to be fit-4-purpose, and is valued and is respected and is promoted and is taught.

What opportunity does systems engineering offer health care?

That question is being actively explored … here.

This week a ground-breaking case study was published.

It describes how a team in South Wales discovered how to make the flows visible in a critical part of their cancer pathway.

Radiology.

And they did that by unintentionally falling into a trap!  A trap that many who set out to improve health care services fall into.  But they did not give up.  They sought guidance and learned some profound lessons.

Part 1 of their story is shared here.


One lesson they learned is that, as they take on more complex improvement challenges, they need to be equipped with the right tools, and they need to be trained to use them, and they need to have practiced using them.

Another lesson they learned is that making the flows in a system visible is necessary before the current behaviour of the system can be understood.

And they learned that they needed a clear diagnosis of how the current system was not performing, before they could attempt to design an intervention to deliver the intended improvement.

They learned how the Study-Plan-Do cycle works, and they learned the reason it starts with “Study”, and not with “Plan”.


They tried, failed, took one step back, asked, listened and learned.


Then with their new knowledge, more advanced tools, and deeper understanding they took two steps forward; diagnosed the problem, designed an intervention, and delivered a significant improvement.

And visualised just how significant.

Then they shared Part 2 of their story … here.


Beliefs drive behaviour. Behaviour drives change. Improvement requires change.

So, improvement requires challenging beliefs; confirming some and disproving others.

And beliefs can only be confirmed or disproved rationally – with evidence and explanation. Rhetoric is too slippery. We can convince ourselves of anything with that!

So it comes as an emotional shock when one of our beliefs is disproved by experiencing reality from a new perspective.

Our natural reaction is surprise, perhaps delight, and then defence. We say “Yes, but ...”.

And that is healthy skepticism and it is a valuable and necessary part of the change and improvement process.

If there are not enough healthy skeptics on a design team it is unbalanced.

If there are too many healthy skeptics on a design team it is unbalanced.


This week I experienced this phenomenon first hand.

The context was a one day practical skills workshop and the topic was:

“How to improve the safety, timeliness, quality and affordability of unscheduled care”.

The workshop is designed to approach this challenge from a different perspective.

Instead of asking “What is the problem and how do we solve it?” we took the system engineering approach of asking “What is the purpose and how can we achieve it?”

We used a range of practical exercises to illustrate some core concepts and principles – reality was our teacher. Then we applied those newly acquired insights to the design challenge using a proven methodology that ensured we did not skip steps.


And the outcome was: the participants discovered that …

it is indeed possible to improve the safety, timeliness, quality and affordability of unscheduled health care …

using health care systems engineering concepts, principles, techniques and tools that, until the workshop, they had been unaware even existed.


Their reaction was “OMG” and was shortly followed by “Yes, but …” which is to be expected and is healthy.

The rest of the “Yes, but … ” sentence was “… how will I convince my colleagues?”

One way is for them to seek out the same experience …

… because reality is a much better teacher than rhetoric.

HCSE Practical Skills One Day Workshops

One of the most effective ways to inspire others is to demonstrate what is possible, and then to explain how it is possible.

And one way to do that is to use a simulation game.

There are many different forms of simulation game from the imagination playground games we remember as children, to sophisticated and highly realistic computer simulations.

The purpose is the same: to have the experience without the risk and cost of doing it for real; to learn from the experience; and to increase our chance of success in the real world.


Simulations are very effective educational tools because we can simplify, focus, practice, pause, rewind, and reflect.

They are also very effective exploration tools for developing our understanding of how things work.  We need to know that before we can make things work better.


And anyone who has tried it will confirm: creating an effective and enjoyable simulation game is not easy. It takes passion, persistence and practice and many iterations to get it right.

And that in itself is a powerful learning experience.


This week the topic of simulations has cropped up several times.

Firstly, the hands-on simulations at the Flow Design Practical Skills Workshop and how they generated insight and inspiration.  The experience certainly fired imaginations and will hopefully lead to innovations. For more click here …

Secondly, the computer simulation called the “Save The NHS Game” which is designed to illustrate the complex and counter-intuitive behaviour of real systems.  The rookie crew “crashed” the simulated healthcare system, but that was OK, it was just a simulation.  In the process they learned a lot about how not to improve NHS productivity. For more click here …

And later the same day being a crash-test dummy for an innovative table-top simulation game using different sizes and shapes of pasta and an ice tray to illustrate the confusing concept of carve-out!  For more click here …

And finally, a fantastic conversation with Dr Bryn Baxendale from the Trent Simulation Centre about how simulation training has become a growing part of how we train individuals and teams, especially in clinical skills, safety and human factors.


In health care systems engineering we use simulation tools in the diagnosis, design and delivery phases of complex improvement-by-design projects. So learning how to design, build and verify the simulation tools we need is a core part of advanced HCSE training.  For more click here …

Lots of simulation sTimulation. What a great week!

The Elephant in the Room is an English-language metaphorical idiom for an obvious problem or risk no one wants to discuss.

An undiscussable topic.

And the undiscussability is also undiscussable.

So the problem or risk persists.

And people come to harm as a result.

Which is not the intended outcome.

So why do we behave this way?

Perhaps it is because the problem looks too big and too complicated to solve in one intuitive leap, and we give up and label it a “wicked problem”.


The well known quote “When eating an elephant take one bite at a time” is attributed to Creighton Abrams, a US Army Chief of Staff.


It says that even seemingly “impossible” problems can be solved so long as we proceed slowly and carefully, in small steps, learning as we go.

And the continued decline of the NHS UK Unscheduled Care performance seems to be an Elephant-in-the-Room problem, as shown by the monthly A&E 4-hour performance over the last 10 years and the fact that this chart is not published by the NHS.

Red = England, Brown = Wales, Grey = N. Ireland, Purple = Scotland.


This week I experienced a bite of this Elephant being taken and chewed on.

The context was a Flow Design – Practical Skills – One Day Workshop and the design challenge posed to the eager delegates was to improve the quality and efficiency of a one stop clinic.

A seemingly impossible task because the delegates reported that the queues, delays and chaos that they experienced in the simulated clinic felt very realistic.

Which means that this experience is accepted as inevitable, and impossible to improve without more resources; but financial cuts prevent that, so we have to accept the waits.


At the end of the day their belief had been shattered.

The queues, delays and chaos had evaporated and the cost to run the new one stop clinic design was actually less than the old one.

And when we combined the quality metrics with the cost metrics and calculated the measured improvement in productivity; the answer was over 70%!

The delegates experienced it all first-hand. They did the diagnosis, design, and delivery using no more than squared-paper and squeaky-pen.

And at the end they were looking at a glaring mismatch between their rhetoric and the reality.

The “impossible to improve without more money” hypothesis lay in tatters – it had been rationally, empirically and scientifically disproved.

I’d call that quite a big bite out of the Elephant-in-the-Room.


So if you have a healthy appetite for Elephant-in-the-Room challenges, and are not afraid to try something different, then there is a whole menu of nutritious food-for-thought at a FISH&CHIPs® practical skills workshop.

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”

This is the now-infamous statement that Donald Rumsfeld made at a Pentagon Press Conference, and it triggered some good-natured jesting from the assembled journalists.

But there is a problem with it.

There is a fourth combination that he does not mention: the Unknown-Knowns.

Which is a shame, because they are actually the most important ones: they cause the most problems.  Avoidable problems.


Suppose there is a piece of knowledge that someone knows but that someone else does not; then we have an unknown-known.

None of us know everything and we do not need to, because knowledge that is of no value to us is irrelevant for us.

But what happens when the unknown-known is of value to us; and more than that, what happens when it would be reasonable for someone else to expect us to know it, because it is our job to know?


A surgeon would not be expected to know a lot about astronomy, but they would be expected to know a lot about anatomy.


So, what happens if we become aware that we are missing an important piece of knowledge that is actually already known?  What is our normal human reaction to that discovery?

Typically, our first reaction is fear-driven and we express defensive behaviour.  This is because we fear the potential loss-of-face from being exposed as inept.

From this sudden shock we then enter a characteristic emotional pattern which is called the Nerve Curve.

After the shock of discovery we quickly flip into denial and, if that does not work then to anger (i.e. blame).  We ignore the message and if that does not work we shoot the messenger.


And when in this emotionally charged state, our rationality tends to take a back seat.  So, if we want to benefit from the discovery of an unknown-known, then we have to learn to bite-our-lip, wait, let the red mist dissipate, and then re-examine the available evidence with a cool, curious, open mind.  A state of mind that is receptive and open to learning.


Recently, I was reminded of this.


The context is health care improvement, and I was using a systems engineering framework to conduct some diagnostic data analysis.

My first task was to run a data-completeness-verification-test … and the data I had been sent did not pass the test.  Some of it was missing.  It was an error of omission (EOO) and they are the hardest ones to spot.  Hence the need for the verification test.

The cause of the EOO was an unknown-known in the department that holds the keys to the data warehouse.  And I have come across this EOO before, so I was not surprised.

Hence the need for the verification test.

I was not annoyed either.  I just fed back the results of the test, explained what the issue was, explained the cause, and they listened and learned.
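
For the curious, here is the flavour of such a verification test.  This is a minimal sketch only; the record layout and the checks below are hypothetical illustrations, not the actual test I ran:

```python
from datetime import date, timedelta

# A minimal sketch of a data-completeness verification test. The record
# layout and the rules are hypothetical; the real test depends on how the
# warehouse extract was specified.
spells = [
    {"id": 1, "admitted": date(2017, 1, 3), "discharged": date(2017, 1, 7)},
    {"id": 2, "admitted": date(2017, 1, 5), "discharged": None},
    {"id": 3, "admitted": date(2017, 1, 9), "discharged": date(2017, 1, 8)},
]

errors = []
for s in spells:
    if s["discharged"] is None:
        errors.append(f"spell {s['id']}: no discharge event (possible EOO)")
    elif s["discharged"] < s["admitted"]:
        errors.append(f"spell {s['id']}: discharge precedes admission")

# Does the extract cover every day of the requested period?
period = [date(2017, 1, 1) + timedelta(d) for d in range(14)]
quiet_days = [d for d in period if d not in {s["admitted"] for s in spells}]

print(*errors, sep="\n")
print(f"{len(quiet_days)} days with no admissions - plausible, or missing data?")
```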


The implication of this specific EOO is quite profound though because it appears to be ubiquitous across the NHS.

To be specific it relates to the precise details of how raw data on demand, activity, length of stay and bed occupancy is extracted from the NHS data warehouses.

So it is rather relevant to just about everything the NHS does!

And the error-of-omission leads to confusion at best; and at worst … to the following sequence … incomplete data =>  invalid analysis => incorrect conclusion => poor decision => counter-productive action => unintended outcome.

Does that sound at all familiar?


So, if you would like to learn about this valuable unknown-known, then I recommend the narrative by Dr Kate Silvester, an internationally recognised expert in healthcare improvement.  In it, Kate re-tells the story of her emotional roller-coaster ride when she discovered she was making the same error.


Here is the link to the full abstract and where you can download and read the full text of Kate’s excellent essay, and help to make it a known-known.

That is what system-wide improvement requires – sharing the knowledge.

There is a Catch-22 in health care improvement and it goes a bit like this:

Most people are too busy fire-fighting the chronic chaos to have time to learn how to prevent the chaos, so they are stuck.

There is a deeper Catch-22 as well though:

The first step in preventing chaos is to diagnose the root cause and doing that requires experience, and we don’t have that experience available, and we are too busy fire-fighting to develop it.


Health care is improvement science in action – improving the physical and psychological health of those who seek our help. Patients.

And we have a tried-and-tested process for doing it.

First we study the problem to arrive at a diagnosis; then we design alternative plans to achieve our intended outcome and we decide which plan to go with; and then we deliver the plan.

Study ==> Plan ==> Do.

Diagnose  ==> Design & Decide ==> Deliver.

But here is the catch. The most difficult step is the first one, diagnosis, because there are many different illnesses and they often present with very similar patterns of symptoms and signs. It is not easy.

And if we make a poor diagnosis then all the action plans that follow will be flawed and may lead to disappointment and even harm.

Complaints and litigation follow in the wake of poor diagnostic ability.

So what do we do?

We defer reassuring our patients, we play safe, we request more tests and we refer for second opinions from specialists. Just to be on the safe side.

These understandable tactics take time, cost money and are not 100% reliable.  Diagnostic tests are usually precisely focused to answer specific questions but can have false positive and false negative results.

To request a broad batch of tests in the hope that the answer will appear like a rabbit out of a magician’s hat is … mediocre medicine.


This diagnostic dilemma arises everywhere: in primary care and in secondary care, and in non-urgent and urgent pathways.

And it generates extra demand, more work, bigger queues, longer delays, growing chaos, and mounting frustration, disappointment, anxiety and cost.

The solution is obvious but seemingly impossible: to ensure the most experienced diagnostician is available to be consulted at the start of the process.

But that must be impossible because if the consultants were seeing the patients first, what would everyone else do?  How would they learn to become more expert diagnosticians? And would we have enough consultants?


When I was a junior surgeon I had the great privilege of learning from wise and experienced senior surgeons, who had seen it, and done it, and could teach it.

Mike Thompson is one of these.  He is a general surgeon with a special interest in the diagnosis and treatment of bowel cancer.  And he has a particular passion for improving the speed and accuracy of the diagnosis step; because it can be a life-saver.

Mike is also a disruptive innovator and an early pioneer of the use of endoscopy in the outpatient clinic.  It is called point-of-care testing nowadays, but in the 1980’s it was a radically innovative thing to do.

He also pioneered collecting the symptoms and signs from every patient he saw, in a standard way using a multi-part printed proforma. And he invested many hours entering the raw data into a computer database.

He also did something that even now most clinicians do not do; when he knew the outcome for each patient he entered that into his database too – so that he could link first presentation with final diagnosis.


Mike knew that I had an interest in computer-aided diagnosis, which was a hot topic in the early 1980’s, and also that I did not warm to the Bayesian statistical models that underpinned it.  To me they made too many simplifying assumptions.

The human body is a complex adaptive system. It defies simplification.

Mike and I took a different approach.  We  just counted how many of each diagnostic group were associated with each pattern of presenting symptoms and signs.

The problem was that even his database of 8000+ patients was not big enough! This is why others had resorted to using statistical simplifications.

So we used the approach that an experienced diagnostician uses.  We used the information we had already gleaned from a patient to decide which question to ask next, and then the next one and so on.


And we always have three pieces of information at the start – the patient’s age, gender and presenting symptom.

What surprised and delighted us was how easy it was to use the database to help us do this for the new patients presenting to his clinic; the ones who were worried that they might have bowel cancer.

And what surprised us even more was how few questions we needed to ask to arrive at a statistically robust decision to reassure-or-refer for further tests.

So one weekend, I wrote a little computer program that used the data from Mike’s database and our simple bean-counting algorithm to automate this process.  And the results were amazing.  Suddenly we had a simple and reliable way of using past experience to support our present decisions – without any statistical smoke-and-mirror simplifications getting in the way.

The computer program did not make the diagnosis, we were still responsible for that; all it did was provide us with reliable access to a clear and comprehensive digital memory of past experience.
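
For those who like to see the mechanics, here is a minimal sketch of that bean-counting idea.  It is not Mike’s actual program; the records and fields are invented for illustration, but the principle is the same: filter the past experience by the pattern seen so far, then count the outcomes.

```python
from collections import Counter

# Hypothetical records: (age_band, gender, symptom, final_diagnosis).
# Mike's real database linked thousands of first presentations to outcomes.
records = [
    ("70-79", "M", "rectal bleeding", "cancer"),
    ("70-79", "M", "rectal bleeding", "haemorrhoids"),
    ("70-79", "M", "rectal bleeding", "haemorrhoids"),
    ("30-39", "F", "rectal bleeding", "haemorrhoids"),
]

def diagnosis_counts(**pattern):
    """Count final diagnoses for past patients matching the pattern so far.
    Asking the 'next question' just adds another field to the pattern."""
    fields = ("age_band", "gender", "symptom")
    matches = [r[-1] for r in records
               if all(r[fields.index(k)] == v for k, v in pattern.items())]
    return Counter(matches), len(matches)

counts, n = diagnosis_counts(age_band="70-79", gender="M",
                             symptom="rectal bleeding")
for diagnosis, k in counts.most_common():
    print(f"{diagnosis}: {k} of {n} = {k/n:.0%}")
```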


What it then enabled us to do was to learn more quickly by exploring the complex patterns of symptoms, signs and outcomes and to develop our own diagnostic “rules of thumb”.

We learned in hours what it would take decades of experience to uncover. This was hot stuff, and when I presented our findings at the Royal Society of Medicine the audience was also surprised and delighted (and it was awarded the John of Arderne Medal).

So, we called it the Hot Learning System, and years later I updated it with Mike’s much bigger database (29,000+ records) and created a basic web-based version of the first step – age, gender and presenting symptom.  You can have a play if you like … just click HERE.


So what are the lessons here?

  1. We need to have the most experienced diagnosticians at the start of the improvement process.
  2. The first diagnostic assessment can be very quick so long as we have developed evidence-based heuristics.
  3. We can accelerate the training in diagnostic skills using simple information technology and basic analysis techniques.

And exactly the same is true in health care system improvement.

We need to have an experienced health care improvement practitioner involved at the start, because if we skip this critical study step and move to plan without a correct diagnosis, then we will make errors, take poor decisions, and pursue counter-productive actions.  And then generate more work, more queues, more delays, more chaos, more distress and increased costs.

Exactly the opposite of what we want.

Q1: So, how do we develop experienced improvement practitioners more quickly?

Q2: Is there a hot learning system for improvement science?

A: Yes, there is. It can be found here.

Have you heard the phrase “you either love it or you hate it“?  It is called the Marmite Effect.

Improvement science has a Marmite-like effect on some people, or more specifically, the theory part does.

Both evidence and experience show that most people prefer to learn-by-doing first; and then consolidate their learning with the minimum, necessary amount of supporting theory.

But that is not how we usually share what we know with others.  We usually attempt to teach the theory first, perhaps in the belief that it will speed up the process of learning.

Sadly, it usually has the opposite effect. Too much theory too soon often creates a barrier to engagement. It actually slows learning down! Which was not the impact we were intending.


The implication of this is that teachers of the science of improvement need to provide a range of different ways to engage with the subject.  Complementary ways.  And leave the choice of which suits whom … to the learner.

And the way to tell if it is working is … the sound of laughter.

Why is that?


Laughing is a complex behaviour that leaves us feeling happier. Which is good.

Comedians make a living from being able to trigger this behaviour in their audiences, and we will gladly part with hard cash when we know something will make us feel better.

And laughing is one of the healthiest ways to feel better!

So why do we laugh when we are learning?

It is believed that one trigger for the laughter reaction is the sudden shift from one perspective to another.  More specifically, a mental shift that relieves a growing emotional tension.  The punch line of a really good joke for example.

And later-in-life learning is often more a process of unlearning.

When we challenge a learned assumption with evidence and if we disprove it … we are unlearning.  And doing that generates emotional tension. We are often very attached to our unconscious assumptions and will usually resist them being challenged.

The way to unlearn effectively is to use the evidence of our own eyes to raise doubts about our unconscious assumptions.  We need to actively generate a bit of confusion.

Then, we resolve the apparent paradox by creatively shifting perspective, often with a real example, a practical explanation or a hands-on demonstration.

And when we experience the “Ah ha! Now I see!” reaction, and we emerge from the fog of confusion, we will relieve the emotional tension and our involuntary reaction is to laugh.

But if our teacher unintentionally triggers a Marmite effect; a “Yeuk, I am NOT enjoying this!” feeling, then we need to respect that, and step back, and adopt a different tack.


Over the last few months I have been experimenting with different approaches to introducing the principles of improvement-by-design.

And the results are clear.

A minority prefer to start with the abstract theory, and then apply it in practice.

The majority have various degrees of Marmite reaction to the theory, and some are so put off that they actively disengage.  But when they have an opportunity to see the same principles demonstrated in a concrete, practical way; they learn and laugh.

Unlearning-by-doing seems to work better for the majority.

So, if you want to have fun and learn how to deliver significant and sustained improvements … then the evidence points to this as the starting point …

… the Flow Design Practical Skills One Day Workshop.

And if you also want to dip into a bit of the tried-and-tested theory that underpins improvement-by-design then you can do that as well, either before or later (when it becomes necessary), or both.


So, to have lots of fun and learn some valuable improvement-by-design practical skills at the same time …  click here.

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous one day workshop (ODWS), and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?

Read on »

In medical training we have to learn about lots of things. That is one reason why it takes a long time to train a competent and confident clinician.

First, we learn the anatomy (structure) and the physiology (function) of the normal, healthy human.

Then we learn about how this amazingly complicated system can go wrong.  We learn about pathology.  And we do that so that we understand the relationship between the cause (disease) and the effect (symptoms and signs).

Then we learn about diagnostics – which is how to work backwards from the effects to the most likely cause(s).

And only then can we learn about therapeutics – the design and delivery of a treatment plan that we are confident will relieve the symptoms by curing the disease.

And we learn about prevention – how to avoid some illnesses (and delay others) by addressing the root causes earlier.  Much of the increase in life expectancy over the last 200 years has come from prevention, not from cure.


The NHS is an amazingly complicated system, and it too can go wrong.  It can exhibit a wide spectrum of symptoms and signs; medical errors, long delays, unhappy patients, burned-out staff, and overspent budgets.

But, there is no equivalent training in how to diagnose and treat a sick health care system.  And this is not acceptable, especially given that the knowledge of how to do this is already available.

It is called complex adaptive systems engineering (CASE).


Before the Renaissance, the understanding of how the body works was primitive and it was believed that illness was “God’s Will”, so we had to just grin-and-bear-it (and pray).

The Scientific Revolution brought us new insights, profound theories, innovative techniques and capability-extending tools.  And the impact has been dramatic.  Those who do have access to this knowledge live better and longer than ever.  Those who do not … do not.

Our current understanding of how health care systems work is, to be blunt, medieval.  The current approaches amount to little more than rune reading, incantations and the prescription of purgatives and leeches.  And the impact is about as effective.

So we need to study the anatomy, physiology, pathology, diagnostics and therapeutics of complex adaptive systems like healthcare.  And most of all we need to understand how to prevent catastrophes happening in the first place.  We need the NHS to be immortal.


And this week a prototype complex adaptive pathology training system was tested … and it employed cutting-edge 21st Century technology: Pasta Twizzles.

The specific topic under scrutiny was variation.  A brain-bending concept that is usually relegated to the mystical smoke-and-mirrors world called “Sadistics”.

But no longer!

The Mists-of-Jargon and Fog-of-Formulae were blown away as we switched on the Fan-of-Facilitation and the Light-of-Simulation and went exploring.

Empirically. Pragmatically.


And what we discovered was jaw-dropping.

A disease called the “Flaw of Averages” and its malignant manifestation “Carveoutosis”.
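
For those who want a taste of the Flaw of Averages without the pasta, here is a minimal sketch (my illustration, not the workshop exercise): a clinic booked at exactly its average consultation time still grows a queue, because late-running accumulates while idle time is lost forever.

```python
import random

# Toy demo of the Flaw of Averages (invented numbers): patients are booked
# every 10 minutes and the consultation takes 10 minutes *on average*,
# yet the waits still build up.
random.seed(0)
BOOKING_INTERVAL = 10
clinician_free_at = 0
waits = []

for i in range(200):
    arrival = i * BOOKING_INTERVAL
    start = max(arrival, clinician_free_at)       # wait if running late
    waits.append(start - arrival)
    clinician_free_at = start + random.choice([5, 10, 15])  # mean 10 min

print(f"average wait {sum(waits)/len(waits):.1f} min, "
      f"longest wait {max(waits)} min")
```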


And with our new knowledge we opened the door to a previously hidden world of opportunity and improvement.

Then we activated the Laser-of-Insight and evaporated the queues and chaos that, before our new understanding, we had accepted as inevitable and beyond our understanding or control.

They were neither. And never had been. We were deluding ourselves.

Welcome to the Resilient Design – Practical Skills – One Day Workshop.

Validation Test: Passed.

A story was shared this week.

A story of hope for the hard-pressed NHS, its patients, its staff and its managers and its leaders.

A story that says “We can learn how to fix the NHS ourselves”.

And the story comes with evidence; hard, objective, scientific, statistically significant evidence.


The story starts almost exactly three years ago when a Clinical Commissioning Group (CCG) in England made a bold strategic decision to invest in improvement, or as they termed it “Achieving Clinical Excellence” (ACE).

They invited proposals from their local practices with the “carrot” of enough funding to allow GPs to carve-out protected time to do the work.  And a handful of proposals were selected and financially supported.

This is the story of one of those proposals which came from three practices in Sutton who chose to work together on a common problem – the unplanned hospital admissions in their over 70’s.

Their objective was clear and measurable: “To reduce the cost of unplanned admissions in the 70+ age group by working with the hospital to reduce length of stay.”

Did they achieve their objective?

Yes, they did.  But there is more to this story than that.  Much more.


One innovative step they took was to invest in learning how to diagnose why the current ‘system’ was costing what it was; then learning how to design an improvement; and then learning how to deliver that improvement.

They invested in developing their own improvement science skills first.

They did not assume they already knew how to do this and they engaged an experienced health care systems engineer (HCSE) to show them how to do it (i.e. not to do it for them).

Another innovative step was to create a blog to make it easier to share what they were learning with their colleagues; and to invite feedback and suggestions; and to provide a journal that captured the story as it unfolded.

And they measured stuff before they made any changes, and afterwards, so that they could gauge the impact and assess the evidence scientifically.

And that was actually quite easy because the CCG was already measuring what they needed to know: admissions, length of stay, cost, and outcomes.

All they needed to learn was how to present and interpret that data in a meaningful way.  And as part of their improvement science (IS) training, they learned how to use system behaviour charts, or SBCs.
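
The story does not spell out the SBC arithmetic, but assuming the familiar XmR-style chart with its 2.66 moving-range factor, the calculation is disarmingly simple.  A minimal sketch with invented data:

```python
# Minimal XmR-style system behaviour chart limits (a sketch; the weekly
# admission counts below are invented for illustration).
data = [23, 27, 25, 31, 22, 26, 29, 24, 28, 25]

centre = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = centre + 2.66 * avg_mr  # upper natural process limit
lower = centre - 2.66 * avg_mr  # lower natural process limit

print(f"centre = {centre:.1f}, limits = ({lower:.1f}, {upper:.1f})")
# A point outside the limits signals a real change in system behaviour,
# rather than routine variation.
```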


By Jan 2015 they had learned enough of the HCSE techniques and tools to establish the diagnosis and start making changes to the parts of the system that they could influence.


Two years later they subjected their before-and-after data to robust statistical analysis and they had a surprise. A big one!

Reducing hospital mortality was not a stated objective of their ACE project, and they only checked the mortality data to be sure that it had not changed.

But it had, and the “p=0.014” part of their result means that the probability that this 20.0% reduction in hospital mortality was due to random chance … is just 1.4%.  [This is well below the 5% threshold that we usually accept as “statistically significant” in a clinical trial.]
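
For the statistically curious, here is a minimal sketch of the sort of test involved.  The counts are invented (chosen so that a 20% relative reduction lands near p=0.014); the actual numbers are in the published case study:

```python
from math import sqrt, erf

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-tailed p-value

# Hypothetical before/after counts (deaths, admissions) for illustration only.
print(round(two_proportion_p_value(250, 2500, 200, 2500), 3))  # ~0.013
```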

But …

This was not a randomised controlled trial.  This was an intervention in a complicated, ever-changing system; so they needed to check that the hospital mortality for comparable patients who were not their patients had not changed as well.

And the statistical analysis of the hospital mortality for the ‘other’ practices for the same patient group, and the same period of time confirmed that there had been no statistically significant change in their hospital mortality.

So, it appears that what the Sutton ACE Team did to reduce length of stay (and cost) had also, unintentionally, reduced hospital mortality. A lot!


And this unexpected outcome raises a whole raft of questions …


If you would like to read their full story then you can do so … here.

It is a story of hunger for improvement, of humility to learn, of hard work and of hope for the future.

This is a snapshot of an experiment in progress.  The question being asked is “Can consultant surgeons be trained to be system flow designers in one day?”

On the left are Kate Silvester and Phil Debenham … their doctor/trainers. On the right are some brave volunteer consultant surgeons.

It is a tense moment. The focused concentration is palpable. It is a tough design assignment … a chronically chaotic one-stop outpatient clinic. They know it well.


They have the raw, unprocessed, data and they are deep into diagnosis mode.  On the other side of the room is another team of consultant surgeon volunteers who are struggling with the same challenge. Competition is in the air. Reputations are on the line. The game is on.

They are racing to generate this … a process template chart … that illustrates the conversion of raw event data into something visible and meaningful. A Gantt chart.

Their tools are basic – coloured pens and squared paper – just as Henry L. Gantt used in 1916 – a hundred years ago.

Hidden in this Gantt chart is the diagnosis, the open door to the path to improving this clinic design.  It is as plain as the nose on your face … if you know what to look for. They don’t. Well, … not yet.
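
For those who prefer pixels to coloured pens, here is a minimal sketch of the same conversion: an event log into a Gantt chart.  The clinic event data is invented for illustration; the white gaps between each patient’s tasks are the waits, and that is where the diagnosis hides.

```python
import matplotlib.pyplot as plt

# Hypothetical clinic event log: (patient, task, start_min, end_min).
events = [
    ("P1", "Reception", 0, 5), ("P1", "Nurse", 10, 25), ("P1", "Doctor", 40, 55),
    ("P2", "Reception", 5, 10), ("P2", "Nurse", 25, 40), ("P2", "Doctor", 55, 70),
    ("P3", "Reception", 10, 15), ("P3", "Nurse", 45, 60), ("P3", "Doctor", 75, 90),
]

colours = {"Reception": "tab:blue", "Nurse": "tab:orange", "Doctor": "tab:green"}
patients = sorted({e[0] for e in events})

fig, ax = plt.subplots()
for row, patient in enumerate(patients):
    for _, task, start, end in (e for e in events if e[0] == patient):
        ax.broken_barh([(start, end - start)], (row - 0.4, 0.8),
                       facecolors=colours[task])
ax.set_yticks(range(len(patients)))
ax.set_yticklabels(patients)
ax.set_xlabel("Minutes from clinic start")
plt.show()
```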


Skip forwards to later in the experiment. Both teams have solved the ‘impossible’ problem. They have diagnosed the system design flaw that was causing the queues, chaos and waiting … and they have designed and verified a solution. With no more than squared paper and coloured pens.  Henry G would be delighted.

And they are justifiably proud of their achievement because, when they tested their design in the real world, it showed that the queues and chaos had “evaporated”.  And it cost … nothing.


At the start of the experiment they were unaware of what was possible. At the end of the experiment they knew how to do it. In one day.

The question: “Can consultant surgeons be trained to be system flow designers in one day?”

The answer: “Yes”



About a year ago we looked back at the previous 10 years of NHS unscheduled care performance …

click here to read

… and warned that a catastrophe was on the way because we had created an urgent care “pressure cooker”.

Did waving the red warning flag make any difference?

It seems not.

The catastrophe happened just as predicted … A&E performance slumped to an all-time low, and has not recovered.


A pressure cooker is an elegantly simple system – a strong metal box with a sealed lid and a pressure-sensitive valve.  Food cooks more quickly at a higher temperature, and we can increase the boiling point of water by increasing the ambient pressure, so all we need to do is put some water in the cooker, close the lid, set the pressure limit we want (i.e. the temperature we want) and apply some heat.  Simple.  As the water boils the steam increases the pressure inside, until the regulator valve opens and lets a bit of steam out. The more heat we apply – the faster the steam comes out – but the internal pressure and temperature remain constant. An elegant self-regulating system.


Our unscheduled care acute hospital pressure cooker design is very similar – but it has an additional feature – we can squeeze raw patients in through a one-way valve labelled “admissions” and the internal pressure will squeeze them out through another one-way pressure-sensitive valve called “discharges”.

But there is not much head-space inside our hospital (i.e. empty beds) so pushing patients in will increase the pressure inside, and it will trigger an internal reaction called “fire-fighting” that generates heat (but sadly no insight).  When the internal pressure reaches the critical level, patients are squeezed out; ready-or-not.

What emerges from the chaotic cauldron is a mixture of under-cooked, just-right, and over-cooked patients.  And we then conduct quality control audits and we label what we find as “quality variation”, but it looks random so it gives us no clues as to what to do next.

Equilibrium is eventually achieved – what goes in comes out – the pressure and temperature auto-regulate – the chaos becomes chronic – and the quality of the output is predictably unpredictable, with some of it badly but randomly spoiled (i.e. harmed).
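This auto-regulation is easy to demonstrate with a toy simulation. What follows is a minimal sketch, not a validated model: a hypothetical hospital “cooker” with invented numbers, where the chance of discharge rises with occupancy (the pressure-sensitive valve).

```python
import random

random.seed(42)

BEDS = 20        # hypothetical bed capacity (illustrative number)
ADMISSIONS = 5   # patients squeezed in per day (illustrative number)

patients = []    # lengths of stay of the current inpatients
completed = []   # lengths of stay recorded at discharge

for day in range(365):
    pressure = len(patients) / BEDS
    kept = []
    for stay in patients:
        if random.random() < pressure:   # the pressure-sensitive valve:
            completed.append(stay)       # higher occupancy, earlier discharge
        else:
            kept.append(stay + 1)
    # the admissions valve: push new patients in up to the bed limit
    patients = kept + [1] * min(ADMISSIONS, BEDS - len(kept))

print(f"Discharges per day : {len(completed) / 365:.1f}")  # settles at the admission rate
print(f"Mean length of stay: {sum(completed) / len(completed):.1f} days")
print(f"Stay range         : {min(completed)} to {max(completed)} days")
```

Run it and the equilibrium appears on its own: discharges per day settle at the admission rate, the length of stay auto-regulates to whatever value balances the flow, and the wide spread of “cooking times” around the mean is the under- and over-cooked output.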

And our auto-regulating pressure cooker is very resistant to external influences, which after all is one of its key design features.


Squeezing a bit less in (i.e. admissions avoidance) does not make any difference to the internal pressure and temperature.  It auto-regulates.  The reduced flow means longer cooking time and we just get less under-cooked and more over-cooked output.  Oh, and we go bust because our revenue has reduced but our costs have not.

Building a bigger pressure cooker (i.e. adding more beds) does not make any sustained difference either.  Again the system auto-regulates.  The extra space allows a longer cooking time – and again we get less under-cooked and more over-cooked output.  Oh, and we still go bust (same revenue but increased cost).

Turning down the heat (i.e. reducing the 4 hr A&E lead time target yield from 98% to 95%) does not make any difference either. Our elegant auto-regulating design adjusts itself to sustain the internal pressure and temperature.  Output is still variable, but at least we do not go bust.


This metaphor may go some way to explain why the intuitively obvious “initiatives” to improve unscheduled care performance have had no significant or sustained impact.

And what is more worrying is that they may even have made the situation worse.

Working inside an urgent care pressure cooker is dangerous.  People get emotionally damaged and scarred.


The good news is that a different approach is available … a health and social care systems engineering (HSCSE) approach … one that we could use to change the fundamental design from fire-fighter to flow-facilitator.

Using HSCSE theory, techniques and tools we could specify, design, build, verify, implement and validate a low-pressure, low-resistance, low-wait, low-latency, high-efficiency unscheduled care flow design that is safe, timely, effective and affordable.

An emergency care “Dyson” so to speak.

But we are not training our people how to do that.

Why is that?


For more posts like this please vote here.
For more information please subscribe here.
To email the author please click here.

The path from chaos to calm is not clearly marked.  If it were we would not have chaotic health care processes, anxious patients, frustrated staff and escalating costs.

Many believe that there is no way out of the chaos. They have given up trying.

Some still nurture the hope that there is a way and are looking for a path through the fog of confusion.

A few know that there is a way out because they have been shown a path from chaos to calm and can show others how to find it.

Someone, a long time ago, explored the fog and discovered clarity of understanding on the far side, and returned with a Map of the Mind-field.


Q: What is causing The Fog?

When hot rhetoric meets cold reality the fog of disillusionment forms.

Q: Where does the hot rhetoric come from?

Passionate, well-intended and ill-informed people in positions of influence, authority and power. The orators, debaters and commentators.

They do not appear to have an ability to diagnose and to design, so cannot generate effective decisions and coordinate efficient delivery of solutions.

They have not learned how and seem to be unaware of it.

If they had, then they would be able to show that there is a path from chaos to calm.

A safe, quick, surprisingly enjoyable and productive path.

If they had the know-how then they could pull from the front in the ‘right’ direction, rather than push from the back in the ‘wrong’ one.


And the people who are spreading this good news are those who have just emerged from the path.  Their own fog of confusion evaporating as they discovered the clarity of hindsight for themselves.

Ah ha!  Now I see! Wow!  The view from the far side of The Fog is amazing and exciting. The opportunity and potential is … unlimited.  I must share the news. I must tell everyone! I must show them how-to.

Here is a story from Chris Jones who has recently emerged from The Fog.

And here is a description of part of the Mind-field Map, narrated in 2008 by Kate Silvester, a doctor and manufacturing systems engineer.

Imagine this scenario:

You develop some non-specific symptoms.

You see your GP who refers you urgently to a 2 week clinic.

You are seen, assessed, investigated and informed that … you have cancer!


The shock, denial, anger, blame, bargaining, depression, acceptance sequence kicks off … it is sometimes called the Kübler-Ross grief reaction … and it is a normal part of the human psyche.

But there is better news. You also learn that your condition is probably treatable, but that it will require chemotherapy, and that there are no guarantees of success.

You know that time is of the essence … the cancer is growing.

And time has a new relevance for you … it is called life time … and you know that you may not have as much left as you had hoped.  Every hour is precious.


So now imagine your reaction when you attend your local chemotherapy day unit (CDU) for your first dose of chemotherapy and have to wait four hours for the toxic but potentially life-saving drugs.

They are very expensive and they have a short shelf-life so the NHS cannot afford to waste any.   The Aseptic Unit team wait until all the safety checks are OK before they proceed to prepare your chemotherapy.  That all takes time, about four hours.

Once the team get to know you it will go quicker. Hopefully.

It doesn’t.

The delays are not the result of unfamiliarity … they are the result of the design of the process.

All your fellow patients seem to suffer repeated waiting too, and you learn that they have been doing so for a long time.  That seems to be the way it is.  The waiting room is well used.

Everyone seems resigned to the belief that this is the best it can be.

They are not happy about it but they feel powerless to do anything.


Then one day someone demonstrates that it is not the best it can be.

It can be better.  A lot better!

And they demonstrate that this better way can be designed.

And they demonstrate that they can learn how to design this better way.

And they demonstrate what happens when they apply their new learning …

… by doing it and by sharing their story of “what-we-did-and-how-we-did-it“.

[Image: the CDU waiting room]

If life time is so precious, why waste it?

And perhaps the most surprising outcome was that their safer, quicker, calmer design was also 20% more productive.

Safe means avoiding harm, and safety is an emergent property of a well-designed system.

Frail means infirm, poorly, wobbly and at higher risk of harm.

So we want our health care system to be a FrailSafe Design.

But is it? How would we know? And what could we do to improve it?


About ten years ago I was involved in a project to improve the safety design of a specific clinical stream flowing through the hospital that I work in.

The ‘at risk’ group of patients were frail elderly patients admitted as an emergency after a fall and who had suffered a fractured thigh bone. The neck of the femur.

Historically, the outcome for these patients was poor.  Many did not survive, and many of the survivors never returned to independent living. They became even more frail.


The project was undertaken during an organisational transition, the hospital was being ‘taken over’ by a bigger one.  This created a window of opportunity for some disruptive innovation, and the project was labelled as a ‘Lean’ one because we had been inspired by similar work done at Bolton some years before and Lean was the flavour of the month.

The actual change was small: it was a flow design tweak that cost nothing to implement.

First we asked two flow questions:
Q1: How many of these high-risk frail patients do we admit a year?
A1: About one per day on average.
Q2: What is the safety critical time for these patients?
A2: The first four days.  The sooner they have hip surgery and are able to mobilise actively, the better their outcome.

Second we applied Little’s Law which showed the average number of patients in this critical phase is four. This was the ‘work in progress’ or WIP.
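For the record, here is that calculation as a minimal sketch. Little's Law states that average work-in-progress equals average flow rate multiplied by average lead time.

```python
# Little's Law: average WIP = average flow rate x average lead time
admission_rate = 1.0   # high-risk frail patients admitted per day (from A1)
critical_time = 4.0    # safety-critical days per patient (from A2)

wip = admission_rate * critical_time
print(f"Average number of patients in the critical phase: {wip:.0f}")  # -> 4
```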

And we knew that variation is always present, and we knew that having all these patients in one place would make it much easier for the multi-disciplinary teams to provide timely care and to avoid potentially harmful delays.

So we suggested that one six-bedded bay on one of the trauma wards be designated the Fractured Neck Of Femur bay.

That was the flow diagnosis and design done.

The safety design was created by the multi-disciplinary teams who looked after these patients: the geriatricians, the anaesthetists, the perioperative emergency care team (PECT), the trauma and orthopaedic team, the physiotherapists, and so on.

They designed checklists to ensure that all #NOF patients got what they needed when they needed it and so that nothing important was left to chance.

And that was basically it.

And the impact was remarkable. The stream flowed. And one measured outcome was a dramatic and highly statistically significant reduction in mortality.

[Image: results table from the published paper]
The full paper was published in Injury 2011; 42: 1234-1237.

We had created a FrailSafe Design … which implied that what was happening before was clearly not safe for these frail patients!


And there was an improved outcome for the patients who survived: A far larger proportion rehabilitated and returned to independent living, and a far smaller proportion required long-term institutional care.

By learning how to create and implement a FrailSafe Design we had added both years-to-life and life-to-years.

It cost nothing to achieve and the message was clear, as this quote from the 2011 paper illustrates …

[Image: key message quoted from the 2011 paper]

What was a bit disappointing was the gap of four years between delivering this dramatic and highly significant patient safety and quality improvement and the sharing of the story.


What is more exciting is that the concept of FrailSafe is growing, evolving and spreading.

It was the time for Bob and Leslie’s regular Improvement Science coaching session.

<Leslie> Hi Bob, how are you today?

<Bob> I am getting over a winter cold but otherwise I am good.  And you?

<Leslie> I am OK and I need to talk something through with you because I suspect you will be able to help.

<Bob> OK. What is the context?

<Leslie> Well, one of the projects that I am involved with is looking at the elderly unplanned admission stream which accounts for less than half of our unplanned admissions but more than half of our bed days.

<Bob> OK. So what were you looking to improve?

<Leslie> We want to reduce the average length of stay so that we free up beds to provide resilient space-capacity to ease the 4-hour A&E admission delay niggle.

<Bob> That sounds like a very reasonable strategy.  So have you made any changes and measured any improvements?

<Leslie> We worked through the 6M Design® sequence. We studied the current system, diagnosed some time traps and bottlenecks, redesigned the ones we could influence, modified the system, and continued to measure to monitor the effect.

<Bob> And?

<Leslie> It feels better but the system behaviour charts do not show an improvement.

<Bob> Which charts, specifically?

<Leslie> The BaseLine XmR charts of average length of stay for each week of activity.

<Bob> And you locked the limits when you made the changes?

<Leslie> Yes. And there still were no red flags. So that means our changes have not had a significant effect. But it definitely feels better. Am I deluding myself?

<Bob> I do not believe so. Your subjective assessment is very likely to be accurate. Our Chimp OS 1.0 is very good at some things! I think the issue is with the tool you are using to measure the change.

<Leslie> The XmR chart?  But I thought that was THE tool to use?

<Bob> Like all tools it is designed for a specific purpose.  Are you familiar with the term Type II Error?

<Leslie> Doesn’t that come from research? I seem to remember that is the error we make when we have an under-powered study.  When our sample size is too small to confidently detect the change in the mean that we are looking for.

<Bob> A perfect definition!  The same error can happen when we are doing before and after studies too.  And when it does, we see the pattern you have just described: the process feels better but we do not see any red flags on our BaseLine© chart.

<Leslie> But if our changes only have a small effect how can it feel better?

<Bob> Because some changes have cumulative effects and we omit to measure them.

<Leslie> OMG!  That makes complete sense!  For example, if my bank balance is stable my average income and average expenses are balanced over time. So if I make a small-but-sustained improvement to my expenses, like using lower cost generic label products, then I will see a cumulative benefit over time to the balance, but not the monthly expenses; because the noise swamps the signal on that chart!

<Bob> An excellent analogy!

<Leslie> So the XmR chart is not the tool for this job. And if this is the only tool we have then we risk making a Type II error. Is that correct?

<Bob> Yes. We do still use an XmR chart first though, because if there is a big enough and fast enough shift then the XmR chart will reveal it.  If there is not then we do not give up just yet; we reach for our more sensitive shift detector tool.

<Leslie> Which is?

<Bob> I will leave you to ponder on that question.  You are a trained designer now so it is time to put your designer hat on and first consider the purpose of this new tool, and then create the outline of a fit-for-purpose design.

<Leslie> OK, I am on the case!
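Postscript: Leslie’s bank-balance analogy is easy to demonstrate with a toy example. This is a minimal sketch with made-up numbers: a sustained saving of half a sigma per month, which is swamped by noise on a point-by-point chart but accumulates steadily in the running balance. Whether this accumulation trick is the tool that Bob has in mind is left, as Bob intended, for the reader to ponder.

```python
import random

random.seed(1)

# Made-up monthly expenses: average 1000 with noise sigma 100, followed by
# a sustained saving of 50 per month (only half a sigma), which is far too
# small to trigger a red flag on a point-by-point chart of monthly figures.
before = [random.gauss(1000, 100) for _ in range(24)]
after  = [random.gauss(950, 100) for _ in range(24)]

baseline = sum(before) / len(before)   # the old average expenses

# The 'bank balance' view: accumulate each month's saving relative to the
# old average and watch the running total drift steadily upwards.
balance = 0.0
for month, expense in enumerate(before + after, start=1):
    balance += baseline - expense
    if month % 12 == 0:
        print(f"Month {month:2d}: cumulative saving = {balance:+8.1f}")
```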

Hypothesis: Chaotic behaviour of healthcare systems is inevitable without more resources.

This appears to be a rather widely held belief, but what is the evidence?

Can we disprove this hypothesis?

Chaos is a predictable, emergent behaviour of many systems, both natural and man-made, a discovery that was made rather recently, in the 1970s.  Chaotic behaviour is not the same as random behaviour.  The fundamental difference is that random implies independence, while chaos requires the opposite: chaotic systems have interdependent parts.

Chaotic behaviour is complex and counter-intuitive, which may explain why it took so long for the penny to drop.


Chaos is a complex behaviour and it is tempting to assume that complicated structures always lead to complex behaviour.  But they do not.  A mechanical clock is a complicated structure but its behaviour is intentionally very stable and highly predictable – that is the purpose of a clock.  It is a fit-for-purpose design.

The healthcare system has many parts; it too is a complicated system; it has a complicated structure.  It is often seen to demonstrate chaotic behaviour.

So we might propose that a complicated system like healthcare could also be stable and predictable. If it were designed to be.


But there is another critical factor to take into account.

A mechanical clock has only inanimate cogs and springs that obey the Laws of Physics – and they are neither adaptable nor negotiable.

A healthcare system is different. It is a living structure. It has patients, providers and purchasers as essential components. And the rules of how people work together are both negotiable and adaptable.

So when we are thinking about a healthcare system we are thinking about a complex adaptive system or CAS.

And that changes everything!


The good news is that adaptive behaviour can be a very effective anti-chaos strategy, if it is applied wisely.  The not-so-good news is that if it is not applied wisely then it can actually generate even more chaos.


Which brings us back to our hypothesis.

What if the chaos we are observing in our healthcare system is actually iatrogenic?

What if we are unintentionally and unconsciously generating it?

These questions require an answer, because if we are unwittingly contributing to the chaos then, with insight, understanding and wisdom, we can intentionally calm it too.

These questions also challenge us to study our current way of thinking and working.  And in that challenge we will need to demonstrate a behaviour called humility. An ability to acknowledge that there are gaps in our knowledge and our understanding. A willingness to learn.


This all sounds rather too plausible in theory. What about an example?

Let us consider the highest flow process in healthcare: the outpatient clinic stream.

The typical design is a three-step process called the New-Test-Review design. This sequential design is simpler because the steps are largely independent of each other. And this simplicity is attractive because it is easier to schedule so is less likely to be chaotic. The downsides are the queues and delays between the steps and the risk of getting lost in the system. So if we are worried that a patient may have a serious illness that requires prompt diagnosis and treatment (e.g. cancer), then this simpler design is actually a potentially unsafe design.

A one-stop clinic is a better design because the New-Test-Review steps are completed in one visit, and that is better for everyone. But, a one-stop clinic is a more challenging scheduling problem because all the steps are now interdependent, and that is fertile soil for chaos to emerge.  And chaos is exactly what we often see.

Attending a chaotic one-stop clinic is a frustrating experience for both patients and staff, and it is also a less productive use of resources. So chaos and cost appear to be the price we are asked to pay for a quicker and safer design.

So is the one-stop clinic chaos inevitable, or is it avoidable?

Simple observation of a one-stop clinic shows that the chaos is associated with queues – which are visible as a waiting room full of patients and front-of-house staff working very hard to manage the queue and to signpost and soothe the disgruntled patients.

What if the one stop clinic queue and chaos is iatrogenic? What if it was avoidable without investing in more resources? Would the chaos evaporate? Would the quality improve?  Could we have a safer, calmer, higher quality and more productive design?

Last week I shared evidence that proved the one-stop clinic chaos was iatrogenic – by showing it was avoidable.

A team of healthcare staff were shown how to diagnose the cause of the queue and were then able to remove that cause, and to deliver the same outcome without the queue and the associated chaos.

And the most surprising lesson that the team learned was that they achieved this improvement using the same resources as before; and that those resources also felt the benefit of the chaos evaporating. Their work was easier, calmer and more predictable.

The impossible-without-more-resources hypothesis had been disproved.

So, where else in our complicated and complex healthcare system might we apply anti-chaos?

Everywhere?


And for more about complexity science see Santa Fe Institute

It was the appointed time for Bob and Leslie’s regular coaching session as part of the improvement science practitioner programme.

<Leslie> Hi Bob, I am feeling rather despondent today so please excuse me in advance if you hear a lot of “Yes, but …” language.

<Bob> I am sorry to hear that Leslie. Do you want to talk about it?

<Leslie> Yes, please.  The trigger for my gloom was being sent on a mandatory training workshop.

<Bob> OK. Training to do what?

<Leslie> Outpatient demand and capacity planning!

<Bob> But you know how to do that already, so what is the reason you were “sent”?

<Leslie> Well, I am no longer sure I know how to do it.  That is why I am feeling so blue.  I went more out of curiosity and I came away utterly confused and with my confidence shattered.

<Bob> Oh dear! We had better start at the beginning.  What was the purpose of the workshop?

<Leslie> To train everyone in how to use an Outpatient Demand and Capacity planning model, an Excel one that we were told to download along with the User Guide.  I think it is part of a national push to improve waiting times for outpatients.

<Bob> OK. On the surface that sounds reasonable. You have designed and built your own Excel flow-models already; so where did the trouble start?

<Leslie> I will attempt to explain.  This was a paragraph in the instructions. I felt OK with this because my Improvement Science training has given me a very good understanding of basic demand and capacity theory.

[Image: excerpt from the model’s instructions]

<Bob> OK.  I am guessing that other delegates may have felt less comfortable with this. Was that the case?

<Leslie> The training workshops are targeted at Operational Managers and the ones I spoke to actually felt that they had a good grasp of the basics.

<Bob> OK. That is encouraging, but a warning bell is ringing for me. So where did the trouble start?

<Leslie> Well, before going to the workshop I decided to read the User Guide so that I had some idea of how this magic tool worked.  This is where I started to wobble – this paragraph specifically …

[Image: the confusing paragraph from the User Guide]

<Bob> H’mm. What did you make of that?

<Leslie> It was complete gibberish to me and I felt like an idiot for not understanding it.  I went to the workshop in a bit of a panic and hoped that all would become clear. It didn’t.

<Bob> Did the User Guide explain what ‘percentile’ means in this context, ideally with some visual charts to assist?

<Leslie> No and the use of ‘th’ and ‘%’ was really confusing too.  After that I sort of went into a mental fog and none of the workshop made much sense.  It was all about practising using the tool without any understanding of how it worked. Like a black magic box.


<Bob> OK.  I can see why you were confused, and do not worry, you are not an idiot.  It looks like the author of the User Guide has unwittingly used some very confusing and ambiguous terminology here.  So can you talk me through what you have to do to use this magic box?

<Leslie> First we have to enter some of our historical data; the number of new referrals per week for a year; and the referral and appointment dates for all patients for the most recent three months.

<Bob> OK. That sounds very reasonable.  A run chart of historical demand and the raw event data for a Vitals Chart® is where I would start the measurement phase too – so long as the data creates a valid 3 month reporting window.

<Leslie> Yes, I though so too … but that is not how the black box model seems to work. The weekly demand is used to draw an SPC chart, but the event data seems to disappear into the innards of the black box, and recommendations pop out of it.

<Bob> Ah ha!  And let me guess the relationship between the term ‘percentile’ and the SPC chart of weekly new demand was not explained?

<Leslie> Spot on.  What does percentile mean?


<Bob> It is statistics jargon. Remember that we have talked about the distribution of the data around the average on a BaseLine chart; and how we use the histogram feature of BaseLine to show it visually.  Like this example.

[Image: a BaseLine chart of weekly demand with its histogram]

<Leslie> Yes. I recognise that. This chart shows a stable system of demand with an average of around 150 new referrals per week and the variation distributed above and below the average in a symmetrical pattern, falling off to zero around the upper and lower process limits.  I believe that you said that over 99% will fall within the limits.

<Bob> Good.  The blue histogram on this chart is called a probability distribution function, to use the terminology of a statistician.

<Leslie> OK.

<Bob> So, what would happen if we created a Pareto chart of demand using the number of patients per week as the categories and ignoring the time aspect? We are allowed to do that if the behaviour is stable, as this chart suggests.

<Leslie> Give me a minute, I will need to do a rough sketch. Does this look right?

[Image: Leslie’s sketch of the cumulative demand chart]

<Bob> Perfect!  So if you now convert the Y-axis to a percentage scale so that 52 weeks is 100% then where does the average weekly demand of about 150 fall? Read up from the X-axis to the line then across to the Y-axis.

<Leslie> At about 26 weeks or 50% of 52 weeks.  Ah ha!  So that is what a percentile means!  The 50th percentile is the average, the zeroth percentile is around the lower process limit and the 100th percentile is around the upper process limit!

<Bob> In this case the 50th percentile is the average, it is not always the case though.  So where is the 85th percentile line?

<Leslie> Um, 52 times 0.85 is 44.2 which, reading across from the Y-axis then down to the X-axis gives a weekly demand of about 170 per week.  That is about the same as the average plus one sigma according to the run chart.

<Bob> Excellent. The Pareto chart that you have drawn is called a cumulative probability distribution function … and that is usually what percentiles refer to. Comparative Statisticians love these but often omit to explain their rationale to non-statisticians!
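For readers who would like to check the arithmetic, here is a minimal sketch in Python using made-up weekly referral counts with an average of about 150 and a sigma of about 20, chosen only to mirror the numbers in Leslie’s chart.

```python
import math
import random
import statistics

random.seed(42)

# Made-up demand history: 52 weeks of referrals, average ~150, sigma ~20.
demand = [round(random.gauss(150, 20)) for _ in range(52)]

def percentile(data, p):
    """The value below which p% of the weeks fall (nearest-rank method)."""
    ranked = sorted(data)
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]

print(f"Average weekly demand: {statistics.mean(demand):.0f}")
print(f"50th percentile      : {percentile(demand, 50)}")  # close to the average
print(f"85th percentile      : {percentile(demand, 85)}")  # roughly average + 1 sigma
```

With these illustrative numbers the 85th percentile lands near 170, which is where the estimates in the rest of the conversation come from.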


<Leslie> Phew!  So, now I can see that the 65th percentile is just above average demand, and 85th percentile is above that.  But in the confusing paragraph how does that relate to the phrase “65% and 85% of the time”?

<Bob> It doesn’t. That is the really, really confusing part of that paragraph. I am not surprised that you looped out at that point!

<Leslie> OK. Let us leave that for another conversation.  If I ignore that bit then does the rest of it make sense?

<Bob> Not yet alas. We need to dig a bit deeper. What would you say are the implications of this message?


<Leslie> Well.  I know that if our flow-capacity is less than our average demand then we will guarantee to create an unstable queue and chaos. That is the Flaw of Averages trap.

<Bob> OK.  The creator of this tool seems to know that.

<Leslie> And my outpatient manager colleagues are always complaining that they do not have enough slots to book into, so I conclude that our current flow-capacity is just above the 50th percentile.

<Bob> A reasonable hypothesis.

<Leslie> So to calm the chaos the message is saying I will need to increase my flow capacity up to the 85th percentile of demand, which is from about 150 slots per week to 170 slots per week. An increase of about 13%, which implies a 13% increase in costs.

<Bob> Good.  I am pleased that you did not fall into the intuitive trap that an increase from the 50th to the 85th percentile implies a 35/50 or 70% increase! Your estimate of 13% is a reasonable one.

<Leslie> Well it may be theoretically reasonable but it is not practically possible. We are exhorted to reduce costs by at least that amount.

<Bob> So we have a finance versus governance bun-fight with the operational managers caught in the middle: FOG. That is not the end of the litany of woes … is there anything about Did Not Attends in the model?


<Leslie> Yes indeed! We are required to enter the percentage of DNAs and what we do with them: do we discharge them or re-book them?

<Bob> OK. Pragmatic reality is always much more interesting than academic rhetoric and this aspect of the real system rather complicates things, at least for a comparative statistician. This is where the smoke and mirrors will appear and they will be hidden inside the black magic box.  To solve this conundrum we need to understand the relationship between demand, capacity, variation and yield … and it is rather counter-intuitive.  So, how would you approach this problem?

<Leslie> I would use the 6M Design® framework and I would start with a map and not with a model; least of all a magic black box one that I did not design, build and verify myself.

<Bob> And how do you know that will work any better?

<Leslie> Because at the One Day ISP Workshop I saw it work with my own eyes. The queues, waits and chaos just evaporated.  And it cost nothing.  We already had more than enough “capacity”.

<Bob> Indeed you did.  So shall we do this one as an ISP-2 project?

<Leslie> An excellent suggestion.  I already feel my confidence flowing back and I am looking forward to this new challenge. Thank you again Bob.

[Image: part of a schematic map of a complex adaptive system]

The theme this week has been emergent learning.

By that I mean the ‘ah ha’ moment that happens when lots of bits of a conceptual jigsaw go ‘click’ and fall into place.

When, what initially appears to be smoky confusion suddenly snaps into sharp clarity.  Eureka!  And now new learning can emerge.


This did not happen by accident.  It was engineered.


The picture above is part of a bigger schematic map of a system – in this case a system related to the global health challenge of escalating obesity.

It is a complicated arrangement of boxes and arrows. There are dotted lines that outline parts of the system that have leaky boundaries like the borders on a political map.

But it is a static picture of the structure … it tells us almost nothing about the function, the system behaviour.

And our intuition tells us that, because it is a complicated structure, it will exhibit complex and difficult to understand behaviour.  So, guided by our inner voice, we toss it into the pile labelled Wicked Problems and look for something easier to work on.


Our natural assumption that a complicated structure always leads to complex behaviour is an invalid simplification, and one that we can disprove in a matter of moments.


Exhibit 1. A system can be complicated and yet still exhibit simple, stable and predictable behaviour.

[Image: Harrison’s H1 sea clock]

The picture is of a clock designed and built by John Harrison (1693-1776).  It is called H1 and it is a sea clock.

Masters of sailing ships required very accurate clocks to calculate their longitude, the East-West coordinate on the Earth’s surface. And in the 18th Century this was a BIG problem. Too many ships were getting lost at sea.

Harrison’s sea clock is complicated.  It has many moving parts, but it was the most stable and accurate clock of its time.  And his later ones were smaller, more accurate and even more complicated.


Exhibit 2.  A system can be simple yet still exhibit complex, unstable and unpredictable behaviour.

[Image: a double compound pendulum]

The image is of a pendulum made of only two rods joined by a hinge.  The structure is simple yet the behaviour is complex, and this can only be appreciated with a dynamic visualisation.

The behaviour is clearly not random. It has structure. It is called chaotic.
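The double pendulum needs a numerical integrator to simulate, but the same point can be made with an even simpler system: the logistic map, a single one-line update rule (this substitute example is mine, not part of the original exhibit). The minimal sketch below shows the hallmark of chaos, sensitive dependence on initial conditions.

```python
# The logistic map: x -> r * x * (1 - x).  One line of 'structure',
# yet for r = 4 the behaviour is chaotic.
r = 4.0
a, b = 0.400000, 0.400001   # two almost-identical starting points

for step in range(1, 41):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {a:.6f}  b = {b:.6f}  gap = {abs(a - b):.6f}")
```

Two starting points that differ by one part in a million soon behave completely differently: deterministic and structured, yet unpredictable in practice, just like the pendulum.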

So, with these two real examples we have disproved our assumption that a complicated structure always leads to complex behaviour; and we have also disproved its converse … that complex behaviour always comes from a complicated structure.


This deeper insight gives us hope.

We can design complicated systems to exhibit stable and predictable behaviour if, like John Harrison, we know how to.

But John Harrison was a rare, naturally-gifted, mechanical genius, and even with that advantage it took him decades to learn how to design and to build his sea clocks.  He was the first to do so and he was self-educated so his learning was emergent.

And to make it easier, he was working on a purely mechanical system comprised of non-living parts that only obeyed the Laws of Newtonian physics.


Our healthcare system is not quite like that.  The parts are living people whose actions are limited by physical Laws but whose decisions are steered by other policies … learned ones … and ones that can change.  They are called heuristics and they can vary from person-to-person and minute-to-minute.  Heuristics can be learned, unlearned, updated, and evolved.

This is called emergent learning.

And to generate it we only need to ‘engineer’ the context for it … the rest happens as if by magic … but only if we do the engineering well.


This week I personally observed over a dozen healthcare staff simultaneously re-invent a complicated process scheduling technique, at the same time as using it to eliminate the queues, waiting and chaos in the system they wanted to improve.

Their queues just evaporated … without requiring any extra capacity or money. Eureka!


We did not show them how to do it so they could not have just copied what we did.

We designed and built the context for their learning to emerge … and it did.  On its own.

The ISP One Day Intensive Workshop delivered emergent learning … just as it was designed to do.

This engineering is called complex adaptive system design (CASD), and this one example proves that CASD is possible, learnable and therefore teachable.


Telling a compelling story of improvement is an essential skill for a facilitator and leader of change.

A compelling story has two essential components: cultural and technical. Otherwise known as emotional and factual.

Many of the stories that we hear are one or the other; and consequently are much less effective.


Some prefer emotive language and use stories of dismay and distress to generate an angry reaction: “That is awful we must DO something about that!”

And while emotion is the necessary fuel for action,  an angry mob usually attacks the assumed cause rather than the actual cause and can become ‘mindless’ and destructive.

Those who have observed the dangers of the angry mob opt for a more reflective, evidence-based, scientific, rational, analytical, careful, risk-avoidance approach.

And while facts are the necessary informers of decision, the analytical mind often gets stuck in the ‘paralysis of analysis’ swamp as layer upon layer of increasing complexity is exposed … more questions than answers.


So in a compelling story we need a bit of both.

We need a story that fires our emotions … and … we need a story that engages our intellect.

A bit of something for everyone.

And the key to developing this compelling-story-telling skill is to start with something small enough to be doable in a reasonable period of time.  A short story rather than a lengthy legend.

A story, tale or fable.

Aesop’s Fables and Chaucer’s Canterbury Tales are still remembered for their timeless stories.


And here is a taste of such a story … one that has been published recently for all to read and to enjoy.

A Story of Learning Improvement Science

It is an effective blend of cultural and technical, emotional and factual … and to read the full story just follow the ‘Continue’ link.