Sunday, February 23, 2014

The Big Batch Theory

Don't you love those moments when you make a connection between ideas that on the surface have nothing to do with one another?  I hate the term "aha moment" but it is appropriate in this case, and it does make me think of this hilarious Eddie Murphy moment (language slightly NSFW):


Okay, so anyway, there's an aha-moment insight to be gained from analyzing the batch sizes of the different types of work we do and discussing how lean principles such as pull, just-in-time, one-by-one flow, etc. can help us improve that work.  The following paragraphs will dive into three different work scenarios:  1) Big-Batch Production, 2) Learning in Big Batches, and 3) Big-Batch Root Cause Analysis.  Here goes...

1) Big-Batch Production

aka the "Yeah-I-Learned-That-In-Lean 101" example


This is the good old-fashioned example of the Welding Dept. in Ohio producing 10,000 widgets in one huge batch and shipping them to the Assembly Dept. in Michigan, which then finds that the whole batch is defective because of one inaccurate measurement or whatever.

The well-understood principle at play here is that we shouldn't push big batches, because doing so obscures the connection between customer demand and production.  It's the waste of overproduction, which in turn leads to myriad other wastes:  whole batches of undetected defects, out-of-control inventory, etc.  Instead of this big-batch push approach, we should strive for a target condition whereby downstream processes pull from upstream processes, in pursuit of a True North of one-by-one, just-in-time flow that perfectly matches production supply to customer demand.
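Just to make the arithmetic concrete, here's a quick back-of-the-napkin Python sketch (entirely made-up numbers, not from any real welding department) of how the size of the transfer batch determines how many widgets get built before Assembly has any chance to give feedback:

```python
# Back-of-the-napkin sketch with made-up numbers: a bad measurement creeps in
# at some random widget, and Assembly can only catch it once a full transfer
# batch has shipped from Welding. The bigger the batch, the more defective
# widgets pile up before anyone gets feedback.
import random

def defective_before_feedback(total_units, transfer_batch, defect_starts_at):
    """Count defective widgets produced before the first chance at feedback."""
    defective = 0
    for unit in range(1, total_units + 1):
        if unit >= defect_starts_at:
            defective += 1
        # Assembly only sees (and can flag) the problem when a shipped batch
        # contains at least one defective widget.
        if unit % transfer_batch == 0 and unit >= defect_starts_at:
            break
    return defective

random.seed(2014)
defect_at = random.randint(1, 10_000)   # the inaccurate measurement starts here
for batch_size in (10_000, 1_000, 100, 1):
    exposed = defective_before_feedback(10_000, batch_size, defect_at)
    print(f"transfer batch of {batch_size:>6}: {exposed:>5} defective widgets before feedback")
```

Nothing fancy, but it drives home that the cost of a big batch isn't just the inventory; it's the length of the feedback loop.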

Those ideas are part of the lean canon; no need to rehash them further.  However, there are so many other scenarios to which we could apply the same logic and benefit from the same lessons learned.  Unfortunately, based on my observations in healthcare anyway, we frequently fail to apply these lean principles to other types of work.  That's what I'll explore in the next two examples...

2) Learning in Big Batches

aka the "Intuitive-When-You-Think-About-It" example


So, applying the aforementioned lean principles of "pull, don't push" or "match supply with customer demand" and so forth, we can analyze our approach to learning.  Specifically, we can look at how we PI specialists teach Lean to the people in our organizations.

The Challenge...

I've been guilty pretty much my whole career of producing large batches of lean training.  I think just about every PI specialist has designed and/or delivered all-day or all-week workshops that cover a broad range of lean tools and principles.  It's pretty much de rigueur in the healthcare world to send Green Belt candidates off for weeks of training at a time.

This "big-batch learning" approach can be quite effective at building awareness of and excitement for the lean approach in general, but I've seen little evidence in my personal practice of this leading to a change in daily habits.  And just to be clear, my belief is that if daily habits don't change then it's extremely difficult to create a culture of continuous improvement.

But why doesn't the big-batch learning approach lead to habit-building?  It's the same fundamental principle at play as in the manufacturing example above:  we're obscuring the connection between supply and demand by pushing tools and principles via classroom training.  This pushing/overproduction leads to all sorts of learning waste:  defects (not remembering how to perform a certain technique because there was too much lag time between exposure and first real-world use), excess inventory (tools sitting on our mental shelf that may never be used), over-processing (e.g. using five types of graphs when one would do just fine), and on and on, ad nauseam.

The Idea...

Just as in the previous example, the countermeasure here is to establish a target condition of letting the learner pull* lean tools and principles in the course of their improvement work, in pursuit of a True North of just-in-time learning that perfectly matches the supply of learning with the demands of the situation.

* Actually, it's the gap between the target condition and the current condition that does the pulling.  The learner may not have the wherewithal to know when or what to pull.  This is where having coaches with plenty of coaching cycles under their belts is critical, as they have the pattern-recognition ability needed to be the "voice of the gap," so to speak.

It's interesting to think about why we so frequently resort to big-batch learning instead of the just-in-time approach.  I think a root cause might be that, most of the time, our PI folks don't have a systematic mechanism for delivering just-in-time learning the way we do for big-batch learning (i.e. classrooms, trainee rosters, syllabi, PowerPoint slides, group exercises, simulations, etc.).  Of course, the Toyota Kata system provides a highly effective mechanism for just-in-time learning, but there are significant barriers that prevent us from adopting this approach universally.*

*Hint:  it seems to come down to whether senior leadership can accept that the future is unknown and unknowable, and that certainty can only come from building strong, repetitive habits that allow us to cope with whatever change comes our way (Carol Dweck's book can provide more explanation).

3) Big-Batch Root Cause Analysis

aka the "Somewhat-Controversial-But-Profound" example


The same big-batch/small-batch, push/pull, supply/demand concepts from the previous two examples apply to the work of root cause analysis (RCA).  Just to state the obvious, as any PI specialist worth their salt knows, we should always strive to properly define problems and identify root causes prior to implementing countermeasures.  This is another sacred element of the lean canon that requires little explanation to PI specialists.

However, the way we go about performing RCA can vary widely.  We have an array of RCA tools at our disposal in several categories:  statistical (linear regression, ANOVA, etc.), practical (5-Why, Ishikawa diagrams, etc.), and empirical (hypothesis testing via PDSA).  We can use any combination of these tools to identify the root causes of our problem, which gives rise to significant variation in the way RCA is done in the world.
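Just to illustrate the statistical bucket (with completely invented numbers and a quick scipy call, not real patient data), a one-way ANOVA can tell us whether, say, average discharge delays really differ across three units before we spend a whole workshop hunting for causes there:

```python
# Throwaway illustration of the "statistical" RCA category, using invented
# discharge-delay numbers (minutes) for three hypothetical units.
from scipy import stats

unit_a = [42, 55, 47, 60, 51]
unit_b = [44, 49, 53, 48, 50]
unit_c = [75, 82, 69, 78, 80]

f_stat, p_value = stats.f_oneway(unit_a, unit_b, unit_c)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")

# A small p-value says the units differ; it tells us *where* to look, not
# *why*. The practical (5-Why) and empirical (PDSA) tools pick it up from here.
```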

Some interesting discussion regarding various approaches to RCA has been occurring recently on LinkedIn, including comments from the great Jeff Liker.  Here's the link to the discussion thread (you may need to be a member of the group).

The Challenge...

What this discussion thread and Professor Liker's coaching have forced me to do is think about how the RCA approach we select (the combination of statistical, practical, and empirical techniques we utilize) affects the batch size of the RCA work.  The challenge is to figure out what batch size of RCA work will yield the best results.  To examine this question, let's look at two extreme batch sizes:

An excessively big-batch approach to RCA work might look like this:  we get a bunch of folks together for an all-day workshop during which we use the wisdom of the crowd to map the current-state process, identify opportunities for improvement, define the problems in a discrete way, and use a combination of statistical and practical RCA techniques to break those problems down to root causes.  At that point, we're ready to hand off that big batch of work from the Plan phase to the Do phase of PDSA.

This approach has the advantage of being efficient from a facilitation standpoint, but it carries all the disadvantages of big-batch overproduction discussed above (especially defects, in the form of incorrectly identified root causes due to faulty assumptions, groupthink, etc.).

An excessively small-batch approach to RCA work might look like this:  after engaging the team to do a small bit of current-state analysis, we identify a few potential root causes.  We then select the one we think is the most likely culprit and start testing countermeasures using mini-cycles of PDSA.  If a countermeasure isn't effective at removing our hypothesized root cause, then we try other countermeasures one by one.  If we find that a countermeasure is effective at removing our hypothesized root cause but has no positive impact on the problem at hand, then we select other potential root causes one by one.  Lather, rinse, repeat.

This approach has all the advantages of one-by-one, just-in-time production as discussed above, but the significant disadvantage of being a nearly unmanageable process in the real world, due to the myriad factors that can distort, taint, delay, or otherwise invalidate our supposedly scientific experiments.
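To make that one-by-one logic concrete, here's a rough sketch of the loop in Python; the causes, countermeasures, and "experiments" below are hypothetical stand-ins for real current-state analysis and PDSA cycles:

```python
# Rough sketch of the small-batch RCA loop described above. The experiment
# callbacks are hypothetical stand-ins for real PDSA cycles and measurements.
from typing import Callable, Iterable, Optional, Tuple

def small_batch_rca(
    potential_root_causes: Iterable[str],
    countermeasures_for: Callable[[str], Iterable[str]],
    removes_cause: Callable[[str, str], bool],   # did this PDSA cycle remove the cause?
    problem_resolved: Callable[[], bool],        # did the overall problem actually improve?
) -> Tuple[Optional[str], Optional[str]]:
    """Test hypothesized root causes one by one with mini PDSA cycles."""
    for cause in potential_root_causes:              # most-likely culprit first
        for cm in countermeasures_for(cause):        # try countermeasures one by one
            if not removes_cause(cause, cm):
                continue                             # countermeasure didn't touch the cause
            if problem_resolved():
                return cause, cm                     # experiment confirms the root cause
            break    # cause removed but problem unchanged: move to the next hypothesis
    return None, None                                # lather, rinse, repeat with new hypotheses

# Toy walk-through with made-up data: the "real" root cause is missing supplies.
causes = ["unclear protocol", "missing supplies"]
cms = {"unclear protocol": ["rewrite checklist"], "missing supplies": ["set par-level restock"]}
state = {"fixed": False}

def run_pdsa(cause, cm):
    state["fixed"] = (cause == "missing supplies")   # pretend the experiment works only here
    return True

print(small_batch_rca(causes, lambda c: cms[c], run_pdsa, lambda: state["fixed"]))
# -> ('missing supplies', 'set par-level restock')
```

Of course, a sketch like this hides all the messy parts (the experiments themselves), which is exactly where the real-world difficulty lives.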

This is clearly a complicated challenge with no single answer.

The Idea...

It feels as if we will need to find the sweet spot between these two approaches in order to mitigate the risks of big-batch work while coping with the imperfections of real-world testing.  I think in the current condition, the majority of PI specialists tend to err on the side of big-batch RCA work as described a few paragraphs ago.  Those of us practicing the Toyota Kata method in a strict way probably err on the other side.  Let's find the sweet spot in the middle and move forward, shall we?

Wrap-up


Whether it's our process for producing widgets, teaching Lean, or performing a root cause analysis, we can benefit from understanding how the concepts of pull, just-in-time, one-by-one flow, etc. impact the waste level of our system.  In healthcare, we PI specialists tend to be good at understanding these concepts in the context of the typical clinical process (e.g. a nurse pulling meds from a Pyxis machine, triggering a replenishment from the Pharmacy, etc.), but not so good at applying them to our own work processes.  A true lean thinker is consistent in applying lean concepts to any process.

I shudder to think of how many of my own work processes are poorly aligned with lean principles. Yes, the learning continues.