Unfortunately, PDSA can be deceptively tricky to execute properly. On the surface it looks simple: it's just a four-step routine. But performing PDSA properly requires a scientific mentality, and most folks are not trained to think about process problems scientifically.
Based on my experience on the front lines of process improvement in healthcare, here are the three most common PDSA mistakes I see:
3) Not stating a hypothesis in the Plan phase. Maybe the word 'hypothesis' throws people off, or maybe people don't want to admit that they are just making a prediction, i.e., aren't sure that their idea will work. Either way, it's extremely common to see people completely omit any sort of prediction or theory from the Plan phase of their PDSA. When I coach people on PDSA, I usually avoid the word 'hypothesis' and ask them to write an "If we do X, then we predict Y" statement instead.
2) Not being impartial about results in the Study phase. If results are better than expected, we are jubilant instead of curious. If results are worse than expected, we get discouraged and sweep it all under the rug, or we cherry-pick any and every positive indicator we can plausibly use. These behaviors are symptomatic of a non-scientific approach at best and a punitive organizational culture at worst. When I'm coaching, I emphasize that a good PDSA cycle has value in the learning it generates, independent of that cycle's results.
1) Not properly understanding the problem prior to PDSA. PDSA is a great approach for testing ideas for improvement, but not every idea should be tested. We need a set-up phase prior to PDSA that helps us define the problem on the surface, dig down to root causes, and use the insights we gain to develop good ideas for improvement that can be tested. Too often, we get so excited to implement change that we hurry past this set-up phase, which increases the risk of selecting the wrong idea to test. That isn't fatal, in that PDSA will reveal that it's the wrong idea, but we only have so much capacity for testing, so we need to be smart about which ideas we select. When coaching, I try to challenge our understanding of the problem, but in the end, I usually show a bias for action.