
The PDCA Cycle: Why One-Shot Improvements Always Fail

Monday, March 30, 2026

Here's a pattern that plays out in organizations every day: a team identifies a problem, brainstorms a solution, implements it, and moves on. Three months later, the problem is back — sometimes worse than before. The team is confused. "We fixed that." No, you didn't. You attempted a fix. You never verified it worked. You never standardized it. And you never adapted when conditions changed.

This is the failure mode of one-shot improvement. It treats process change as a single event instead of what it actually is: a cycle. The methodology that corrects this failure mode is PDCA — Plan-Do-Check-Act — and it is arguably the most important framework in all of process improvement. Not because it's complex. Because it's complete.

The Origin Story

PDCA traces back to Walter Shewhart, the father of statistical quality control, who proposed a linear sequence of specification, production, and inspection in the 1930s. W. Edwards Deming, Shewhart's protégé, refined and popularized the concept into a cycle — emphasizing that improvement is never finished, only iterated.

Deming brought this thinking to post-war Japan, where it became foundational to the Toyota Production System and the broader quality revolution that transformed Japanese manufacturing. The Japanese embraced the cycle so thoroughly that "PDCA" entered everyday business vocabulary. Managers didn't just use it for quality projects — they used it for everything. Meeting agendas. Training programs. Strategy deployment. The cycle became a way of thinking, not just a methodology.

Deming himself later preferred the term PDSA (Plan-Do-Study-Act), arguing that "Study" better captured the intent of the third step — deep analysis and learning, not just superficial checking. The distinction matters, as we'll explore below. But whether you call it PDCA or PDSA, the core logic is identical: systematic, iterative improvement driven by evidence.

The Four Steps

Plan

This is where most improvement efforts fail — not because teams skip planning, but because they plan the wrong things. Effective planning in the PDCA context means:

Define the problem precisely. Not "quality is bad" but "the defect rate on assembly line 3 has increased from 1.2% to 3.4% over the past six weeks, primarily driven by misaligned brackets on model X." Vague problems produce vague solutions. Use data. Be specific about what's happening, where, when, and how much.

Analyze root causes. Before proposing solutions, understand why the problem exists. This is where tools like the Five Whys, fishbone diagrams, and Pareto analysis come in. The temptation to jump straight to solutions is overwhelming — resist it. A well-diagnosed problem is half-solved. A poorly diagnosed problem generates solutions that address symptoms while the root cause continues operating underneath.
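
As a concrete illustration of Pareto analysis, here is a minimal sketch that ranks defect causes by frequency and flags the "vital few" responsible for roughly 80% of defects. The counts are invented for illustration.

```python
# Minimal Pareto-analysis sketch: rank causes by count and flag the
# "vital few" needed to reach ~80% of all defects. Counts are invented.
defect_counts = {
    "misaligned bracket": 212,
    "scratched housing": 96,
    "loose fastener": 41,
    "wrong label": 18,
    "missing gasket": 9,
}

total = sum(defect_counts.values())
cumulative = 0.0
for cause, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
    vital = cumulative < 0.80        # this cause is still needed to reach 80%
    cumulative += count / total
    flag = "  <- vital few" if vital else ""
    print(f"{cause:20s} {count:4d}   cum {cumulative:6.1%}{flag}")
```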

Develop a hypothesis. Frame your proposed change as a testable prediction: "If we replace the bracket jig with a redesigned version that includes a positive stop, we predict the misalignment rate will decrease from 3.4% to below 1.5% within two weeks." This isn't bureaucratic overhead — it's intellectual honesty. You're admitting you don't know for certain that your solution will work. You're setting up a test.

Design the test. How will you implement the change? On what scale? For how long? What data will you collect? What constitutes success or failure? Plan the measurement before you make the change, not after. Deciding what "good" looks like after seeing the results is not analysis — it's rationalization.
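
One lightweight way to enforce this is to write the plan down as a structured record before touching the process. Here is a sketch with a hypothetical schema, populated from the bracket example above; the essential point is that the success threshold is committed to before the test runs.

```python
# Hypothetical PDCA test-plan record: every field is fixed in the Plan
# phase, before the change is made. Values mirror the bracket example.
from dataclasses import dataclass

@dataclass(frozen=True)
class PdcaTestPlan:
    problem: str              # precise, data-backed problem statement
    hypothesis: str           # testable prediction
    metric: str               # what will be measured, and how
    baseline: float           # current performance
    success_threshold: float  # decided BEFORE the test, not after
    pilot_scope: str          # where the experiment is contained
    duration_days: int        # how long data will be collected

plan = PdcaTestPlan(
    problem="Misalignment on line 3 rose from 1.2% to 3.4% over six weeks",
    hypothesis="A jig with a positive stop cuts misalignment below 1.5%",
    metric="daily misalignment rate (%) from end-of-line inspection",
    baseline=3.4,
    success_threshold=1.5,
    pilot_scope="assembly line 3, day shift only",
    duration_days=14,
)
```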

Do

Implement the plan — but with a critical caveat: start small. PDCA is not a framework for organization-wide transformation in a single step. It's a framework for disciplined experimentation. Run a pilot. Test on one line, one shift, one team, one location. Contain the blast radius.

During the Do phase:

  • Execute the plan as designed. Don't improvise mid-stream — that contaminates the experiment. If you realize the plan has a flaw, note it, complete the cycle, and fix it in the next iteration.
  • Collect data as specified in the Plan phase. This is non-negotiable. Without data, the Check phase is just opinion.
  • Document what actually happened versus what was planned. Did the implementation go smoothly? Were there unexpected obstacles? Did people follow the new procedure, or did they revert to old habits? The gap between the plan and reality is itself valuable information.

The Do phase is deliberately limited in scope. You're not rolling out a permanent change — you're running a test. This distinction matters psychologically: people resist permanent changes but accept experiments. "Let's try this for two weeks and see what the data shows" gets far less pushback than "This is the new way we do things."

Check (or Study)

This is the step that separates PDCA from "try stuff and hope." After the Do phase, you stop and analyze:

Did the change produce the predicted result? Compare actual outcomes against the hypothesis from the Plan phase. If you predicted the defect rate would drop below 1.5% and it dropped to 0.8%, that's a win — but understand why it exceeded expectations. If it dropped to 2.9%, that's progress but not success — understand what limited the improvement. If it went up to 4.1%, that's critical information — something about your root cause analysis was wrong.

What did you learn? This is why Deming preferred "Study" over "Check." Checking implies a binary pass/fail. Studying implies understanding. Why did the results come out the way they did? Were there side effects — positive or negative — that you didn't anticipate? Did the change affect other processes, other metrics, other teams?

Was the data sufficient? Did you collect enough data over a long enough period to draw confident conclusions? A two-day test on a process with weekly variation patterns tells you almost nothing. Be honest about statistical significance. Control charts — which track process variation over time — are invaluable here. Two data points that look like improvement might just be normal variation.
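
For intuition, here is a minimal individuals (XmR) control-chart calculation. The daily rates are fabricated, and 2.66 is the standard XmR constant for deriving limits from moving ranges; a point inside the computed limits is most likely routine variation rather than a signal.

```python
# Minimal XmR (individuals) control-chart sketch. Daily rates are
# fabricated; 2.66 is the standard moving-range constant.
rates = [3.2, 3.5, 3.1, 3.6, 3.4, 3.3, 3.7, 3.0, 3.4, 3.5]  # % per day

mean = sum(rates) / len(rates)
moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * avg_mr             # upper natural process limit
lcl = max(0.0, mean - 2.66 * avg_mr)   # lower limit, floored at zero

print(f"centerline {mean:.2f}%, limits [{lcl:.2f}%, {ucl:.2f}%]")

new_point = 3.1
verdict = "routine variation" if lcl <= new_point <= ucl else "a real signal"
print(f"a reading of {new_point}% looks like {verdict}")
```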

The Check phase is where organizational learning happens. A failed experiment that generates genuine understanding is more valuable than a successful change that nobody understands. If you know why something worked, you can apply that knowledge elsewhere. If you don't know why it worked, you can't replicate it, and you can't tell when conditions change enough to invalidate it.

Act

Based on what you learned in the Check phase, take one of three actions:

Adopt. If the change worked as predicted and you understand why, standardize it. Write it into the procedure. Train everyone. Update the documentation. Make it the new baseline. But "standardize" doesn't mean "set and forget" — it means this becomes the starting point for the next improvement cycle.

Adapt. If the change partially worked or produced unexpected results, modify the approach and run another cycle. Maybe the jig redesign helped but the alignment issue has a second root cause you didn't identify. Adjust the plan, run another test. PDCA is iterative by design — partial success isn't failure, it's progress.

Abandon. If the change didn't work and the Check phase revealed that the underlying hypothesis was wrong, stop. Don't throw more resources at a flawed approach. Go back to the Plan phase with your new understanding of the problem and develop a different hypothesis. This isn't failure — it's the scientific method working as intended.

Why Organizations Get PDCA Wrong

PDCA looks simple on paper. Four steps, a circle, repeat. Yet most organizations that claim to use PDCA actually practice "PD" — they plan and do, but never check, and therefore never meaningfully act. Here's why:

The urgency bias. There's always another problem waiting. Once a change is implemented, the pressure to move on to the next issue is intense. Checking takes time, and that time feels unproductive compared to fixing the next thing. But skipping the Check phase means you never know if your changes actually worked, and you accumulate a portfolio of unverified "improvements" — some of which are actively making things worse.

The confirmation bias. When you've invested effort in a solution, you want it to work. So you interpret ambiguous data favorably. The defect rate dropped from 3.4% to 3.1% — "see, it's working!" Maybe. Or maybe that's normal variation and your change did nothing. Without rigorous analysis and sufficient data, you can't tell the difference.
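
A quick significance check keeps this honest. The sketch below runs a standard two-proportion z-test with made-up sample sizes (2,000 parts in each period); at that volume, a drop from 3.4% to 3.1% is statistically indistinguishable from no change at all.

```python
# Two-proportion z-test sketch: is 3.4% -> 3.1% a real change or noise?
# Sample sizes (2,000 parts per period) are hypothetical.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

z, p = two_proportion_z(68, 2000, 62, 2000)  # 3.4% before, 3.1% after
print(f"z = {z:.2f}, p = {p:.2f}")           # p ~ 0.59: no evidence of change
```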

The standardization gap. Even when changes work, organizations often fail to standardize them. The improved process lives in the heads of the people who were involved in the pilot. When they transfer, take vacation, or leave, the improvement evaporates. Standardization — documenting, training, integrating into management systems — is unglamorous work, and it gets skipped.

The single-loop trap. Some organizations do complete the full PDCA cycle but treat it as a one-time event. They go around the circle once and stop. The power of PDCA is in repetition: each cycle builds on the previous one, creating compounding improvement over time. One cycle might yield a 15% improvement; five cycles of 15% each compound to roughly 56%, since 0.85^5 leaves only about 44% of the original defect level. But only if you keep cycling.

PDCA at Different Scales

One of PDCA's strengths is its fractal nature — it works at every organizational scale:

Individual task level. A machinist adjusts cutting parameters (Plan), runs a part (Do), measures the result (Check), and decides whether to keep the new settings or try different ones (Act). This micro-PDCA happens dozens of times per day and is the foundation of craft skill development.

Project level. A process improvement team identifies a bottleneck (Plan), implements a solution (Do), measures throughput impact over four weeks (Check), and decides whether to standardize, modify, or abandon the approach (Act). This is the most common application of PDCA.

Strategic level. An organization sets annual improvement targets (Plan), deploys resources and initiatives (Do), reviews quarterly results against targets (Check), and adjusts strategy for the next period (Act). This is sometimes called "hoshin kanri" or strategy deployment, and it's PDCA applied to the organization's direction, not just its operations.

Nested cycles. Strategic PDCA contains project-level PDCA, which contains task-level PDCA. The quarterly strategy review (Check at the strategic level) examines the outcomes of dozens of project-level PDCA cycles. Each project-level cycle was informed by hundreds of task-level cycles. This nesting creates a coherent improvement system where tactical learning feeds strategic decisions and strategic direction guides tactical experimentation.

PDCA and Simulation

PDCA and simulation are natural partners, and the connection runs deeper than most people realize:

Supercharging the Plan phase. The biggest risk in traditional PDCA is that you plan a change, implement it, and discover through the Check phase that it didn't work — after spending weeks and disrupting operations. Simulation compresses the Plan phase by letting you test hypotheses virtually. Build a model of your process, simulate the proposed change, and examine the predicted impact before touching anything in the real world. You're not eliminating the Do-Check-Act phases — you're making the Plan phase dramatically more effective by arriving at the Do phase with a change that simulation has already validated.

Testing multiple alternatives. Without simulation, practical constraints limit you to testing one or two alternatives per PDCA cycle. Simulation removes that constraint. You can test ten different configurations in an afternoon, compare their predicted outcomes, and select the most promising one for real-world piloting. This accelerates the overall improvement trajectory by eliminating dead-end approaches before they consume real resources.
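
As a toy illustration of the idea (not ProcessModel's engine), the sketch below compares several staffing alternatives for a single queue in seconds of compute. All rates are invented, exponential times are assumed, and a real study would use a validated model built from measured data.

```python
# Toy single-queue simulation: compare staffing alternatives virtually.
# Arrival and service rates are invented; times are exponential.
import random

def mean_wait(servers, arrival_rate, service_rate, n_customers=50_000, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * servers          # when each server is next available
    t = total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)           # next arrival
        i = min(range(servers), key=free_at.__getitem__)
        start = max(t, free_at[i])                   # wait if all busy
        free_at[i] = start + rng.expovariate(service_rate)
        total_wait += start - t
    return total_wait / n_customers

for servers in (2, 3, 4):
    w = mean_wait(servers, arrival_rate=0.9, service_rate=0.5)
    print(f"{servers} servers -> mean wait {w:.1f} time units")
```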

Understanding system effects. Process changes rarely affect only the target process. Improving throughput at one station might create a bottleneck downstream. Reducing batch sizes might increase setup frequency. These system-level interactions are difficult to predict intuitively but straightforward to model in simulation. The Check phase becomes more insightful when you can compare actual system behavior against simulation predictions and understand the discrepancies.

Building the case for change. PDCA's Plan phase often includes convincing stakeholders to approve the experiment. Simulation provides the evidence: "Our model predicts this change will reduce lead time by 22% with no additional resources. We'd like to run a four-week pilot to verify." That's a much easier conversation than "We think this might help — can we try it?"

A Practical Example

Consider a hospital emergency department struggling with long patient wait times. Here's how PDCA — enhanced with simulation — might work:

Plan: Data shows average wait time is 47 minutes, with peak waits exceeding 90 minutes between 2 PM and 8 PM. Root cause analysis reveals that the bottleneck is physician availability during shift transitions. The hypothesis: staggering physician shift start times (instead of having all physicians change over simultaneously) will reduce peak wait times by 30%. A simulation model tests various staggering schedules and predicts that offsetting half the physicians' start times by two hours reduces peak wait time by 34%.
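
A back-of-envelope version of what that simulation checks: treat each hour as a balance of arrivals against physician capacity, with handing-off physicians briefly off the floor. Every number below (six physicians, arrival rates, handoff fraction) is invented for illustration.

```python
# Fluid-model sketch of the handoff dip: the queue grows in any hour where
# arrivals exceed treatment capacity. All numbers are invented.
ARRIVALS = [4] * 14 + [9] * 6 + [5] * 4   # patients/hour; peak 2 PM-8 PM
CAP_PER_DOC = 2                           # patients/hour per physician

def peak_queue(handoff_hours, docs=6, off_floor_frac=0.5):
    queue = worst = 0.0
    per_wave = docs / len(handoff_hours)  # physicians handing off together
    for hour in range(24):
        on_duty = docs
        if hour in handoff_hours:
            on_duty -= per_wave * off_floor_frac  # handoff pulls them away
        queue = max(0.0, queue + ARRIVALS[hour] - on_duty * CAP_PER_DOC)
        worst = max(worst, queue)
    return worst

print("all hand off at 3 PM:       peak queue =", peak_queue({15}))
print("half at 3 PM, half at 5 PM: peak queue =", peak_queue({15, 17}))
```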

Do: The staggered schedule is piloted for four weeks. Wait time data is collected continuously. Staff satisfaction surveys are administered weekly. Physician handoff quality is tracked to ensure the new schedule doesn't create communication gaps.

Check: Analysis shows peak wait times decreased from 92 minutes to 58 minutes — a 37% reduction, slightly better than predicted. However, early morning wait times increased by 8 minutes because fewer physicians are available at the new early-start time. Staff satisfaction is neutral — some prefer the new schedule, others don't.

Act: The staggered schedule is adopted as standard. The early morning wait time increase is noted and becomes the focus of the next PDCA cycle — perhaps adding a part-time physician for the first two hours of the day. The simulation model is updated with actual data to improve its accuracy for future experiments.

Then the cycle repeats. And repeats. And repeats. Each time, the process gets a little better, the model gets a little more accurate, and the team gets a little more skilled at systematic improvement.

Getting Started with PDCA

If your organization doesn't currently practice PDCA, here's how to begin:

Start with a real problem. Don't do a "practice PDCA" on a trivial issue. Pick something that matters — a quality problem, a delivery issue, a cost driver. The methodology only demonstrates its value when applied to real stakes.

Keep the first cycle small. A two-week pilot on one process, with clear data collection and a scheduled review. The goal isn't to solve everything — it's to demonstrate the discipline of completing the full cycle.

Document everything. The Plan (including hypothesis). The Do (what actually happened). The Check (data, analysis, conclusions). The Act (what you decided and why). This documentation is how organizational learning accumulates. Without it, insights are lost when people move on.

Resist the urge to skip Check. When the Do phase is done, you'll feel the pull of the next problem. Fight it. The Check phase is where you learn whether your mental model of the process matches reality. That learning is more valuable than any individual improvement.

Celebrate learning, not just results. A PDCA cycle that disproves your hypothesis is not a failure — it's a cycle that prevented you from standardizing a change that doesn't work. That has enormous value. Punish failed experiments and people will stop experimenting. Celebrate the learning and people will keep improving.

The PDCA cycle isn't a technique for experts. It's a thinking discipline for everyone. Plan what you'll change and why. Do it on a small scale. Check whether it actually worked. Act on what you learned. Then do it again. It's the simplest framework in process improvement, and when practiced faithfully, it's the most powerful. The circle never ends — and that's the point.

PDCA works best when you can test changes before committing. ProcessModel lets you simulate your process, experiment with improvements in a risk-free environment, and validate results — turning the 'Plan' step from guesswork into evidence.

Plan Smarter with Simulation