Operational Performance Case Study

Designing a Continuous Improvement System in a 105-Person Organization

At a newly opened Chick-fil-A location projected to generate $12 million in annual revenue, executive leadership set a goal to reach Top 20% national customer experience performance. I identified that the store’s limiting factor was not effort or onboarding quality, but the absence of a structured feedback and ownership system capable of adapting behavior as operating conditions changed.
Chick-fil-A Singing Hills
105+ Employees
$12M Revenue Location
90-Day Redesign
Baseline OSAT
76%
Overall satisfaction was materially below the target benchmark despite strong engagement and effort across the team.
Baseline Accuracy
92%
Accuracy performance showed persistent execution gaps in a high-volume environment where small misses scaled quickly.
Baseline Cleanliness
84%
Cleanliness scores reflected the same broader pattern of inconsistent ownership and weak real-time adaptation.

Problem Definition

The initial assumption in leadership discussions was that the store needed more training depth. The intuitive answer was to extend onboarding or increase intensity. After observing shifts over the course of a week, I concluded that this framing was incomplete.

Team members were generally operating according to what they had been taught. The real failure point was what happened after onboarding. Once recurring mistakes, new constraints, or changing operating conditions appeared, the organization had no mechanism to update behavior in real time.

Core Insight

Training was static. Operations were dynamic. The organization lacked an agile feedback loop, and that structural mismatch created performance stagnation.

Constraints

Limited Onboarding Expansion

Onboarding time could not be significantly extended without reducing floor coverage and creating new operating pressure during active shifts.

High Volume Environment

The store’s throughput demands left little tolerance for experimentation that might reduce speed or guest experience in the short term.

Authority Boundaries

As a Shift Leader, I did not operate with the same formal authority as Directors, which meant the solution had to work within existing leadership structure.

No Existing Owner

There was no formal role responsible for continuous KPI improvement after onboarding was complete.

Root Cause Analysis

Training was structured almost entirely around the first two weeks of employment. After graduation, no person or team explicitly owned ongoing performance improvement tied to metrics. Shift leaders were expected to coordinate more than thirty team members in real time while also attempting to implement loosely defined improvement ideas discussed in leadership meetings.

That combination produced reactive correction instead of structured iteration. Problems were noticed, but they were not translated into a repeatable process for diagnosis, coaching, measurement, and adjustment.

The limiting factor was ownership, not motivation and not effort.

System Design

Rather than increase training volume or impose more accountability without structural clarity, I redesigned the Team Captain role. The objective was to convert training from a one-time onboarding function into a continuous improvement system embedded in day-to-day operations.

Raised Selection Standards

I defined minimum proficiency criteria and made role expectations more stringent so Team Captains would have the capability and credibility required for improvement work.

Clarified Accountability

The previous version of the role lacked hard ownership. I tightened scope and expectations so the function had a clear purpose tied to KPI movement.

Selected Advanced Captains Intentionally

Four advanced Team Captains were selected deliberately based on readiness for this expanded role rather than convenience or general availability.

Removed Role Ambiguity

Because selection and expectations were intentional and clearly defined, the redesign faced no pushback and corrected the ambiguity that had previously weakened the role.

Governance Model

I built a weekly Area of Opportunity framework tied directly to OSAT, accuracy, and cleanliness trends. This created a closed-loop operating model:

Step 01

Identify Constraints

Leadership surfaced recurring friction points, mistakes, and performance bottlenecks observed during operations.

Step 02

Prioritize Based on KPI Movement

I consolidated and prioritized the most important focus areas based on actual metric trends rather than broad discussion alone.

Step 03

Track Mistake Types

Mistake type tracking was used to identify root causes with more precision, replacing vague categories with actionable patterns.

Step 04

Translate Strategy into Action

I converted business needs into structured weekly action items for each Team Captain so the work could be executed consistently on shift.

Step 05

Coach and Document During Shifts

Team Captains carried out coaching and documentation efforts in the flow of operations rather than treating improvement as a separate abstract initiative.

Step 06

Re-Measure and Adjust

KPIs were reviewed weekly, and focus areas were updated based on results. The loop ran from data to structured action, to coaching, to re-measurement and adjustment.
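As an illustration only, the prioritization logic at the heart of this loop can be sketched in a few lines of Python. The target values, KPI names, and mistake labels below are hypothetical placeholders, not the store's actual benchmarks or data:

```python
from collections import Counter

# Hypothetical weekly targets; real benchmark values are not published here.
TARGETS = {"osat": 0.79, "accuracy": 0.95, "cleanliness": 0.79}

def prioritize(kpis: dict, mistakes: list, top_n: int = 2):
    """Rank focus areas by their gap to target, then attach the most
    frequent mistake types so coaching targets concrete behavior."""
    # Only KPIs below target become candidate focus areas.
    gaps = {k: TARGETS[k] - v for k, v in kpis.items() if v < TARGETS[k]}
    focus = sorted(gaps, key=gaps.get, reverse=True)[:top_n]
    # Mistake-type tallies replace vague categories with actionable patterns.
    common = Counter(mistakes).most_common(3)
    return focus, common

focus, common = prioritize(
    kpis={"osat": 0.76, "accuracy": 0.92, "cleanliness": 0.84},
    mistakes=["missing sauce", "wrong drink", "missing sauce", "cold fries"],
)
print(focus)   # KPIs ordered by largest gap to target
print(common)  # most frequent mistake types this week
```

The key design choice the sketch reflects is that focus areas are chosen by measured gap and paired with specific mistake patterns, rather than selected through discussion alone.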

Tradeoffs

The primary tradeoff was ownership density. In the long term, the goal was distributed continuous improvement. In the short term, the system required me to remain deeply embedded in prioritization, translation, and accountability.

This was not a set-it-and-walk-away intervention. It was a system that needed central guidance while new cultural and behavioral norms were being established.

I also chose not to extend onboarding duration despite the initial intuition to do so. That choice preserved floor coverage and avoided short-term guest experience degradation, even though it meant solving the problem through structural redesign rather than more training volume.

Iteration and Adaptation

Early in the initiative, I experimented with assigning broad focus goals to Team Captains. That approach failed. Captains understood the general objectives but struggled to convert abstract goals into concrete behavioral interventions. The limited prior experience of the workforce amplified this gap.

I responded by translating high-level goals into behavior-specific action items and adding midweek follow-up checkpoints. This made expectations operationally usable rather than merely conceptually correct.

Fragility in the Early System

The most fragile part of the early model was that it depended heavily on me to consolidate priorities, translate strategy into action, and enforce accountability.

That dependency was a deliberate tradeoff. I accepted central orchestration in the early phase so capability could be built before true decentralization was possible.

Results

OSAT: 76% → 86%

Overall satisfaction increased by ten points, finishing seven points above the Top 20% benchmark.

Accuracy: 92% → 95%

Order accuracy improved by three points, strengthening a critical execution metric in a high-volume environment.

Cleanliness: 84% → 88%

Cleanliness rose by four points, ending nine points above the benchmark and reinforcing the strength of the new ownership model.

Sustained Governance

More important than the metric lift itself, the governance structure for continuous improvement remained in place after the initial 90 day effort.

The intervention worked because it reframed the challenge from a training problem into an ownership and feedback system problem, then built a structure capable of improving behavior over time.

Limitations and Future Improvements

At Day 90, the system was not fully autonomous. It still depended heavily on my weekly prioritization and structured direction. If extended further, the next stage would focus on reducing central dependency while maintaining accountability clarity.

Formal Team Captain Curriculum

Create a more structured onboarding path for captains so the continuous improvement role can scale with less informal ramp-up.

Lightweight KPI Dashboard

Improve visibility and speed of decision making by reducing the time required to interpret metric movement week to week.
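As a rough sketch of what such a dashboard could compute, the snippet below derives week-over-week point changes per KPI and flags stalled metrics. The weekly numbers are hypothetical illustrations, not actual survey data:

```python
# Hypothetical weekly scores; a real dashboard would pull these from survey data.
WEEKS = {
    "week_1": {"osat": 76, "accuracy": 92, "cleanliness": 84},
    "week_2": {"osat": 79, "accuracy": 93, "cleanliness": 85},
}

def week_over_week(weeks: dict) -> dict:
    """Return the point change per KPI between the last two weeks,
    so metric movement can be read at a glance."""
    (_, prev), (_, curr) = list(weeks.items())[-2:]
    return {k: curr[k] - prev[k] for k in curr}

deltas = week_over_week(WEEKS)
for kpi, delta in deltas.items():
    flag = "focus" if delta <= 0 else "ok"
    print(f"{kpi:12} {delta:+d} pts  [{flag}]")
```

Even a minimal view like this would shorten the weekly interpretation step: flat or declining metrics surface themselves instead of being rediscovered in discussion.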

Quarterly Audit Cycle

Build a recurring review mechanism to test whether standards and governance habits remain durable over time.

Gradual Decentralization

Transition more prioritization responsibility to Team Captains as capability and confidence increase.

Systems Level Observations

Ownership Gaps Drive Performance Gaps

Poor outcomes are often caused less by weak motivation and more by unclear responsibility for improvement.

Instrumentation Reduces Guesswork

Tracking mistake types and KPI movement prevents overreaction and improves root cause visibility.

Governance Beats Pressure

Sustainable systems outperform pressure-based correction because they create repeatable feedback loops instead of short-term urgency.

Central Guidance Comes First

Early systems often require tight orchestration before distributed ownership can work reliably.

What This Demonstrates

This case study demonstrates systems analysis, process redesign, KPI-driven management, and the ability to identify structural bottlenecks behind performance problems. It shows an approach centered on diagnosing the real constraint, building clear ownership, and creating a governance model that can sustain improvement beyond a one-time push.