Why Coaching Isn't Improving Contact Centre Performance - And What to Look at Instead
- Graeme Colville
- Mar 10
- 6 min read
You increased the coaching. You ran the one-to-ones. You had the conversations, logged the sessions, and defended the investment upward when your manager asked why the numbers hadn't moved.
And the performance gap stayed exactly where it was.
If that situation feels familiar, it's worth asking a question most performance frameworks don't make space for: what if the coaching was never going to fix it?
Not because you're doing it wrong. Not because your team isn't capable. But because the problem was never a coaching problem in the first place.
The Contradiction That Doesn't Show Up in the Data
Most contact centre leaders who are asking why coaching isn't improving contact centre performance aren't asking because they've given up on development. They're asking because they've done everything the framework asked of them and the outcome still hasn't moved.
That's a specific and important kind of frustration. It isn't disengagement. It's the result of working hard in exactly the right way - for the wrong problem.
The data makes it worse. Your outcome metrics - service levels, handle times, first contact resolution - are all pointing at the same place. They're pointing at people. The agent is the most visible moving part in every measure, so every measure looks like a people problem.
That conclusion isn't irrational. It's what the data was designed to produce.
The issue is that outcome metrics tell you a problem exists. They cannot tell you what's causing it. And when every measure points at an outcome, every problem looks like a capability gap - so you coach. And the outcome doesn't move, because the work underneath it hasn't changed.
Why Coaching Isn't Improving Contact Centre Performance: The Layer Nobody Looks At
Beneath the outcome metrics your organisation tracks sits a layer that almost never appears in a report. It's the actual process your agents navigate on every contact.
Not the documented process. The real one.
The real process includes the steps that exist because they've always existed. The approval layers introduced after an incident that was resolved years ago. The handoffs that remove agent authority at the exact point where resolution is possible. The workarounds agents have developed because the official route doesn't work consistently.
Nobody designed it to be this way. It accumulated - one reasonable decision at a time - and nobody has ever looked at it whole.
The result is a process that asks agents to wait instead of decide. That generates escalations, handoffs, and repeat contacts - not because agents are incapable, but because the system has been built, unintentionally, to produce exactly those outcomes.
Coaching cannot fix a process that removes agent authority before they can use it. It cannot shorten a timeline that depends on another team. It cannot close a loop that the system was never designed to close.
When coaching isn't improving contact centre performance, the cause is almost always in this layer - not in the capability of the people being coached.
Why the Coaching Reflex Is So Hard to Question
It's worth being direct about why this pattern is difficult to see from inside it.
Coaching is visible. It produces records, logs, and evidence of activity. When a manager asks why performance hasn't improved, the coaching data is the answer you can point to. It demonstrates effort. It shows investment in the team. It gives you something to defend.
The structural cause doesn't produce the same kind of evidence. A process that slows agents down at the point of resolution doesn't appear on a dashboard. Decision latency - the time between an agent identifying the correct resolution and being authorised to deliver it - isn't tracked in most operations. The handoff that removes ownership isn't recorded as a failure. It just becomes the next contact, the next escalation, the next coaching conversation.
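Decision latency is rarely a stock report in contact centre platforms, but if your contact events carry timestamps it can be approximated directly. The sketch below is illustrative only: the event names (`resolution_identified`, `resolution_authorised`) and timestamps are assumptions, stand-ins for whatever your own event log records at those two moments.

```python
from datetime import datetime

# Hypothetical event log for one contact: (event_name, ISO timestamp).
# "resolution_identified" = the agent knows the fix;
# "resolution_authorised" = the system or approver lets them deliver it.
events = [
    ("contact_opened",         "2024-03-10T09:00:00"),
    ("resolution_identified",  "2024-03-10T09:04:30"),
    ("escalated_for_approval", "2024-03-10T09:04:45"),
    ("resolution_authorised",  "2024-03-10T09:21:15"),
    ("contact_closed",         "2024-03-10T09:23:00"),
]

def decision_latency_seconds(events):
    """Seconds between identifying the fix and being allowed to deliver it."""
    times = {name: datetime.fromisoformat(ts) for name, ts in events}
    return (times["resolution_authorised"]
            - times["resolution_identified"]).total_seconds()

# In this example the agent knew the answer for nearly 17 minutes
# before the process allowed them to act on it.
print(decision_latency_seconds(events) / 60)  # → 16.75 minutes
```

Averaged over a sample of contacts, that one number often makes the invisible structural cost visible for the first time.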
The system produces the performance gap. The metrics point at the person inside the system. The leader coaches the person. The gap remains. The cycle repeats.
This isn't a failure of leadership. It's a structural trap - one that's easy to stay inside because every step within it looks like the right thing to do.
What You Find When You Look at the Process Instead of the Person
Leaders who shift from managing people against targets to examining the system that produces the outcomes typically find the same thing: the process is more complex than the documentation suggests, and the complexity is concentrated at exactly the points where performance is weakest.
Steps that no longer serve a purpose but haven't been removed. Approval requirements that exist for risk reasons that don't apply to the majority of contacts. Queues that form because the handoff point doesn't have clear ownership. Agents who have developed workarounds that work most of the time - and generate escalations the rest of the time.
None of this is visible from the coaching conversation. All of it is visible the first time you map the actual process from observation rather than documentation.
That map changes the diagnosis. And a changed diagnosis leads to a completely different intervention - one that has a realistic chance of moving the outcome metric that coaching alone couldn't shift.
Three Questions Worth Asking Before the Next Coaching Cycle
Before scheduling the next round of performance conversations, it's worth stress-testing the diagnosis with three questions:
1. Is the performance gap consistent across agents or concentrated in specific contact types?
If the gap appears across agents with different experience levels and coaching histories, the cause is more likely structural than individual. A process problem produces consistent underperformance. A capability problem produces variable underperformance.
2. What happens at the point where performance breaks down?
Follow a contact through the actual process at the moment it goes wrong. Is the agent making a decision, or waiting for one? Do they have the authority to resolve the issue, or does it transfer? The answer to those questions tells you more about the root cause than any coaching record.
3. Have you mapped the real process - not the documented one?
The documented process is how the work was designed to run. The real process is how agents actually navigate it on every contact. The gap between those two things is often where the performance problem lives. If you haven't mapped it from direct observation, you're diagnosing from incomplete information.
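The first question above can be tested with data you already have. The sketch below compares the spread of first-contact-resolution rates across agents with the spread across contact types; if the gap varies far more by contact type than by agent, it is concentrated in the work, not the people. The agent names, contact types, and outcomes are invented for illustration.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical records: (agent, contact_type, resolved_first_time 0/1).
contacts = [
    ("amy", "billing", 0), ("amy", "faults", 1), ("amy", "sales", 1),
    ("ben", "billing", 0), ("ben", "faults", 1), ("ben", "sales", 1),
    ("cat", "billing", 1), ("cat", "faults", 1), ("cat", "sales", 1),
]

def resolution_spread(key_index):
    """Per-group FCR rates and how widely those rates vary."""
    groups = defaultdict(list)
    for row in contacts:
        groups[row[key_index]].append(row[2])
    rates = {k: mean(v) for k, v in groups.items()}
    return rates, pstdev(rates.values())

agent_rates, agent_spread = resolution_spread(0)   # group by agent
type_rates, type_spread = resolution_spread(1)     # group by contact type

# If contact type explains more variation than agent identity does,
# the evidence points at the process, not at capability.
print(type_spread > agent_spread)  # → True for this sample
```

In this toy sample, billing contacts underperform for almost everyone, which is exactly the signature of a structural constraint rather than a coaching gap.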

A Contained Starting Point
You don't need to redesign the operation to test this hypothesis. A contained observation exercise is enough to surface whether the structural layer is the real cause.
1. Recognition
Acknowledge that coaching hasn't moved the outcome metric - and question whether the root cause has been correctly identified. Effort spent on the wrong problem is not evidence of the right solution.
2. Investigation
Select one contact type where performance is consistently below target. Observe five to ten contacts of that type in real time - not through recordings, but through direct observation. Note every point where the agent waits, transfers, or cannot proceed without input from another team or system.
3. Redesign
Map what you observed against the documented process. Identify the steps that exist in practice but not in documentation, and the steps that exist in documentation but produce workarounds in practice. That gap is your structural diagnosis.
4. Reinforcement
Share the observation findings with the team before drawing conclusions. Agents will recognise the structural constraints immediately - they've been navigating them every day. Their input will sharpen the diagnosis and build the trust that coaching-only conversations have been eroding.
5. Measurement
Track whether targeted process changes - removing unnecessary steps, extending agent authority, clarifying handoff ownership - move the outcome metric that coaching couldn't shift. That sequencing is your evidence.
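The mapping step at the heart of this sequence (steps 2 and 3) is, at its simplest, a two-way comparison between the documented steps and the steps you actually observed. The sketch below shows the idea; every step name here is an invented example, not a real process.

```python
# Hypothetical step lists for one contact type.
documented = ["verify identity", "diagnose", "apply fix",
              "log resolution", "close contact"]
observed   = ["verify identity", "diagnose", "escalate for approval",
              "wait for callback", "apply fix", "close contact"]

# Steps that exist in practice but appear nowhere in the documentation.
undocumented_steps = [s for s in observed if s not in documented]

# Steps that exist on paper but never happened in observed contacts -
# usually the ones agents have built workarounds for.
unused_steps = [s for s in documented if s not in observed]

print(undocumented_steps)  # → ['escalate for approval', 'wait for callback']
print(unused_steps)        # → ['log resolution']
```

Those two lists, built from even a handful of observed contacts, are the structural diagnosis the article describes: the gap between the process as designed and the process as lived.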
This observation-led approach forms part of a structured intervention methodology built specifically for contact centre leaders.
The Bottom Line
Coaching is a valuable tool. It develops capability, builds confidence, and improves consistency. But it only works when the problem is a capability problem.
When coaching isn't improving contact centre performance, the most important question isn't how to coach better. It's whether you're looking at the right layer of the operation - and whether the process your agents navigate every day is designed to let them succeed.
Not sure if this is your dominant problem? The Find Your Loop diagnostic will identify it.