About six months ago I had a meeting with a call center customer. The meeting was with the Customer Services Director and a consultant from a benchmarking company. The purpose of the meeting was to understand what metrics Spectrum should capture and display on the agents' desktop wallboards.
The benchmarking consultant stated that the desktop wallboard should contain customer satisfaction ratings, AHT, service level, abandoned calls as a percentage, ASA, and calls waiting (in queue). The consultant said agents would perform and meet the SLAs if they saw these metrics on the desktop. First Call Resolution data was not available, so it could not be displayed.
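For readers less familiar with these acronyms, here is a minimal sketch of how the queue metrics above are commonly computed. Note that exact formulas vary between call centers (for example, some count abandoned calls against service level); the `Call` record and function names are hypothetical, not anything from Spectrum's product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Call:
    wait_sec: float              # time in queue before answer (or before hang-up)
    handle_sec: Optional[float]  # talk + wrap-up time; None if the call was abandoned

def wallboard_metrics(calls: list[Call], sl_threshold_sec: float = 20.0) -> dict:
    """One common convention for ASA, AHT, service level, and abandon rate.

    Assumes at least one answered call; real systems guard empty intervals.
    """
    answered = [c for c in calls if c.handle_sec is not None]
    abandoned = [c for c in calls if c.handle_sec is None]
    return {
        # ASA: average speed of answer over answered calls
        "asa_sec": sum(c.wait_sec for c in answered) / len(answered),
        # AHT: average handle time (talk + wrap-up) over answered calls
        "aht_sec": sum(c.handle_sec for c in answered) / len(answered),
        # Service level: share of answered calls picked up within the threshold
        "service_level_pct": 100.0 * sum(
            1 for c in answered if c.wait_sec <= sl_threshold_sec
        ) / len(answered),
        # Abandon rate: share of all offered calls that hung up in queue
        "abandon_pct": 100.0 * len(abandoned) / len(calls),
    }
```

For example, four offered calls with one abandon and answer waits of 10, 30, and 5 seconds give an ASA of 15 seconds and a 25% abandon rate under this convention.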
From my point of view, it did not matter what the customer wanted to capture and display; I just wanted a happy customer using our product. However, I was concerned about the amount of data to be displayed (desktop real estate is limited) and the agent training that would be needed to explain what the metrics mean.
The customer decided to run a test. We designed two types of desktop wallboards with different content and assigned them to agents of various skill levels. (Later we found that more desktop designs were needed.) The customer took a baseline measurement and then measured again four weeks later. The setup, testing, and full results are too much to cover in this blog post, but the final results are worth mentioning.
- New agents did best with a minimal amount of data: Calls in queue and Oldest call waiting.
- Mid-level agents performed best with four metrics: Calls in queue, ASA, Oldest call waiting, and Service Level. Only a few of these agents wanted to see more or different metrics.
- Highly skilled agents improved most when they had performance metrics on their desktops. These metrics included: AHT, Occupancy, Abandon Rate, Service Level, and Calls Waiting. Some of these agents were actively asking for more data that would help them do a better job.
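The three tiers above amount to a simple configuration mapping: skill level in, list of displayed metrics out. A minimal sketch, assuming three tiers; the tier names and metric keys here are hypothetical placeholders, not actual product settings.

```python
# Per-tier wallboard content, mirroring the test results described above.
# Tier names and metric keys are illustrative only.
WALLBOARD_BY_TIER = {
    "new":      ["calls_in_queue", "oldest_call_waiting"],
    "mid":      ["calls_in_queue", "asa", "oldest_call_waiting", "service_level"],
    "advanced": ["aht", "occupancy", "abandon_rate", "service_level",
                 "calls_waiting"],
}

def wallboard_fields(tier: str) -> list[str]:
    """Return the metric fields to display for an agent's skill tier.

    Unknown tiers fall back to the least-data view, since the test showed
    that newer agents do worse with extra metrics.
    """
    return WALLBOARD_BY_TIER.get(tier, WALLBOARD_BY_TIER["new"])
```

The fallback choice reflects the test result directly: when in doubt about an agent's level, showing less is the safer default.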
This customer is continuing to test the desktop content to find out what works best for their call center and agents. My takeaways from this test were:
1. Not all agents should see the same content – duh!
2. Give your agents the information that will allow them to perform best for their skill level.
3. As your agents improve, change the desktop content to match their skills.
4. Some agents want the tools to work for them. Listen to these agents and accommodate the changes when it makes sense.
This test was by no means perfect; there were flaws in how things were set up, measured, and tested. But you cannot argue with the successes this call center experienced.