I’ve been developing and conducting training classes for years – never entire curricula, but individual classes like security awareness. In general I’ve been pretty successful, and I haven’t found it that difficult: explain the topic in an organized way, explain why certain things are the way they are, give some concrete examples, and most people get it.

Then I got the first dogs of my adult life, and learned to train them.  In many ways, training dogs is much more difficult than training people because there is no common language and dogs and people perceive the world in very different ways.  Now, before anyone gets offended, I’m not trying to compare people to dogs.  I am, however, trying to compare training methods – there are some interesting differences and similarities that are very educational, and training either species can have unintended consequences.

One of the most popular methods of training any species of animal is called clicker training. A clicker is just a small plastic device that makes a clicking noise. You associate that noise with a treat, and the animal (in this case a dog) learns that the noise means something good is about to happen. When the dog performs a desired behavior (like sit), you click at the moment it performs, and follow up with a treat. Because of the precision of clicking just as the behavior happens, the dog is clear on what you want, and learns a lot faster. In fact, most dogs figure it out pretty quickly and will start to “offer” the behavior in the hopes of more treats. This method is also used successfully with human athletes, like gymnasts and divers, who have to perform complex aerial moves, to help them understand when to start or end a tuck or a twist. The key message here is that immediate positive recognition for doing the right thing is the fastest way to ingrain a behavior – in any species.

The more interesting side of dog training is the unintended consequences. Unlike with humans, you can’t just explain to a dog what you’re after. You have to figure out how to guide (“lure”) the dog into doing what you want, but even then it might not understand. If it doesn’t, you have to wait around and let it do the behavior by itself, and “capture” the behavior by clicking and treating when it happens. The problem with luring and capturing is that sometimes you reward things that you didn’t mean to reward – thus the unintended consequences.

Here’s an example with my husband’s dog, Kozmo. We rented a house last year that was down the street from a school. Kozmo decided it was a good idea to get up at 7am, run into the yard, and start barking at the kids walking by. So every morning for about a week I got up when I heard him, went out with him, called him in when he started barking, and then went to the kitchen for a treat. By the end of the week, he stopped barking outside. But then he started doing something new. Every once in a while, he’d get my attention, and walk toward the dog door, ensuring that I was still watching. Then he’d rush outside, bark a couple times, rush back in, and go sit in the kitchen and stare at the treat cabinet. In short, I was trying to teach him “don’t go outside and bark” but he learned “If I go outside and bark when mom’s around and immediately come back in, I get food and attention.” To this day if he wants attention when we’re around, he’ll go outside and bark a few times, then come back into the house, expecting praise.

So what’s my point in all of this? When we collect metrics in the customer services space and use them for performance assessments, we are effectively training our employees – if you score well on the metrics, you get a raise. If you score poorly, you could get fired. But measuring the wrong things can have unintended consequences – we think we’re rewarding delivering good service, but we’re actually rewarding behaviors that deteriorate service.

A very common example is when we measure speed of service instead of quality of service. Speed is much easier to measure than quality, and it’s something that can be system generated: how many tickets closed per week, how many minutes spent on each call, etc. On the surface, it also makes sense: if we’re closing calls and tickets faster, we’re completing more calls and tickets sooner, so the customers aren’t waiting around for service, and that’s good! But what actually happens? If an employee gets a gold star for being the fastest, that individual will do their best to continue being the fastest – at the expense of the customer. The ticket will get closed with the work not completed, or the call will end and the customer still hasn’t received the help they needed, or they’ve been passed along to someone else – wasting both the customer’s time and the time of the person they were passed to. Meanwhile, the employee is getting rewarded for having been the fastest. Measuring speed without measuring the underlying quality has the unintended consequence of deteriorating service, when the intent is to improve it.

How do you measure quality in ways that reward good service?  More on that later…

About the Author: Ioana Bazavan Justus

