Farrar's Faucet: A psychologist’s candid, productive and often humorous take on principled business behavior and better business outcomes.

Why your 360 appraisal shouldn’t suck!

Around the world there are probably more than 20,000 professionals going through some kind of 360 degree performance review that I helped their employers design.

Obviously, I like multi-rater appraisals.  But, just like the airline employee who gets told everyone’s travel woes, I am told all the time about 360 processes that suck.  They shouldn’t.

When I hear 360s aren’t going well, I know it’s one of two things:  either there’s something flawed in the process, or the culture of the organization isn’t supportive of candid, constructive feedback.  This article is about what you can do to fix each of these.

A 360 degree review is when someone gets structured feedback on their performance from a variety of sources all around them…peers, internal and external customers, their managers, people they have worked with on projects, and their direct reports, just to name a few of the groups who can be involved.  Let’s call them the 3Cs:  Customers, Colleagues and Community.  All of them have a vested interest in recognizing the person’s strengths and helping them improve their performance.

Ideally, there are only two questions you need to ask:  “What does the person do well?” and “What are the person’s opportunities for improvement?”

That’s it.  Why?  Because the aim of the exercise is to find out what the person does well so we can recognize and reward good behavior, along with what the person could do better so we can help them improve to meet the reasonable expectations of the 3Cs.

Sometimes raters find it hard to structure their feedback in a way that is specific and relevant.  Asking more guided questions can make the feedback easier for raters to give and more useful to the person being evaluated.  Sometimes an organization will have a leadership model, management competencies, or specific promises that have been made to the 3Cs (like a “customer service pledge”).  If that’s the case, it makes sense to ask specifically whether these are things the person is doing well, or examples of things the person can usefully improve.

Basically, though, it still comes down to the same two questions: “What does the person do well?” and “What are the person’s opportunities for improvement?”

In terms of process it’s now possible to identify a few things that can make 360s suck.  If there are a bunch of questions that are onerous, intrusive or irrelevant to the raters, then the process sucks.  If it isn’t easy for the person rated to translate the feedback into actionable, meaningful performance improvements, then the process sucks.  If the person being rated is more concerned with the salary, bonus or promotion impacts of the process than with the opportunity to meet the reasonable expectations of their customers, colleagues and community, then the process sucks.

The other big problem with many 360 programs is that they are embedded in a culture that doesn’t support candid, constructive feedback.  The most common reason is that they are inappropriately tied to performance and/or salary reviews, so let’s deal with this first.

If I’m designing an ideal performance review, there are only two questions I would add to the two 360 review questions.  The first is “How did the person do compared to what they promised to do?”  The answer could be in the form of sales results, budget variations, project milestones or whatever.  The issue is “here’s what you said you would do, and here’s what you actually did”.  If your goals and reviews aren’t that specific, they’re probably useless.

Armed with the variance between promised and actual performance, you can now sit down with the person to review their results and look at consequences.  First, let’s hear what the person has to say, both about their actual versus promised performance and about what they think they do well.  Let’s confirm what they think, where we can, with the feedback from the 3Cs.  Then let’s talk about opportunities for improvement, using the 360 feedback and the person’s own views.

Now it is time to move on to the second question you can add to the 360 input at performance review time: “Is this how you want your career to go, and what can we do about it?”  With input that’s objectively sourced from all around the person, and a respectful consideration of what they want to get out of their job and career, it’s relatively easy to look at what kinds of training, development activities, rewards and recognition will best suit the individual and the organization.

That sounds like a lot of work, but it isn’t; and even if it is, it’s worth it.  Lack of effort is one of the biggest problems with 360 feedback, or any kind of performance review.  There are few things more debilitating for an employee than knowing there is a great big review coming up, only for it to be a one-off event followed by “business as usual”.

I have seen a lot of carefully thought-out review processes undone by the fact that once the feedback has been received, the notes, commitments and details go into a drawer.  They don’t get looked at again for twelve months, until the next anxiety-producing review session.

Another big organizational killer of 360 feedback (or any kind of performance review) is the tendency to personalize the material.  This takes two forms.  The first is where the providers of feedback don’t get confidentiality.  It should always be possible to give and take candid feedback.  It should go without saying that when you solicit feedback anonymously, the people providing it should be confident that their input will be kept confidential and that there will be no adverse consequences.  Without confidentiality where it’s appropriate, the feedback becomes self-serving.

The second kind of personalization that kills feedback is when the input from the raters focuses on the person rated rather than on their performance or abilities.  Sometimes people use the review process to “get back” at the person rated, or seize the opportunity to make the rated person look worse, thinking it makes the rater look better.

If the process is being run internally, the solution for both of these is for someone outside the feedback loop to moderate the input.  This can be done by merging and purging the data so that it doesn’t easily carry identifying material, or by going back to the rater and asking them to be less personal and more performance-focused in their comments.

This sort of process moderation is often carried out by functions such as HR or Quality Assurance.  Similarly, HR or the person’s manager should ensure that the person being rated doesn’t take the feedback personally, or disrespect the input by devaluing the people it comes from.  If an external coach or consultant is running the process, this is one of their essential functions.

In summary then, here are the keys to ensuring your 360 doesn’t suck: 

•    One questionnaire to the person’s customers, colleagues and community with as few as two questions, or as many as are specific, relevant and easy for raters to handle.
•    Confidentiality for raters.
•    One review at the end of each significant time period, achievement or milestone that adds a comparison of actual and promised performance, identification of what’s going well and what can be improved, and a discussion of career and job development.
•    Feedback formatted in a way that is actionable for the person rated.
•    Developmental 360 feedback separated from salary, bonus or promotion consequences so that the focus is on…development.
•    Accountability placed on the rated person to come up with an action plan, and follow up sessions to ensure that what is promised becomes what is delivered.

A good 360 degree performance review process focuses attention on what matters most for the person to meet the performance expectations of their customers, colleagues and community, and provides a supportive environment for that to happen.  That's what performance reviews should be all about.
