The effectiveness of assessing competence

by Jason Watson

How do you react to the thought of the annual performance review? Does it fill you with dread: the banality of it all, the mismatched competency framework that never really took into account what you or your team actually do?

I used to live in dread of those reviews during my corporate life, and the more I climbed the corporate ladder, the less connected the competency assessment seemed to be to what I actually did for a living.

Then I became senior enough to see the value of the data to the business, and I started to realize that my issue was not with the process itself; rather, it was with the capture mechanism and the lack of structure in the way competency frameworks are operationalized.

Let me explain my reasoning…

Consider your answers to these questions:

How helpful do you find current assessment techniques?

  1. Moderately unhelpful.
  2. Neither unhelpful nor helpful.
  3. Really helpful.

Or how about this one? Please rate how competent you are at surgery:

  1. I am a great surgeon.
  2. I am a fairly good surgeon.
  3. I am ok at surgery.
  4. I am not a great surgeon.
  5. I really suck at surgery.

And finally, my personal favourite: Team managers, please rate how good your team is at the following (usually just prior to a restructure):

  • Using Excel (rate 1 through 5) =
  • Presenting (rate 1 through 5) =
  • Customer care (rate 1 through 5) =
  • Managing others (rate 1 through 5) =

Now consider the value of the data you get back from such assessment techniques. In case you did not spot it, the first is a Likert-scale joke, the second is a really bad way of assessing whether the surgeon for your upcoming procedure is fit to do the job, and the third is a very lazy way of using, you guessed it, a Likert scale in disguise.

For the uninitiated: in 1932, the American social scientist Rensis Likert devised a psychometric scale that you will most likely be familiar with. The principle is that the scale presents a range of agreement with a posed statement or scenario, with responses running from Strongly Agree through Agree, Neutral, and Disagree to Strongly Disagree. The median response is to sit on the fence.

So, what’s the problem with Likert scales?

Correctly used, Likert scales are a fantastic tool, especially for measuring sentiment and satisfaction, but I take issue with their use when people are asked to rate their own ability, or when managers use them to rate their teams.

The core of the problem is entirely human. As an example, I am fairly good at what I do; however, I am also an overachiever, so if I were a surgeon I would tell you that I was a “fairly good surgeon”, or I might even respond that I was “ok at surgery”. Why? Because I need to overachieve and delight you with a better-than-expected result. In any case, it’s good that I don’t practice medicine, because no one would allow me to perform a procedure with those responses. But what if I were the best person to perform the operation and I wasn’t chosen because of the assessment mechanism and my answers?

Even worse, consider the inverse – what if I were an underachiever feeling pressured about my position and I overrated myself?
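
To make the distortion concrete, here is a minimal sketch in Python, using entirely made-up responses and the standard 1-to-5 Likert coding, of how two people with identical actual skill can produce very different numbers purely through response style:

    from statistics import mean

    # Classic 5-point Likert coding: ordered labels mapped to integers.
    LIKERT = {
        "Strongly disagree": 1,
        "Disagree": 2,
        "Neutral": 3,
        "Agree": 4,
        "Strongly agree": 5,
    }

    # Made-up self-ratings for "I am a competent surgeon" from two people
    # of identical actual skill: the modest overachiever and the pressured
    # underachiever described above.
    overachiever = ["Agree", "Neutral", "Agree"]
    underachiever = ["Strongly agree", "Strongly agree", "Agree"]

    for label, answers in [("overachiever", overachiever),
                           ("underachiever", underachiever)]:
        score = mean(LIKERT[a] for a in answers)
        print(f"{label}: mean self-rating = {score:.2f}")

    # Same underlying competence, very different numbers: the scale is
    # measuring response style, not ability.

Average either set and you get a tidy, comparable-looking number, yet nothing in it tells you who should actually be holding the scalpel.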

You get the point: do not use Likert scales or psychometric tests to assess competence. They are fundamentally flawed at getting you the data points you need to make an objective assessment of capability.

Is there a better way, then?

It really depends on what you want to measure about a person. Knowledge is relatively easy, with a vast variety of ways to test retention. But, to return to our surgeon example, would you trust the well-read medical student with a great aptitude for memory recall, or the surgeon who has successfully completed the procedure 500 times but cannot remember a single word of the theory they learned 20 years ago?

So where is all this leading?

In an ideal world I would test knowledge and observe skill, preferably in a real-world context, so I can judge effectiveness against the desired outcome. To make this less abstract, consider a salesperson. We can measure their knowledge retention about the solutions they sell, we can review the proposal they plan to use to pitch to the client, and if we have been using CRM to capture the right data, we can predict the likelihood they will close the deal and even when it might close.

But we still cannot objectively measure their competence in negotiating to a win-win outcome. Even though we understand all of the activities that lead to a great negotiated outcome, we still do not objectively measure those steps as part of either our performance reviews or our competency frameworks. Why? Because the measurement tools are wrong for the job, and we do not give the beleaguered team manager a framework to capture and manage the required data so that they can coach to better outcomes.

I have been grappling with that question my whole career and have invested in new Learning Management Systems and Sales Enablement platforms to help me solve it, but the simple answer is that not one of them even got close to what I wanted: a solution that can objectively measure competence, compare the results to a desired state, and link users to learning content to help close their individual gaps, whilst engaging their manager in the process to coach them to improve.
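
To sketch what I mean, here is a minimal illustration, with entirely hypothetical competency names, levels, and learning content, of the shape such a solution could take: evidence-based observations compared against a desired state, with any gap linked to content the manager can coach against:

    from dataclasses import dataclass

    @dataclass
    class Competency:
        name: str
        desired_level: int     # target proficiency, e.g. 1 (aware) to 4 (expert)
        learning_content: str  # where to send someone to close a gap

    @dataclass
    class Observation:
        competency: str
        level_demonstrated: int  # what was actually observed, same 1-4 scale
        evidence: str            # the real-world context it was observed in

    def gap_report(model, observations):
        """Compare the best observed evidence against the desired state."""
        best = {}
        for obs in observations:
            current = best.get(obs.competency, (0, "no evidence observed"))
            if obs.level_demonstrated > current[0]:
                best[obs.competency] = (obs.level_demonstrated, obs.evidence)
        for comp in model:
            level, evidence = best.get(comp.name, (0, "no evidence observed"))
            gap = comp.desired_level - level
            if gap > 0:
                print(f"{comp.name}: gap of {gap} -> assign {comp.learning_content}")
            else:
                print(f"{comp.name}: on target ({evidence})")

    # Hypothetical data for the salesperson example above.
    model = [
        Competency("negotiating to win-win", 3, "negotiation workshop and coached role-play"),
        Competency("solution knowledge", 2, "product certification module"),
    ]
    observations = [
        Observation("negotiating to win-win", 1, "observed client call: conceded on price early"),
        Observation("solution knowledge", 3, "passed product knowledge assessment"),
    ]
    gap_report(model, observations)

The point of the structure is that every number is anchored to something actually observed in a real-world context, not to a self-reported feeling, which gives the business an objective data point and the manager something concrete to coach.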

Plenty of people will help me write a competency model; however, few will enable me to measure it without reverting to subjective measurement scales. Therein lies the root of my issue: how do I objectively measure competence to support my restructuring activity? How do I improve time-to-performance for new hires, or reboard teams to align to the new GTM strategy quickly? How do I identify the best fit for my new urgent role or team from my existing talent pool, given that my measurements to date have all been subjective?

Too many times I have rolled out the latest competency model only to see it “die on the vine” as the latest initiative to be ignored by managers and teams, or, worse, iterated a perfectly sound model annually to try to grow adoption. I realize now that I have been trying to win the race on a hobbyhorse with a busted wheel, disadvantaged at every hurdle, because I wasn’t making the model relatable or relevant to the people being asked to measure against it. I was also using the wrong questions and the wrong scale.

If you recognize that competency models can add real value to your business, or you’ve invested in a great model but find it’s not yielding the expected results, or you are now realizing you have been measuring the wrong things, I would love to hear from you. We may even be able to show you a better way forward; after all, I think we are “pretty good” at it… see what I did there? I left room to overachieve.