Assessing competence

Transcript from workshop in Lima, Peru (2002), published in Training Journal (2003)

1. Overview

On the face of it, eLearning has everything going for it: study any subject wherever and whenever you choose, work at your own pace, even explore with one click material held literally on the other side of the world. Audios, videos, case studies, simulations, interactive games – they’re all there, without even leaving your desk. No more inconvenient days away from your job and home at a four-star conference centre with Jacuzzi and pool . . . but that’s another debate altogether.

From an employer’s point of view, eLearning can be a highly attractive alternative to the classroom – flexible, cheaper to maintain and operate, and with the ability to centrally track and measure the progress of all remote learners. But here’s the snag – in a classroom, it’s the tutor who rates a learner’s ability to apply not only knowledge but also skill and judgement to complete a task ‘competently’. How sensitively did the learner ask the distressed colleague for an explanation in the role play? How confidently did they deal with the mock industrial accident?

These are not behaviours that are easily measured with eLearning. In fact, most computer assessments of ‘competence’ are simply quizzes which measure little more than information recall. Of course, there will always be parts of training that must be carried out on the job or in role play with opportunities for feedback and discussion. But if eLearning is to grow up and be taken really seriously then we must challenge the way we design and create self-study assessment tests.

To measure ‘competence’ we have to get the foundation right – defining the Learning Objectives.

‘Learning Objectives’ are as old as learning itself. Study without learning objectives may well result in learning but who will know how successful it has been? To be effective, learning objectives must be:

  • achievable using the materials available (or from previous experience)
  • unambiguous and measurable
  • and their achievement demonstrable by the learner

Consider paramedic training. A classroom session may have the learning objective:

‘To be able to rapidly locate and accurately measure the pulse of an infant’.

The training session might include the study of diagrams of the body, reading explanatory text, watching a live demonstration and practising the techniques under supervision. When the learner believes he is competent, he is assessed. Without any help from the trainer he demonstrates that he can satisfy the learning objective. Of course, in the classroom this works well since a human is assessing the learner’s performance, not an insensitive computer program.

However if the learning objective had been carelessly written it could be difficult or even impossible to reliably measure its achievement. An objective of ‘To be able to locate and measure the pulse of an infant’ says nothing about the speed or accuracy of its execution and is just not measurable.

In a classroom the trainer can use discretion to assess learner performance, but when we are designing training programs which will be tackled alone we face a tough challenge in constructing learning objectives that really are measurable. Quiz questions are easy to produce . . . True/False, multi-choice, matching items between lists, etc. . . . but these techniques measure only knowledge and not the wider skills of using judgement.

So what is competence?

If we can prove that we have satisfied a learning objective then we can claim that we are ‘competent’, at least in that one topic. In order to provide examples of measurable learning objectives, consider the wide range of knowledge and skills required by staff in an organisation, from trainee to expert. Let’s say there are four broad phases of development through which employees pass. If you are to train staff with an eLearning approach then each phase requires different types of learning objective and different measurement techniques to prove competence. These are referred to later as ‘competence phases’ and are supported by examples of possible measurement techniques.

Phase 1 Basic understanding of the organisation and their place in it

When an employee joins your company, he needs to understand the structure of the organisation, how the company runs and his own place in it, the marketplace it operates in, your products/services, your competition and much more.

Training at this stage is usually by an induction course. An eLearning program would be seeking to measure basic understanding and recall of facts.

Phase 2 Acquisition of the knowledge needed for their job

The employee then starts the task of learning their job. This may call for detailed technical product knowledge, getting to know suppliers and manufacturers and so on. It’s a process that never ends. Again, an eLearning program would essentially be measuring their recall of knowledge.

Phase 3 Development of skills to perform the job effectively (applying their knowledge)

Through all our years of primary and secondary education, we are absorbing information and most of the testing we face is simply to confirm that we can recall it. But for an employee, acquiring knowledge alone is of no value to an organisation – a database can do that. He must be able to use that knowledge to carry out tasks, in other words, to develop skills.

To measure how well skills have been acquired, the eLearning program needs to present tasks to be completed, rather than facts to be recalled.

Phase 4 Development of wisdom (applying the skills to maximum effect)

The highest level of performance concerns not just being very skilled at a task but in being able to use sound judgement in applying those skills. Such wisdom comes not from a training program but from long term practice and experience.

eLearning programs can provide cost-effective practice and, in some of the examples described below, may even be able to measure a learner’s judgement.

Matching competence measurement techniques with learning objectives

The scope of your learning objectives is limited by the types of measurement we can perform. PC specification, speed of connection, authoring system used, browser plug-ins, etc . . . all of these technical factors will affect what is possible.

a) Measuring Knowledge

This is the most straightforward to measure and eLearning tests typically include questions such as:

  • True/False and Yes/No
  • Multi-choice (one or several selections)
  • Match items in one list with items in another
  • Clickable images (selecting one or more ‘hot spots’)
  • Drag words/phrases to fill spaces in a text passage
  • Typed entry (free text words or phrases – great care is needed to accommodate mis-spelling and alternative terminology; see the sketch below)
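As an aside on typed entry: accommodating mis-spellings need not be onerous. The short sketch below (written in Python purely for illustration – the component names and the 0.8 similarity cut-off are invented, not drawn from any particular authoring system) accepts an answer if it is sufficiently close to any accepted spelling or alternative term:

```python
import difflib

# Accepted answers for one typed-entry question, including an
# alternative term; the component names are purely illustrative.
ACCEPTED = ["alternator", "generator", "dynamo"]

def is_correct(typed, accepted=ACCEPTED, cutoff=0.8):
    """Accept the entry if it closely matches any accepted term."""
    typed = typed.strip().lower()
    # get_close_matches tolerates minor mis-spellings via a similarity ratio
    return bool(difflib.get_close_matches(typed, accepted, n=1, cutoff=cutoff))

print(is_correct("altenator"))    # True – one letter out, still accepted
print(is_correct("carburettor"))  # False – a different component entirely
```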

Measurable learning objectives might look like this:

“List the key advantages of the model T4000 generator”
“Identify the six major isolation switches in an electrical sub-station”
“Name our five major UK competitors”
“Match customer complaints with the most appropriate action”

b) Measuring Skill

Many tasks require a degree of manual dexterity in handling components or equipment in order to prove competence. For example, the rapid cleaning and assembly of a rifle or the correct insertion of a drip into a patient. ‘Soft skills’ such as interviewing techniques require face-to-face practice and assessment. For these tasks, ‘blended learning’ recognises the fact that eLearning has its limits and will often be supported by traditional training sessions and workshops.

However there are techniques where eLearning allows the learner to practise performing tasks and for the program to track and measure their actions. A ‘task’ may be to carry out research and use various resources provided (such as referring to an electronic service manual or taking measurements) before taking the action they feel is most appropriate.

Designing such a measurable task requires far more effort than simply testing for knowledge. However, the completion of the task can itself be a learning activity provided that feedback and assistance are available within the program if the learner gets into difficulty or needs guidance. So the investment has a dual role: training and competence assessment.

Consider this example: A company provides roadside breakdown assistance to motorists and employs a large number of mobile service engineers. New staff need to be trained at their local depot using eLearning programs and they must pass an assessment of their fault diagnosis skills before going on the road.

A Service Manager’s ideal learning objective would be that the learner must ‘demonstrate that they can locate the cause of an engine misfire as quickly and as economically as possible’. The terms ‘quickly’ and ‘economically’ mean nothing as far as a program is concerned, so these must be specified for each task. An obvious electrical fault (a loose wire, for example) may not take a competent engineer more than 10 minutes to locate and fix, with no need for spare parts; conversely, an obscure problem in the engine management system might take an expert 45 minutes at a cost of £200 in parts. Some testing is required in order to ‘calibrate’ the training program with valid averages of time and cost for each task posed.
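One plausible way to perform that calibration (a sketch only – the benchmark figures and the 50 per cent tolerance margin are invented) is to average the times and costs of engineers already judged competent and set the pass thresholds a little above them:

```python
# Benchmark runs recorded from engineers already judged competent
# (figures invented for illustration).
benchmark_minutes = [9, 12, 10, 14]
benchmark_costs = [0.00, 25.00, 10.00, 15.00]

def threshold(samples, margin=1.5):
    """Pass threshold = benchmark average plus a 50% tolerance margin."""
    return margin * sum(samples) / len(samples)

max_minutes = threshold(benchmark_minutes)  # 16.875 minutes
max_cost = threshold(benchmark_costs)       # £18.75
```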

This type of task can be programmed using an interactive view under the bonnet of a vehicle, using a technology called ‘QuickTime VR’. The task facing the learner is to find out which component is causing an engine misfire. The learner may ‘zoom’ into the image of the engine and move their view freely around. Clickable ‘hot spots’ are placed on components such as spark plugs, HT leads, pump, generator, etc.

This exercise tracks the actual roadside task time spent (e.g. replacing a spark plug adds 8 minutes to the elapsed time for the job) as well as the cumulative cost of any parts used during fault-finding. The learner may select items, examine them visually, test them and replace them. Every action is logged to the progress database. The total cost of parts replaced is also shown.
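Behind the scenes, such tracking can be little more than a running log. A minimal sketch in Python – the action names, timings and costs are hypothetical, apart from the 8-minute spark plug example above:

```python
from datetime import datetime

# Simulated time and cost of each roadside action (hypothetical values,
# following the 8-minute spark plug example in the text).
ACTIONS = {
    "test spark plug":    {"minutes": 2, "cost": 0.00},
    "replace spark plug": {"minutes": 8, "cost": 4.00},
    "replace HT lead":    {"minutes": 5, "cost": 12.00},
}

class TaskSession:
    """Accumulates elapsed task time and parts cost, logging every action."""
    def __init__(self, learner):
        self.learner = learner
        self.minutes = 0
        self.cost = 0.00
        self.log = []  # full action history, for review or later playback

    def act(self, action):
        self.minutes += ACTIONS[action]["minutes"]
        self.cost += ACTIONS[action]["cost"]
        self.log.append((datetime.now(), action, self.minutes, self.cost))

session = TaskSession("learner01")
session.act("replace spark plug")     # adds 8 minutes and £4.00 to the job
print(session.minutes, session.cost)  # 8 4.0
```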

A learner who adopts a ‘replace it anyway’ attitude may fix the fault in a few minutes but at huge cost, destroying the profit margin on the job. So we can start to measure the quality of a learner’s performance, not just whether they are ‘right’ or ‘wrong’.

Figure 1 – Interactive Fault-finding task

Whilst principally being a measured assessment of skill (knowledge + judgement), this program could equally well serve as a training module. With the addition of a ‘Help me’ facility to give advice, prompts and interactive feedback, the exercise could be used to let learners ‘play’ safely and cost-effectively, see the outcomes of their decisions, restart the exercise (or a new one) and try again. When they feel they are ready to be measured, they run the exercise in ‘measurement mode’ and this time their actions are recorded and no assistance is given.

Tracking and reviewing task results

The data recorded for each test session may be passed to a learning management system (LMS) for review and comparison. Over time, a rich set of performance data will be accumulated which gives real insight into the skill level of staff. The data stored might look like this:
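As an illustration only – the field names are hypothetical and the figures simply echo the two learners discussed below:

```python
# Hypothetical records as they might be passed to an LMS; the field
# names are illustrative and the figures echo the two learners
# discussed below.
results = [
    {"learner": "Cray",  "attempt": 1, "fault_found": True,
     "minutes": 11, "cost": 176.00},
    {"learner": "Vannu", "attempt": 1, "fault_found": True,
     "minutes": 57, "cost": 4.00},
]
```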


The decision on whether a learner has achieved competence is made automatically by the program. For example, competence is reached if their best task attempt (maximum of 3 attempts allowed) was completed in under 20 minutes at a cost of no more than £25.
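Expressed as code, that decision is a simple rule over the learner’s recorded attempts. A sketch using the limits just quoted (the record layout matches the illustration above):

```python
MAX_ATTEMPTS = 3
MAX_MINUTES = 20   # 'quickly': under 20 minutes
MAX_COST = 25.00   # 'economically': no more than £25 in parts

def competent(attempts):
    """True if the learner's best attempt met both limits."""
    return any(a["fault_found"]
               and a["minutes"] < MAX_MINUTES
               and a["cost"] <= MAX_COST
               for a in attempts[:MAX_ATTEMPTS])

# Fast but expensive – fails on cost, just as 'Cray' does below.
print(competent([{"fault_found": True, "minutes": 11, "cost": 176.00}]))  # False
```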

So, in this example four individuals achieved competence. Student ‘Cray’ was fast but spent £176.00 whereas his more cautious colleague ‘Vannu’ spent the minimum in reaching the correct conclusion but took almost an hour doing it. Both failed to achieve the learning objective.

There is no limit to the data which may be recorded for such exercises – for example, the sequence in which components were examined and replaced, the elapsed time between actions, etc. We could even record and play back their session, showing every step they took.

This approach enables the Service Manager to set a measurable learning objective for what, on the face of it, may have seemed an elusive set of skills. Similar techniques could be devised for other types of skill training, practice and assessment, including the use of multimedia to enrich the experience and add even greater realism. The learner could actually listen to the engine misfire, as they would on a real call-out.

And after they had wasted 45 minutes changing the wrong components, why not add the voice of an angry customer demanding to know what’s going on?

c) Measuring Wisdom and Judgement

Our final category of ‘competence’ is concerned with the wisdom that an employee brings to a task. Given two employees with identical levels of skill and knowledge, one will outperform the other by using superior judgement. We would normally assess such judgement in the workplace or in role-playing training sessions. The question is: can we also create measurable learning objectives that assess judgement using eLearning?

Suppose that you operate a chain of brake and tyre replacement centres. You want to implement self-study training (eLearning) for trainee Area Managers to cut out the cost and wasted time of travelling to a central training centre. You put together an eLearning program rich in content . . . audio and video interviews, case studies, company rules, etc. But how do you measure their competence, at a distance? Consider the following as a possible electronic assessment of judgement for this job.

You present the individual with a ‘So what would you do?’ challenge which they must deal with. The program presents a scenario and a range of possible choices, some appropriate and some not. You will have set out in advance, in the marking system, what a ‘model’ response from an experienced Area Manager would be. The individual will earn marks for choosing a correct action in its correct sequence. They will lose points for selecting inappropriate actions and for repeatedly referring to the resources.

Here’s the scenario: “It’s 9.15 on your first morning as Area Manager for FlexiProd Ltd. There’s a telephone message from a worried Gary James, supervisor at the Southampton centre. A customer had two new tyres fitted on Saturday and last night a tyre burst causing her to swerve into a tree. Her husband, a journalist, has three cracked ribs and the car is a write-off. She is furious and has contacted the police, her solicitor and her insurance company. There is also a reporter and camera crew from TV SouthEast in reception asking for a live interview with you. So what would you do?”

Resources

The program provides pieces of ‘evidence’ which the individual can review. These could include the frantic call from the depot supervisor, a report on the tyre and a photo of the damage, the company’s policy on dealing with the press, etc.
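A marking scheme of this kind can be scored mechanically. The sketch below – in which the model actions, the penalty weights and the three ‘free’ resource look-ups are all invented for illustration – awards marks for model actions taken in their correct sequence and deducts points for inappropriate actions and for repeatedly referring to the resources:

```python
# The 'model' response in its correct sequence, plus clearly
# inappropriate actions – all invented for illustration.
MODEL_SEQUENCE = ["phone Gary James", "review press policy",
                  "brief head office", "meet the TV crew"]
WRONG_ACTIONS = {"give a live interview unprepared", "blame the fitter"}

def score(choices, resource_lookups, free_lookups=3):
    """Marks for model actions in sequence; penalties for poor judgement."""
    points, expected = 0, 0
    for choice in choices:
        if choice in WRONG_ACTIONS:
            points -= 2          # inappropriate action
        elif expected < len(MODEL_SEQUENCE) and choice == MODEL_SEQUENCE[expected]:
            points += 2          # correct action, in its correct sequence
            expected += 1
        elif choice in MODEL_SEQUENCE:
            points += 1          # correct action, but out of sequence
    # deduct for repeatedly referring to the resources
    points -= max(0, resource_lookups - free_lookups)
    return points

print(score(["phone Gary James", "review press policy"], resource_lookups=5))  # 2
```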