There are a number of steps to think about when reviewing your assessment tools, whether they were purchased from a commercial provider or developed by your in-house gurus within the RTO. To assist with the review, I have developed a simple list of nine points to help you create your own success:
- Do you have evidence that you are assessing the whole unit? For example, can you show that you are meeting the performance criteria and the critical aspects of evidence, or, for the newer units, the Performance Evidence, Knowledge Evidence and Assessment Conditions?
This is often shown in a mapping document. Mapping is not mandatory, but the standards require you to demonstrate that your tools cover all areas of the unit, and mapping is the easiest way to do that. Do it to show yourself they are quality tools, not just for the auditor.
- Have the specific sections of each task been mapped? That is, can you show which part of Activity 1 covers which performance criteria? (A minimal sketch of what this looks like appears after this list.)
- Are the observable behaviours mapped?
This tests content validity. Is everything I am putting into this tool actually part of the unit? Marking someone against a criterion that is not in the unit falls outside the tool's validity.
- Is it assessed at the correct AQF level?
The AQF is now part of the legislated requirements, so assessment tools must be pitched at the correct AQF level. Tools pitched at the wrong level are a very common non-compliance.
- Have you validated with industry? Can you show that you have used actual workplace language, or demonstrated an understanding of the industry, so that the tools make sense? Simply run the assessment past someone in the industry and ask whether it is appropriate. Ask your contact what is happening in the industry, talk to people, find out how processes work, and make sure the tasks you are asking candidates to perform can actually be done.
- Are there clear instructions?
This means instructions for both the assessor and the student. Do they describe the conditions under which the assessment can be carried out: onsite or offsite, how long it runs, what happens if you don't observe everything within that period, what resources are required, and what support is allowable? Clear instructions make the tools reliable and fair.
- Are the observation tools appropriate?
Do the observation tools describe observable behaviours? So often I see a cut-and-paste from the elements and performance criteria, and so often these are not observable behaviours.
Each item must be DIRECTLY observable: you need to see the person do it. Each item also needs to focus on a single behaviour; you can't have two activities attached to one item.
- Are there descriptions of what constitutes satisfactory behaviour?
This supports both validity and reliability.
- Do they have clear marking guides?
Quality tools use benchmark criteria so that every assessor knows what the expected responses are. Yes, I know assessors come from industry, but there are so often grey areas. Considerations include: what constitutes a satisfactory response; if a product is required, what the specific requirements are; and what happens if someone performs only 7 of the required 12 items on the observation checklist. Can the observation be done in isolation, or could the requirement also be covered by answering a question? Good assessment tools have this detail in their marking guides. This provides reliability and fairness.
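
To make the mapping in the first points concrete, here is a minimal sketch of what mapping-document entries might look like. The activity names and criteria references are invented for illustration only; substitute the actual parts of your own unit:

- Activity 1, Question 3 → Performance Criterion 1.2; Knowledge Evidence point 2
- Observation checklist, item 5 → Performance Criterion 2.1; Performance Evidence
- Project report, section B → Performance Criteria 3.1 and 3.2

Entries at this level of detail show exactly which part of each task covers which requirement, which is what makes the mapping useful to you as well as defensible at audit.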