Concepts of 'feedback literacy' and 'evaluative judgement' can support assessment for learning strategies which actively seek to drive as well as measure student learning. Criteria and exemplars are used to support students to utilise feedback to improve, and to develop their own ability to judge the quality of their work.
At departmental and programme level, explicit criteria for marking and moderation must be developed and shared to support a clear and consistent approach to standards and grading for all assessments and reassessments (in line with the University’s principles of equity, openness, clarity and consistency). This is dealt with in section 14 of the Guide to Assessment, Standards, Marking and Feedback.
Beyond the goals of certification and grading, however, there is a need to acknowledge the ‘dual duty’ performed by assessment (Boud, 2000) and the major impact it has on learning: the messages it conveys to students about ‘what counts’ in the programme and discipline, and about what they need to do (and become) to succeed. This suggests that assessment should be designed to drive as well as measure student learning through an ‘assessment for, not just of, learning’ approach. A well-designed summative assessment can act as a driver for learning by making clear to students what the expectations are and how they should be demonstrated. Integration with formative activity and assessment, along with support and feedback on progress, can also provide students with a ‘roadmap’ for meeting these expectations and a means of monitoring their progress.
Two key concepts related to the development of assessment for learning are ‘feedback literacy’, described as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (Carless and Boud, 2018), and ‘evaluative judgement’, described as “the ability of students to assess the quality of their own work, and the work of others” (Ajjawi et al., 2018).
Both concepts embody the idea that assessment should support learning, and both highlight the need to develop students’ capacity to judge the quality of their own work and to make use of feedback information. These are seen as essential skills that need to be developed not just for success within the immediate context of a modular assessment and its relationship with the programme and discipline, but beyond the point of graduation.
Clear criteria and processes for assessment remain essential, but the focus extends beyond achieving validity and reliability of measurement and testing. Further consideration is given to how criteria are shared with students, to maximise their impact on learning and to promote students’ own capability to understand quality and assess their work. Criteria and feedback are not unidirectional and ‘given’ to students; instead, systematic opportunities are provided to develop students’ own ability to understand criteria in all their complexity and to judge the effectiveness of their own work appropriately. Such approaches are likely to involve activities such as rubric development, use of exemplars, self-assessment, peer assessment and feedback mechanisms, with a shift from ‘transmission’ uses of these activities to more active, dialogic approaches. In implementing them, a programme-level approach is recommended, systematically introducing and developing tasks of increasing sophistication through the stages of a programme.
Feedback literacy is framed primarily as an individual set of competencies that are likely to maximise a student’s ability to take action as a result of feedback (appreciating feedback, making judgements, and managing affect).
(Carless and Boud, 2018, p.1319)
Appreciating feedback
Feedback literate students:
understand and appreciate the role of feedback in improving work and the active learner role in these processes;
recognise that feedback information comes in different forms and from different sources;
use technology to access, store and revisit feedback.
Making judgements
Feedback literate students:
develop capacities to make sound academic judgments about their own work and the work of others;
participate productively in peer feedback processes;
refine self-evaluative capacities over time in order to make more robust judgments.
Managing affect
Feedback literate students:
maintain emotional equilibrium and avoid defensiveness when receiving critical feedback;
are proactive in eliciting suggestions from peers or teachers and continuing dialogue with them as needed;
develop habits of striving for continuous improvement on the basis of internal and external feedback.
Taking action
Feedback literate students:
are aware of the imperative to take action in response to feedback information;
draw inferences from a range of feedback experiences for the purpose of continuous improvement;
develop a repertoire of strategies for acting on feedback.
Carless and Boud (2018) identify two key activities which can provide a useful vehicle for the development of feedback literacy: peer feedback processes and analysis of exemplars. In both cases, they suggest that students need training and support to maximise the benefits of these activities, and that this should focus on dialogue about the processes and strategies of assessment and feedback, not simply on the specifics of particular pieces of work. They recommend starting such activities early and providing regular opportunities to engage in activities of increasing sophistication throughout a programme.
Evaluative judgement is described as “the capability to make decisions about the quality of work of self and others” (Tai et al., 2017, p. 471). Boud (2018) positions this as a skill that needs explicit attention, not just within the immediate context of a particular module assessment and its connection to the programme level, but as a fundamental capability beyond graduation.
Boud suggests that an integrated and staged approach is needed, involving both teaching and learning activities and assessment tasks, with sustained development beyond the module level. Underpinning this, he calls for greater recognition of the complex and tacit knowledge involved in assessment and its criteria: we should acknowledge this complexity, ‘let students in’, and develop more realistic and (where necessary) holistic criteria as a starting point for critical thinking, rather than attempting to represent quality and practice fully and explicitly, which can lead to atomisation and box-ticking approaches. Key suggestions include:
Identifying standards and criteria (students encouraged to develop their own ideas before being provided with definitive criteria)
Utilising criteria (regular opportunities to practise making judgements; tasks with increasing levels of sophistication; making space for discussion of nuances and complexity; referencing programme and stage outcomes)
Use of exemplars (incorporating dialogue about multiple contrasting examples; starting with more extreme examples working towards finer degrees of discrimination)
Self-assessment (over time and over multiple tasks)
Peer assessment (focused on giving rather than receiving feedback; formative and qualitative rather than focused on grading)
Incorporating prior self-assessments
Integrating feedback dialogue (feedback focused on calibration and the quality of students’ own assessment of their work; opportunities for students to communicate what they were aiming for beforehand and to discuss outcomes)
Including opportunities for post-feedback learning action planning
Section 14.1 of the Guide to Assessment, Standards, Marking and Feedback outlines the responsibilities at department, programme and module level related to outlining and refining standards. In many cases, this involves attempts to provide explicit statements of expectations mapped to grade bands, appropriate to the stage, and specific to the assessment format. These can then be used for calibration and moderation purposes across marking teams and for documenting standards and marking processes for external examination processes.
It is common practice to use such criteria sets to provide students with information about their assessment and about the standards and expectations by which their assessed work will be judged. Maximising the benefits of such criteria from an assessment for learning perspective is likely to involve a shift from ‘telling’ students what the criteria are towards activities that help them understand and apply the criteria to their own work. This can involve activities such as:
Opportunities for students to discuss and identify their own criteria before they are provided with definitive criteria sets
Opportunities to apply criteria to exemplars (presenting a variety of exemplars to avoid ‘model answers’, incorporating dialogue about multiple contrasting examples; starting with more extreme examples working towards finer degrees of discrimination)
Opportunities for formative activities informed by reference to the criteria (eg peer and self assessment activities)
Ajjawi et al. (2018) recommend that such activities are systematically embedded across programmes and attend to the processes and strategies of assessment and feedback rather than focusing exclusively on the specifics of particular pieces of work. This makes it more likely that such activities will raise awareness of the complexities involved in standards, promote critical thinking and avoid the potential dangers of ‘box-ticking’ approaches.
It can be difficult to find space for such activities within timetabled sessions; to maximise their potential benefits, ‘flipped learning’ approaches can be used. These may be particularly beneficial where exemplar or peer feedback activities involve time to read and consider pieces of work before discussing them in the light of the assessment criteria. The examples can be provided online within the VLE, supported by activities designed to structure engagement, such as online tests or reflective journal activities.
Formative assessments provide a key opportunity for students to explore and apply assessment criteria in a low stakes way to support them towards summative assessment. From an assessment for learning perspective, it might be worth incorporating activities with a focus on feedback literacy/evaluative judgement so that time spent marking and providing feedback on formative submissions can be focused towards the development of self-reliance, and the ability to use feedback and apply criteria to work. This is likely to involve dialogue rather than uni-directional feedback. Examples of possible approaches include:
Students attempt to apply the criteria to their own work and feedback is focused not only on the quality of the work in relation to the criteria, but also on the quality of the student’s own attempt to judge their work. This may be beneficial from an evaluative judgement development perspective for all students regardless of the quality of their submission.
A peer-feedback approach is incorporated with appropriate support for students in giving and receiving feedback.
Reflective activities focused on the process of carrying out an assignment as well as on the end product, and incorporating, as appropriate, self assessment processes, requests for feedback on targeted aspects, or an action planning phase following receipt of feedback.
Options to support feedback dialogue within the VLE include the use of the journal tool for long-term reflective dialogue between students and staff.
Depending on the needs of the assessment, a broad range of approaches is possible. For example:
Sethina Watson (History) replaced marking and return of written comments on submitted formative essays with a one-to-one face-to-face marking meeting focused on process as well as end product and incorporating a discussion about the criteria and the identification of actions for improvement.
Bill Soden (Education) provided screen recorded feedback talking students through aspects of their work which enabled him to provide more detailed feedback and provide a more personalised form of feedback to frame the affective impact of critical comments.
Case study: Screencast commentary for formative feedback
Feedback processes may be explicitly mapped to criteria statements to increase the efficiency, consistency and focus of feedback on summative work. The use of rubric tables and feedback forms can provide a framework for such activities. These can be incorporated into the workflow for submission, marking and return of anonymous summative submissions.
Exemplars can provide students with a rich source of information on standards and can form the basis of activities that help students understand and apply criteria to their own work. To avoid the danger of students perceiving exemplars as ‘model answers’ to be copied, it is likely to be useful to provide a range of exemplars that highlight the different approaches taken, with a framework for activities in which students analyse and compare different samples and apply the lessons learned to their own work.
It is important that permission is secured from students for their work to be used as exemplar material with future cohorts.
Ajjawi, R., Tai, J., Dawson, P. and Boud, D. (2018) Conceptualising evaluative judgement for sustainable assessment in higher education. In D. Boud, R. Ajjawi, P. Dawson and J. Tai (Eds.), Developing Evaluative Judgement in Higher Education. Abingdon: Routledge.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151-167.
Carless, D. and Boud, D. (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325.