Chapter 5 - Marking

  1. Marking
    1. Principles for Marking Assessments
      1. Colleges should develop an approach to marking assessments that is consistent with the following principles:
        1. All marking must be based on the quality of students’ work and be free from bias or prejudice (see 5.3).
        2. No module’s marking should rely solely on the judgement of one marker.
        3. All summative assessment must be subject to moderation.
        4. Where anonymity of candidates cannot be assured, double blind marking should be applied to a sample.
        5. All Colleges must publish marking criteria for all assessment.
        6. The relevant marking criteria must be applied consistently.
        7. Staff must be willing to use the whole range of marks when marking assessments. Where a marking scheme that does not use the full scale of marks is introduced, this must be clearly communicated to students.
    2. Pass Mark for Individual Modules
      1. The pass mark for individual modules is specified below. Modules failed at any level will normally need to be condoned or referred, as outlined in Chapter 11 - Consequences of failure in assessment.
        1. The pass mark for individual modules at Levels 3-6 is 40%. Marks below 40% constitute failure.
        2. The pass mark for individual modules at Level 7 is 50%. Marks below 50% constitute failure.
      2. Where a student on an undergraduate programme is taking a module at Level 7 the module must be marked according to the normal postgraduate marking criteria for the module and the marking scheme for postgraduate modules.
      3. Where a student on a postgraduate programme is taking a module at Level 6 or below, the module must be marked according to the normal undergraduate marking criteria for the module and the marking scheme for undergraduate modules.
      4. The mark obtained must be used in the calculation of the credit-weighted mean for the programme as a whole (i.e. there must be no ‘scaling’ of marks).
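The credit-weighted mean referred to above can be sketched as follows. This is a minimal illustration only: the function name and data layout are not part of the Handbook, and the comment about scaling restates rule 5.2.4.

```python
def credit_weighted_mean(modules):
    """Compute the credit-weighted mean of raw module marks.

    `modules` is a list of (mark, credits) pairs. Raw marks enter the
    calculation directly: the Handbook requires that marks obtained at
    a different level are not 'scaled' before being averaged.
    """
    total_credits = sum(credits for _, credits in modules)
    return sum(mark * credits for mark, credits in modules) / total_credits
```

For example, a 30-credit module marked at 60 and a 15-credit module marked at 50 yield (60x30 + 50x15) / 45.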
    3. Anonymity
      1. The most effective means of demonstrating that marking is free from bias or prejudice is to ensure that students’ assessment is anonymous. All assessments should be anonymous. However, the University recognises that this is not always practically possible. Where assessment cannot be anonymous, Colleges must ensure, and be able to demonstrate, that marking is fair, reliable, consistent and transparent. Students must be fully informed of the marking criteria and processes.
    4. Moderation and Sampling
      1. Moderation is the process used to assure that assessment outcomes are fair and reliable, and that assessment criteria have been applied consistently. Any moderation method must be proportionate to ensure fairness, reliability and consistent application of the criteria.
      2. It is not always necessary for all work to be moderated. In many circumstances, it is sufficient for a sample of assessments to be moderated. Where multiple markers are used to mark a batch of assessments, sampling should be undertaken with regard to each marker rather than with regard to the whole batch of assessments. A number of approaches to moderation can be applied, all of which may be undertaken on a sample only:
        1. Double blind marking: where a piece of work is marked independently by two markers, who then agree a final mark for the assessment. Neither marker is aware of the other’s mark when formulating their own.
        2. Double open marking: where a piece of work is marked by two markers, who agree a final mark for the assessment.
        3. Calibration of marking within teams of multiple markers, in advance of team members marking their own batch of assessments. Calibration involves the scrutiny of a sample of submissions being graded by all markers collectively. The sample should be sufficient in number to ensure the grading approach being taken by all markers is consistent. Following calibration processes, the subsequent moderation processes may be limited to scrutinising (i) submissions that are borderline (e.g. within 1% of a class boundary), and (ii) other submissions considered to be in need of moderation by the module lead.
        4. Check marking: where an assessment is read by a second marker to determine whether the mark awarded by the first marker is appropriate.
      3. Where double marking or check marking is applied as the method of moderation, the marking team should agree a final set of marks for the whole cohort. If they cannot agree a final mark, a third marker should be used to adjudicate an agreed mark.
      4. These processes should also identify the marking patterns of individual markers to facilitate comparisons and identify inconsistencies.
      5. Where model answers are agreed by staff marking assessments, it is allowable for these assessments not to be moderated. However, the model answer must be reviewed and agreed by at least two markers in advance.
      6. Sampling: it is appropriate for sampling to be applied to all the methods of moderation set out above. Where sampling is employed, the following must be adhered to:
        1. The sample must be representative and cover the full range of marks;
        2. The sample must be sufficient to assure the APAC and External Examiner(s) that the requisite academic standards have been maintained, and that all marking is fair, reliable and valid (i.e. free from bias or prejudice, based on the quality of students’ work, and consistent with the relevant marking criteria);
        3. APACs and External Examiners must be informed of the methodology (or methodologies) by which assessments are selected for internal moderation, so they can advise on its sufficiency and appropriateness.
        4. The following should be adhered to:
          1. The sample should not be the same sample as used in external moderation;
          2. The selected sample should be proportionate to the risk to standards posed by each module/assessment, bearing in mind the credit-weighting of the assessment, the experience of the primary marker, and historic trends, such as whether the module or assessment is new or has recently changed in structure/format, or whether marks have previously had to be adjusted as a result of moderation/scaling;
          3. Where responsibility for assessing full submissions (as opposed to selected sections/questions) is distributed amongst a team of multiple markers, marking calibration processes should occur in advance of each marker marking their batch of assessments, in the following circumstances: a new team (or team member) is undertaking the marking, the form of assessment is new, and/or the module is new (or significantly revised);
          4. Where possible, the sample should include at least one item marked according to the marking guidelines for specific learning difficulties.
          5. Where a cohort includes a submission(s) made via an alternative form of assessment (as per the Inclusive Practice within Academic Study policy), the sample should include at least one alternative assessment item.
        5. Below is one suggested approach to sampling that may be adopted:
          1. For modules where there is only one primary marker: at least XX% or a minimum of XX (whichever is greater) of the submitted assessments, up to a maximum of XX submissions in total. (E.g. (a) at least 10% or a minimum of 10 (whichever is greater) of the submitted assessments should be moderated, up to a maximum of 25 submissions in total; or (b) at least 5% or a minimum of 5 (whichever is greater), up to a maximum of 15 submissions in total.)
          2. For modules where multiple markers are used to mark a batch of assessments, sampling should be undertaken as above with regard to each marker rather than with regard to the whole batch of assessments. (This does not apply (i) where each member of the marking team takes responsibility for marking specific sections/questions, in which case standard sampling should be undertaken as above, or (ii) where marking calibration processes are undertaken in advance of team members marking their own batch of assessments.)
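The worked illustration (a) above can be expressed as a small calculation. The percentage, minimum and cap below are taken from that illustration only (the Handbook leaves the actual figures as XX), and the function name is hypothetical.

```python
import math

def moderation_sample_size(cohort_size, pct=0.10, minimum=10, maximum=25):
    """Return the number of submissions to moderate for one marker.

    Implements illustration (a): at least 10% of submissions or a
    minimum of 10, whichever is greater, capped at 25 in total.
    The result can never exceed the cohort itself.
    """
    target = max(math.ceil(pct * cohort_size), minimum)
    return min(target, maximum, cohort_size)
```

So a cohort of 300 yields the cap of 25, a cohort of 50 yields the minimum of 10, and a cohort of 6 yields all 6 submissions.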
    5. Generic Mark Scheme
      1. The University has a generic mark scheme (drawing on QAA [1] and SEEC [2] guidelines) that characterises the level of complexity, demand and relative autonomy expected of students at each Level of the curriculum (as detailed in the Credit and Qualifications Framework). The generic mark scheme can be found here.
      2. All marking criteria must be consistent with the University's published percentage boundaries (see Chapter 9) for degree classification.
    6. Marking Criteria
      1. To ensure consistency all summative marking processes should be numerical, unless an alternative scheme has been approved by the Dean of the relevant Faculty and has been clearly communicated to students.
      2. External Examiners must have an opportunity to comment on the assessment criteria and model answers for all summative assessments.
    7. Scaling of Marks
      1. The purpose of scaling is to rectify anomalies in mark distributions that arise from unanticipated circumstances and should be used in exceptional circumstances only. Hence, the assessment criteria and practices for any module that has its marks scaled should be reviewed in order to reduce the chance that scaling will be necessary in subsequent years. Guidance for scaling is set out in Annex G. The guidance should be read in the context of this Handbook, and the provisions of this Handbook remain in force.
      2. Where scaling is employed for adjusting agreed assessment marks within a module to correct abnormal group performance, the following rules must be adhered to:
        1. The raw marks, together with the rationale under which they were awarded, must always be made available to the Assessment, Progression and Awarding Committee.
        2. Scaling must not unfairly benefit or disadvantage a subset of students (e.g. failures). Any scaling function applied to a set of marks must therefore be monotonically increasing, i.e. it must not reverse the rank-order of any pair of students, and the function must be defined (its domain) over the full range of raw marks from 0 to 100%. For example, 'Add 3 marks to all students' and 'Multiply all marks by a factor of 0.96' are both valid scaling functions. 'Add 4 marks to all failures and leave the rest unchanged' is not acceptable, because it would cause a student whose raw mark was 39 (a fail) to leapfrog a student who got 41 (a pass).
        3. External Examiners must always be consulted about the process.
        4. All decisions must be clearly recorded in the minutes of the Assessment, Progression and Awarding Committee (APAC), and must include details of the rationale for scaling, any noted objections (and any responses to these objections) and the impact on marks.
        5. The system used to identify modules as potential candidates for scaling must be transparent.
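The rank-order requirement in 5.7.2.2 can be sketched as a check over the full 0-100 domain. The helper name is illustrative; the three example functions are taken directly from the rule above.

```python
def preserves_rank_order(scale):
    """Check that a scaling function never reverses the rank order of
    any pair of raw marks across the full 0-100 domain (rule 5.7.2.2).
    It suffices to check adjacent integer marks: if no adjacent pair
    swaps order, no pair does.
    """
    scaled = [scale(mark) for mark in range(101)]
    return all(scaled[i] <= scaled[i + 1] for i in range(100))

# Valid examples from the Handbook:
add_three = lambda m: m + 3
multiply = lambda m: m * 0.96

# Invalid example: lifting failures only maps 39 -> 43 while 41 stays
# at 41, reversing the rank order of those two students.
lift_failures = lambda m: m + 4 if m < 40 else m
```

`preserves_rank_order` returns True for the first two functions and False for the third.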

    8. Marking the Work of Students with ILPs or Diagnosed with Specific Learning Difficulties (where competence of language is not being assessed)
      1. For guidance on a range of accessibility issues, including dyslexia marking guidelines, refer to the Wellbeing Services website.

[1] Quality Assurance Agency frameworks for higher education qualifications and credit

[2] Southern England Consortium for Credit Accumulation and Transfer
