The Crown Institute of Higher Education (“CIHE”) has designed this policy to ensure that all student assessment tasks are appropriately designed to determine the extent to which students have met the learning and skills outcome requirements within a unit of study and to assist teaching staff to make decisions about the performance of individual students within a unit.

This framing statement from the CIHE Student Assessment Policy suggests that assessment has a monitoring role regarding student performance, especially as this relates to learning outcomes, implying that assessment is, in part, a quality assurance mechanism for student attainment. Assessment, in our policy at CIHE and across the sector, carries a significant burden of quality assurance that positions assessment practices firmly at the core of quality-assured higher education. This has also had the effect of making assessment central, if not singular, in its significance to the student learning experience.

The rationale for assessment presented in the CIHE policy is:

The rationale for assessment is to:

  • promote, enhance, and improve the quality of student learning through feedback that is clear, informative, timely, constructive and relevant to the needs of the student,

  • measure and confirm the standard of student performance and achievement in relation to a unit of study’s defined learning objectives,

  • reward student effort and achievement with an appropriate grade,

  • provide relevant information in order to continuously evaluate and improve the quality of the curriculum and the effectiveness of the teaching and learning process.

A set of key assumptions and priorities is embedded in the CIHE documentation:

  1. Students learn from ‘feedback’. If this is the case, then feedback needs a structure and a process that is shared and clear and that can be monitored.

  2. ‘Standards’ of performance and achievement are measurable and comparable. If this is the case, then we must use mechanisms, and have processes in place, for determining the standards of student assessment and for using these to assess completed work.

  3. Assessment is to be useful and positive (reward-focussed) rather than punitive. We must have systems in place that ensure this.

The last dot point of the rationale suggests that assessment practice should have a quality assurance role and impact. Taken together, these points suggest that assessment structures and practices have a dual focus in educational practice.

  1. Assessment is intended to be student-oriented, that is, it is meant to support and promote student learning, maximising the possibility of students achieving learning outcomes.

  2. Assessment is also intended to have an outward-facing focus, especially in Australia and other countries that use post-Bologna quality assurance frameworks for higher education: assessment structures and practices are expected to demonstrate outwardly that an institution will, through its approach to assessment, assure the quality of student learning.

This is a hefty burden for one part of an educational process. Much research has noted this and, further, has noted that the adoption of criteria-based or criteria-referenced [1] assessment has developed as a response to both an outward facing quality assurance agenda and an inward, student-facing agenda.

Putting aside the critical question about whether these two things are actually compatible, [2] two key questions emerge from the literature that require consideration at CIHE:

1. The limits and extent of useful specification of criteria and standards (C&S) in assessment practice. If assessment is, as it should be here at CIHE, a part of pedagogy, then the process of specifying C&S, and the use of these in assessment design and delivery, should serve broad pedagogical aims, not simply the need to ‘align’ on paper the unit learning outcomes (ULOs) of a unit of study with the assessment tasks. Our use of C&S needs to be ‘real,’ that is, useful and pedagogically effective.

  • What pedagogical purpose do C&S serve in CIHE units of study?

  • How might their use be approached to support student learning?

2. How do we establish an approach to C&S-based assessment here at CIHE that is shared by staff and with students, that is not burdensome but invigorating, and that resists the tendency of these approaches to teach toward assessment, reducing the value and impact of other vital aspects of undergraduate pedagogy?

The intention of this paper is to initiate and sustain a discussion of key aspects of our approach to assessment here at CIHE, creating a ‘community of practice’ for assessment as part of a broader picture of pedagogical practice.


Before a fuller discussion of the approach to C&S-based assessment here at CIHE, staff are asked to remind themselves of the key areas of our Strategic Plan and Teaching and Learning Plan:

From the current CIHE Strategic Plan:

Strategic Goal 4: Build CIHE’s capability and capacity

  • 4.2 Establish high quality teaching and learning
  • 4.2.1 Develop recruitment and professional development policy and procedure
  • 4.2.2 Develop and support a scholarly culture of teaching, learning and research
  • 4.2.3 Establish links with industry for input into culture of professional/industry readiness.

From the current CIHE Teaching and Learning Plan:

T&L focuses:
Quality curriculum and course design

  • Enhance student engagement and learning through curriculum design, assessment strategies, and an orientation to work-readiness – work oriented learning

  • Support student learning competencies through explicit assessment design involving criteria and standards

Quality teaching

  • Support staff to continually develop their educational practice through professional development

  • Promote and support research into the scholarship of teaching and learning

Like many non-university higher education providers (NUHEPs) in Australia, we purchased our policy framework from consultants. This meant that we needed to tailor it to our educational approach and the educational priorities that stem from that approach. To date, in our policies, we have already flagged some ‘operationalising’ relevant to the points above; that is, we have started the process of determining in policy how we will execute our plans and achieve our strategic goals. This happened through the determination that we would use criteria and standards:

The assessment specifications will be provided in the unit learning guide, including the explication of criteria and standards or a rubric. The generic shape of such tasks will be made explicit in these specifications and in classes.

Unit Learning Guides should advise students at the beginning of a unit of study how all assessment results are to be combined to produce an overall mark for the unit. In particular, the Unit Learning Guide should make expressly clear:

  • the weight of each task in contributing to the overall mark,

  • the formulas or rules used to determine the overall mark – criteria and standards and/or rubrics,

  • minimum standards that are applied to specific assessment tasks, and the consequences if such standards are not met (including failure to submit particular tasks) which will, where appropriate, be presented as a marking rubric,

  • rules regarding penalties applied to late submissions, and

  • precise details of what is expected in terms of presentation of work for assessment.
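The first two dot points above are arithmetic, and can be made concrete. The following is a minimal illustrative sketch, not CIHE’s actual marking system: the task names, weights and marks are hypothetical, and serve only to show how stated weights combine task marks into an overall unit mark.

```python
# Illustrative sketch only: the task names, weights and marks below are
# hypothetical, not drawn from any CIHE unit.
tasks = [
    {"name": "Report",       "weight": 0.40, "mark": 72.0},
    {"name": "Presentation", "weight": 0.20, "mark": 65.0},
    {"name": "Final exam",   "weight": 0.40, "mark": 58.0},
]

# Weights must sum to 1.0 so each task contributes exactly its stated share.
assert abs(sum(t["weight"] for t in tasks) - 1.0) < 1e-9

# The overall unit mark is the weighted sum of the task marks.
overall = sum(t["weight"] * t["mark"] for t in tasks)
print(round(overall, 1))  # → 65.0
```

In practice these rules would live in the Unit Learning Guide and the LMS gradebook; the sketch simply shows that, once the weights are stated, the overall mark is fully determined and can be checked by students themselves.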

This paper extends the focus from policy – statements about what should or will be done – to the practice of assessment at CIHE. It provides discussion and guidance about how this is to be done. The next section of the paper offers some useful background to what drives criteria-based assessment, starting with a brief discussion of the Bologna process.

Criteria and standards-based assessment

Quality assurance in higher education appears to have emerged from European movements in the late 20th and early 21st centuries. The Bologna Process, used to ensure compatibility across European institutions (to facilitate transfer of students, credit and credentials), drove the move to quality assurance. As Biggs and Tang (2011) note, the Bologna Process has been ‘essentially a transnational managerial process’, yet ‘it has strong implications for teaching at the institutional and individual classroom levels’ (8).

Biggs and Tang (2011) explain that the Bologna process initiated ‘a paradigm shift towards a more learner-centred approach.’ Making this happen requires engagement with the pedagogy of teaching and learning and an understanding and use of constructive alignment (276). This orientation, Biggs and Tang explain, should drive both ‘policies and procedures that encourage good teaching and assessment across the whole institution’ and a shift of focus ‘from the teacher to the learner, and specifically, to define what learning outcomes students are meant to achieve when teachers address the topics they are meant to teach’ (9).[3]

One of the key mechanisms for this kind of quality assurance in student-centred teaching involves the use of ‘criteria-referenced assessment’. Biggs and Tang note that Australia, like many countries following the developmental process stemming from Bologna, has developed outcome-based teaching and learning (OBTL) (9).

They further explain that, in OBTL, the concern is:

… not so much a matter of what topics to teach, but what outcomes students are supposed to have achieved after having been taught. Defining those intended learning outcomes becomes the important issue, and assessment is criterion-referenced to see how well the outcomes have been attained. (14)

The logic that links this to the micro-management of assessment approaches and practices is that, in OBTL,

OBTL assessment is carried out by seeing how well a student’s performance compares to the criteria in the outcome statement; that is, assessment is criterion-referenced. Students are not assessed according to how their performances compare with each other and then graded according to a predetermined distribution such as the bell curve. (11)

So, in criterion-referenced or criteria- and standards-based assessment, the expectation is that because ‘students are assessed on how well they meet preset criteria,’ they will see that ‘to get a high grade they have to know the intended outcomes and learn how to get there.’ (38)

This model of assessment grades students’ work in order to assess changes that result from learning, determining how well something has been learned. Both for quality assurance and for feedback purposes, what is reported on (assessed/graded) is how the student’s work measures against the criteria.

To make their explanation clear, Biggs and Tang use a distinction between the ‘measurement model’ and the ‘standards model’. Like much scholarly work in this area of teaching and learning, there is a rhetorical strategy operating here that creates a binary, valuing distinction. In reality the situation is more complex and cross-pollinating. However, for our purposes, it is useful to start with Biggs and Tang’s distinction.

Our assessment process and our Quality Assurance Framework here at CIHE are informed by the process they suggest (Biggs and Tang 2011: 232-233).

The CIHE criteria and standards model

Simply put: a standards model of assessment does not compare students with one another but measures student performance in assessment against a set of criteria and standards linked to ULOs. In other words, when assessing students, we are always assessing their demonstration or performance of unit learning outcomes.

In designing assessment tasks, staff should outline:

  1. The specific criteria that will be used to assess a student’s performance in a piece of work

  2. Standards for each criterion

  3. The alignment of the C&S with the ULOs.
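As an illustration only (the grade labels, standards wording and ULO code here are hypothetical, loosely modelled on the BUS101 example below), these three design steps can be captured in a simple data structure, which also allows the C&S-to-ULO alignment to be checked mechanically:

```python
# Illustrative sketch only: the ULO code, criterion and standards below are
# hypothetical, loosely modelled on the BUS101 example in this paper.
ulos = {
    "ULO1": "Demonstrate knowledge in preparing strategic, policy and "
            "risk related information for formal proposals",
}

rubric = [
    {
        # Step 1: the specific criterion used to assess the piece of work
        "criterion": "Synthesis of information into an appropriate format",
        # Step 2: a standard articulated for each grade level
        "standards": {
            "distinction": "Sophisticated, well-structured synthesis",
            "credit": "Good synthesis of information into an appropriate format",
            "pass": "Basic synthesis with some structural weaknesses",
        },
        # Step 3: explicit alignment of the criterion with ULOs
        "aligned_ulos": ["ULO1"],
    },
]

# A mechanical alignment check: every criterion maps to at least one known ULO.
for item in rubric:
    assert item["aligned_ulos"] and all(u in ulos for u in item["aligned_ulos"])
```

The point of the sketch is simply that, when all three steps are recorded explicitly, alignment between C&S and ULOs becomes something that can be verified rather than asserted.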

The constructive alignment toolkit asks you to monitor your unit’s assessment in terms of the sufficiency of its alignment with the ULOs. It also implies that ULOs can be written before assessment is designed. However, these processes are interdependent, and it may be the case that, when developing or redeveloping a unit, a staff member will work between ULOs and assessment criteria and standards in order to achieve a harmonious balance between the two.

Some examples of criteria and standards:

In some units it is possible that ULOs will be directly translatable into criteria.

In BUS101 Business Communication, the ULOs are used as criteria. For example, the ULO ‘Demonstrate knowledge in preparing strategic, policy and risk related information for formal proposals’ is listed as an assessment criterion for a task. The standards of achievement are then designed to articulate with this criterion; the credit-level standard, for example, reads: ‘Good synthesis of information into an appropriate format.’

In other units, a conceptual step will exist between the ULOs and the assessment criteria. This depends on crucial ‘intermediate reasoning’ (see the CA toolkit). For example, in BUS104 Business Statistics, the ULO ‘Demonstrate an understanding of statistical reasoning’ is of a linguistic order that means that, in order to gauge students’ capacity to do this in assessment tasks, criteria one step down are needed. These itemise the component parts of the broader idea of ‘statistical reasoning.’ Examples of these one-step-down criteria would be the following: ‘Appropriate analytical steps and process followed,’ ‘variables defined and discussed,’ and ‘appropriate analytical methods used.’ This articulation of criteria for an assessment that will ascertain students’ capacity to ‘demonstrate’ something at the ULO level achieves the dual aims of:

1. assuring the quality of the assessment, that is, CIHE’s capacity to measure student outcomes, and, most importantly,

2. detailing for and directing students in how to understand their own work toward achieving ULOs.


Rubrics are a very popular way of documenting and informing students of criteria and standards. They also have the benefit of providing a snapshot of feedback set against the C&S and the ULOs. LMSs also have in-built rubric capability and thus support their use. But rubrics are not always the best solution for the statement of criteria and standards, for grading or for feedback. The next paper in this series provides some critical discussion and guidance as to the use of rubrics at CIHE.

Dr David McInnes

August 2017

[1] These terms are interchangeable in the literature. At CIHE we use the term ‘criteria and standards based assessment.’

[2] It could be argued that the priorities of one (the quality assurance of outcomes) have negatively impacted the understanding of teaching and learning and shifted and/or undermined other critical aspects of Higher Education pedagogy.

[3] Staff are referred to the paper Student Centred Teaching at CIHE available on the Quality Teaching and Learning at CIHE page.

Two further passages from Biggs and Tang (2011) usefully summarise what this approach asks of assessment design:

If the intended learning outcomes are written appropriately, the job of the assessment is to enable us to state how well they have been met, the ‘how well’ being expressed not in ‘marks’ but in a hierarchy of levels, such as letter grades from ‘A’ to ‘D’, or as high distinction through credit to conditional pass, or whatever system of grading is used. Deciding at the level of a particular student performance is greatly facilitated by using explicit criteria or rubrics (…). These rubrics may address the task, or the intended learning outcome. (Biggs and Tang 2011: 207)

An appropriate assessment task (…) should tell us how well a given student has achieved the ILO(s) (ULOs) it is meant to address and/or how well the task itself has been performed. Assessment tasks should not sidetrack students into adopting low-level strategies such as memorizing, question spotting and other dodges. The backwash must, in other words, be positive, not negative. It will be positive if alignment is achieved because then … the assessment tasks require students to perform what the ILOs (ULOs) specify as intended for them to learn. (Biggs and Tang 2011: 224)

Staff should use this process to reassure themselves that they will be able to make an informed decision about student attainment of ULOs.