Enhancing University assessment through evaluating student and lecturer understanding of academic standards
Submitting Institution: University of Cumbria
Unit of Assessment: Education
Summary Impact Type: Societal
Research Subject Area(s): Education: Curriculum and Pedagogy, Specialist Studies In Education
Summary of the impact
The research undertaken by Professor Sue Bloxham and colleagues has had a
significant impact on the approach to assessment in Higher Education. It
has influenced practitioners, universities and advisory/regulatory bodies,
providing advice for University tutors on communicating assessment
expectations and contributing to national body and university guidance to
encourage student learning and consistent marking. The research has helped
tutors understand their individual 'standards framework' involving tacit,
norm-referenced knowledge, holistic judgement and local negotiation of
shared standards as well as the importance of dialogic, formative
assessment opportunities for communicating their standards to students.
This has led to improvements in assessment policies, practice and national
guidelines in the UK.
Underpinning research
One reason university students are dissatisfied with assessment is that they find guidance, marking and feedback can be inconsistent or hard to understand. The research was designed to address these problems by examining how tutors' sense of quality, as reflected in their marking standards, is communicated through assessment guidance and feedback. It also investigated academic standards themselves, in response to students' claims that standards were idiosyncratic, inconsistent or hard to understand.
Professor Bloxham's research builds on and links several current aspects
of international inquiry, including work on the individualised, tacit,
interpretive nature of academic standards; professional learning of
academics; the transparency agenda in Higher Education assessment and
supporting and retaining students from under-represented groups. Within
that context, the research has specifically developed insights on
assessment practice, based on the mismatch between how standards are
communicated to students and the reality of those standards in use,
including potential solutions to this problem.
Two practitioner research studies were undertaken, evaluating interventions designed to increase tutor dialogue with students about assessment criteria, grading and feedback. This was investigated by studying university tutors grading student coursework. The project used 'think aloud' protocols, in which tutors verbalised their reasoning as they graded student assignments and wrote feedback, followed by a semi-structured interview, in order to explore
academic judgement. Analysis of results and interviews with students
showed that students consider tutors to be inconsistent in their
assessment standards and therefore they seek dialogue with them in order
to achieve a sense of transparency regarding expectations (Bloxham &
West 2007). The research indicates that student dissatisfaction may lie in
the mismatch between the explicit presentation of assessment expectations
to students as analytical and criterion-referenced and the actuality of
tacit, holistic, and norm-referenced tutor judgement. The research
developed a novel sociocultural perspective on assessment, providing a new
metaphor to capture the nature of the professional learning that tutors
engage in when developing their individual standards frameworks.
The research provided the following insights into grading coursework and
provision of feedback:
- It highlighted and facilitated resolution of the conflicting purposes of assessment in Higher Education, recognising that quality assurance effort is concentrated on assessment systems rather than on the quality and communication of tutor judgements;
- The essential need for tutor-student dialogue to go beyond previous conceptions of 'transparency' and support shared understanding of standards frameworks within and between teaching teams and student cohorts;
- That tutors generally make holistic judgements based on embedded tacit
standards frameworks, with judgements influenced by norm-referencing;
- That teams of tutors develop shared but contested 'standards frameworks' through an 'interplay' between the vertical domain of public
knowledge (of subject discipline and of pedagogy) and the horizontal
domain of tutors' practical wisdom (including social and situated ways
of working);
- That, at the heart of that interplay, artefacts including assessment criteria and concepts such as 'critical analysis' mediate tutor judgements. Within the interplay, power based on academic status influences the development of shared but contested local standards frameworks in programme teams and subject discipline departments.
The research began in 2004 and continues, and has involved several
researchers: Professor Sue Bloxham (Director of Educational Research Jan
2011- present, formerly Head of Educational Development 2003-11); Dr Pete
Boyd (Reader, Education Faculty, 2010-present, formerly Senior Lecturer in
Educational Development, 2004-10), Dr Amanda West (Senior Lecturer in
Sport 2004-2010; Principal Lecturer Sport & Exercise Science,
University of Sunderland, 2010-present), Amanda Chapman (Senior Lecturer
in Economics 2004-present), Liz Campbell (Senior Lecturer in Outdoor
Studies, retired 2010), Mary Ashworth (Research Assistant 2006-13) as well
as Professor Susan Orr (York St John, 2004-12; Assistant Dean Sheffield
Hallam 2012-13; Dean of Learning & Teaching at the University of the
Arts, London, 2013).
References to the research
The quality of the research outputs listed below is reflected in their wide citation in international journals published in the UK and elsewhere. Further indications of quality and impact are the practitioner publications that have been developed from the research and the invitations for consultancy that have resulted from the publications.
• Bloxham, S & West, A (2004) Understanding the rules of the game:
marking peer assessment as a medium for developing students' conceptions
of assessment. Assessment & Evaluation in Higher Education
29 (6) 721-733. DOI: 10.1080/0260293042000227254 (cited by 114)
• Bloxham, S (2009) Marking and moderation in the UK: false assumptions
and wasted resources, Assessment and Evaluation in Higher Education
34 (2) 209-220 (cited by 44)
• Bloxham, S & West A. (2007) Learning to write in higher education:
students' perceptions of an intervention in developing understanding of
assessment criteria. Teaching in Higher Education 12 (1) 77-89
Bloxham, S, Boyd, P & Orr, S (2011) Mark my words: the role of
assessment criteria in UK higher education grading practices. Studies
in Higher Education 36 (6):655-670 (cited by 12)
• Bloxham, S & Boyd, P (2011) Accountability in grading student
work: securing academic standards in a twenty-first century quality
assurance context. British Education Research Journal i-first
DOI: 10.1080/01411926.2011.569007 (cited by 6)
• Boyd, P & Bloxham, S (2013) A Situative Metaphor for Teacher
Learning: the case of university teachers learning to grade student
coursework British Educational Research Journal iFirst 9 April
2013, DOI: 10.1002/berj.3082
Details of the impact
4.1 Stimulating practitioner and stakeholder debate and challenging
conventional wisdom
The research has stimulated practitioner debate on the purpose,
methodology and practice of assessment and its effect on student
satisfaction through a range of processes:
- Contribution of the research findings to the Feedback: Agenda for
Change created by 23 researchers and writers with specialist
expertise, under the auspices of Oxford Brookes University Assessment
Standards Knowledge Exchange Centre for Excellence. The research
influenced the emphasis on feedback as a dialogical process and on
students creating their own feedback as a process of understanding
standards.
- Invitation as an expert contributor to the Guardian Professional Development online debate on authentic assessment (2012), particularly debating the impact on student dissatisfaction of a mismatch between perceptions of guidance and marking.
- Conference keynotes: University of Central Lancashire (2010), National Assessment in Higher Education Conference (2008), and the SOLSTICE Centre for Excellence in Teaching and Learning at Edge Hill University (2008).
- Use of Reference 2 as a stimulus for professional development debate at a King's College London assessment and feedback CPD event on marking, moderating and sharing practice.
- A workshop for the Northern Universities Consortium, particularly
related to the influence of this body of research on external examiners'
understanding and use of academic standards.
- Commissioning by the Higher Education Academy to provide a national
seminar on the topic: Really Useful Information! Aligning feedback with
tutors' professional judgement (June 2011).
The research evidence and theoretical conclusions have underpinned
significant discussion in these events (and in CPD as in 4.2 below) on
rarely discussed topics such as tutor differences in marking, influences
on individual judgements and the problems associated with communicating
tutor standards to students. It has provided evidence of a need for further guidance on academic standards for external examiners, and the Quality Assurance Agency and Higher Education Academy (HEA) have therefore commissioned further research on external examiners' use and understanding of academic standards to guide future training.
4.2 Influence on training and staff guidance
The value to Higher Education of the insights has been demonstrated by
invitations from over 20 UK and international universities and
organisations for Professor Bloxham and the research team to contribute to
staff development. Training events and consultancies have focused on:
- Increasing dialogue between staff regarding assessment standards;
- Designing assessment and assessment guidance to provide dialogical and
other opportunities for students to grasp tutors' diverse standards;
- Assisting students to recognise the tacit, holistic and variable
nature of professional judgement and the limited power of explicit
information to make tacit knowledge explicit.
This practical application of the research has been valuable to the staff involved, receiving highly positive evaluations and reports of influence on practice, for example:
"I used some of your thinking in shaping some of my own work to support
the continuing professional development of colleagues in their own
understanding of assessment." (UK, Russell Group University)
This work to disseminate the application of the research has also
specifically influenced Certificate programmes for new academic staff,
including at Aston University (2010 -2011), Northumbria University (2011),
New University of Ireland (2011-13), University of Central Lancashire
(2011), Writtle College (2011).
The research has informed institutional and national change programmes:
Liverpool Institute of Performing Arts (2009), Manchester Metropolitan
(2009), University of the Highlands and Islands (2010), University of
Sunderland (2009) and the Norwegian Higher Education Learning Outcomes
Project. The research is cited in London Metropolitan University's
Institutional University Assessment Framework in terms of
encouraging staff to develop students' understanding of quality and to
negotiate and share standards.
A Marked Improvement (see 4.3) is being used to lever transformative change in university assessment practices through the Transforming Assessment in Higher Education pilot scheme. To date, 14 UK
universities have used this publication to review their assessment
practices and eight have been selected to be part of the HEA pilot scheme.
This research, amongst others, underpins universities' plans to develop methods that help students grasp tutors' concepts of quality by mirroring the ways in which tutors themselves learn about academic quality and standards: predominantly through inductive methods such as being marked, marking, using exemplars, discussion with colleagues and peer feedback on draft work.
The research is also cited in guidance for staff, for example, University
of Edinburgh on engaging with criteria and standards and Queen Mary
University of London on reliability in marking. Through training and
guidance at a wide range of institutions, the insights of the research
have had a far-reaching influence on professional practice.
4.3 Development of Resources to enhance professional practice
The research has directly fed into the Higher Education Academy
publication designed to improve assessment, A Marked Improvement:
Transforming Assessment in Higher Education (HEA 2012), through
Professor Bloxham's contribution as an author (as evidenced on the HEA's
website). The HEA publication specifically cites the practitioner book Bloxham, S. & Boyd, P. (2007) Developing Assessment in Higher Education: a practical guide (Open University Press) as a "comprehensive" resource for educational developers and practitioners.
This book was written to disseminate the early outcomes of this research
to practitioners. It is recommended in many lecturer training programmes
nationally and internationally (e.g. New University of Ireland, London
Metropolitan University, University of Westminster, Queen's University
Belfast, University of Bradford, Bristol University, University of
Portsmouth, University of Sussex and University of York) or used to
provide advice for tutors (e.g. Griffith University, Edith Cowan
University, Australia, Imperial College, University of Huddersfield,
Montclair State University, USA). It is also used in policy documents, for example the Assessing and Assuring Graduate Learning Outcomes project of the Australian Learning and Teaching Council in 2011-2012 and London Metropolitan University's 'University Assessment Framework'. Sales of 1,700 copies indicate the value placed on the work.
4.4 Use of research findings by professional bodies to define best
practice and formulate policy
Professor Bloxham is a member of the advisory group revising the Quality Code for Higher Education chapter on the assessment of students (2013), ensuring that the research underpins national guidance to staff on good practice in assessment. The research has contributed to a broad perception that this guidance needs to be improved, has influenced the revision of the guidelines and led to the invitation to
join the group. The revised guidance, out for consultation between May and
August 2013, notes the interpretive nature of academic judgement; the
importance of building shared understanding between staff and students of
the basis on which academic judgements are made; and the importance of
building students' assessment literacy given the complex nature of
professional judgement: these are all insights derived directly from this
research.
Whilst activities for students such as peer assessment, feedback on
drafts and engagement with exemplars have a history of use in Higher
Education, the research here is significant in taking the use of these
participative methods into a different realm. It challenges taken-for-granted assumptions about shared standards, which are made explicit in
written documentation such as learning outcomes, assessment criteria and
statements of standards. To date, the impact has been on teachers'
practices, mediated through training and both institutional and national
level guidelines. It is expected that this translation into practice will
create a positive impact on student learning, achievement and
satisfaction: evaluating this represents the next stage of the research.
Sources to corroborate the impact
(cross-referenced to numbers in section 4)
To corroborate 4.1:
To corroborate 4.2:
To corroborate 4.3:
- Academic Lead (Assessment and Feedback), Higher Education Academy to
corroborate impact on A Marked Improvement: Transforming Assessment
in Higher Education.
- Final report of the Assessing and Assuring Graduate Learning Outcomes project, funded by the Australian Learning and Teaching Council in 2011-2012, to corroborate impact on the project: http://www.itl.usyd.edu.au/projects/aaglo/
To corroborate 4.4:
- Assistant Director in the Research, Development and Partnerships
Group, QAA, to corroborate impact on the assessment section of the QAA
Quality Code.