Performance Monitoring in Schools
Submitting Institution: University of Durham
Unit of Assessment: Education
Summary Impact Type: Societal
Research Subject Area(s): Education: Curriculum and Pedagogy; Specialist Studies in Education
Summary of the impact
The Centre for Evaluation and Monitoring (CEM) at Durham University has
pioneered the conceptualisation and development of fair and accurate
school performance monitoring systems, which report the relative progress
of pupils (value-added). Schools, local authorities and jurisdictions use
the data generated by these systems to inform their strategy and practice
with the aim of improving pupils' educational outcomes. Around 6,000
schools a year from the UK and across the world collaborate in this
distributed research network established by CEM. In addition to the direct
benefits to the three quarters of a million pupils assessed each year,
their parents and their schools, the analyses of the unique longitudinal
datasets generated by CEM's monitoring systems have significantly impacted
on educational policy.
Underpinning research
The performance monitoring systems provide schools with reliable and fair
comparative information about their pupils' educational performance, for
self-evaluation and school-based research purposes. Today, CEM leads an
international distributed network of these performance monitoring systems,
which schools, local authorities and educational jurisdictions worldwide
pay to join. It has researched and developed its own
educational and attitudinal assessments for pupils aged between 3 and 18
years. These are administered in schools and then returned to CEM for
marking and analysis, either on paper, or, increasingly, electronically.
GCSE and Post-16 qualifications are also used to analyse the progress of
older students. This analysis provides norms against which schools can
compare the ability and attainment of their pupils, together with measures
of their relative progress, or the 'value-added' they have provided, drawing
on the comparative data contributed by each of the partner schools across
the research network.
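To illustrate what norm-referencing against the network's comparative data involves, the sketch below standardises a single pupil's raw score against scores pooled across partner schools. It is a minimal, hypothetical example: the data, the pooling and the scale (mean 100, standard deviation 15) are assumptions made for illustration, not CEM's actual procedures or reporting scales.

```python
# Minimal, hypothetical sketch of norm-referencing a pupil's score against
# comparative data pooled across a network of partner schools.
# All values and the 100/15 scale are illustrative assumptions.
import statistics

network_scores = [38, 42, 47, 51, 55, 58, 60, 63, 67, 72]  # pooled partner-school scores
pupil_score = 58

mean = statistics.mean(network_scores)
sd = statistics.stdev(network_scores)

# Convert the raw score into a standardised score on the assumed norm scale.
standardised = 100 + 15 * (pupil_score - mean) / sd
print(f"Standardised score relative to network norms: {standardised:.1f}")
```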
Conceptual development of the monitoring systems
The conceptualisation of the assessment systems has developed over many
years but the significant research and thinking which underpinned them was
developed and published by Fitz-Gibbon and others at Durham between 1996
and 2003. The motivation for this research arose from the inadequacy of the
methods previously used to judge pupil and school performance, which failed
to take account of factors such as pupils' ability and the progress they
made during their time at a school. In a 1996 publication (Monitoring
Education: Indicators, Quality and Effectiveness), Fitz-Gibbon
outlined the idea of distributed research: namely, that performance data
are fed back to the unit responsible for their generation, for that unit to
interpret and act on. Within an education system, information about
pupils' progress should be fed back to the pupils, teachers, management,
education authorities and jurisdictions. Key insights from this
publication were further developed by Fitz-Gibbon, Tymms and Coe, with
outputs published that reflected this research and thinking [R1, R2, R3
and R4]. CEM's integrated performance monitoring systems for schools are
based on this conceptual design.
Methodological development
The significant advance in the methodology underpinning performance
monitoring systems resulted from a major contract awarded to Fitz-Gibbon
and Tymms by the Schools, Curriculum and Assessment Authority (SCAA) to
design a national monitoring system for England, the recommendations of
which were published by Fitz-Gibbon in 1997 [R1]. This research was
pivotal in increasing awareness at policy level of the importance of using
value-added information to improve children's outcomes, which, in turn,
has supported the development of CEM's systems. This is evidenced in
discussion and actions within the Department for Education and Ofsted, and
in the expansion of the voluntary adoption of CEM's systems by schools and
local authorities. A distinction between the uses of performance data for
official accountability and for professional development was proposed by
Tymms (1999) [R2], and a range of theoretical considerations in the design
of monitoring systems were presented by Fitz-Gibbon and Tymms (2002) [R4].
This methodological work underpinned the further expansion of CEM's
monitoring systems. Its starting point was that a robust and successful
national value-added system requires data which are readily understandable,
statistically valid, cost-effective, and not burdensome for schools to
collect and interpret. It investigated different methods of calculating
value-added and concluded that simple statistical predictive modelling gave
schools and teachers information as useful as that from more complex
approaches such as multi-level modelling. To obtain the maximum impact from CEM's monitoring
systems, it is important that users can trust and understand how to
interpret the information they receive. Recommendations based on the
research [R1, R4] involve simple mathematical models that are readily
accessible to teachers and others with a wide range of mathematical
expertise, all of whom need to use the data to improve the outcomes of the
pupils in their care. The implementation of these recommendations can be
seen within CEM's existing performance monitoring systems. The distributed
assessment network has generated a large longitudinal dataset which
continues to grow, and is unique in the scope and scale of the data
accumulated. In addition to providing information to inform teachers' and
schools' practice, the data have therefore been analysed at system-level,
resulting in a number of significant publications which have impacted on
educational policy. For example, Coe (2007) reported on grade inflation at
GCSE and A Level, which can be directly linked to impact at policy-level
[R5].
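To make the contrast with multi-level modelling concrete, the sketch below shows the kind of simple predictive value-added calculation the research favours: regress pupils' outcomes on a prior baseline measure, and treat each pupil's residual (actual minus predicted outcome), averaged within a school, as that school's value-added. The data and the single-predictor model are illustrative assumptions, not CEM's actual models or variables.

```python
# Hypothetical sketch of a simple predictive value-added calculation:
# fit one least-squares line from baseline to outcome across the network,
# then report each school's mean residual as its value-added.
import numpy as np

# Invented pupil-level data: prior baseline score, outcome score, school.
baseline = np.array([45.0, 52.0, 61.0, 38.0, 70.0, 55.0, 48.0, 66.0])
outcome = np.array([50.0, 58.0, 63.0, 41.0, 78.0, 54.0, 55.0, 70.0])
school = np.array(["A", "A", "A", "B", "B", "B", "C", "C"])

# Simple linear prediction: outcome is approximately intercept + slope * baseline.
slope, intercept = np.polyfit(baseline, outcome, deg=1)
predicted = intercept + slope * baseline

# A pupil's value-added is the gap between actual and predicted outcome.
pupil_value_added = outcome - predicted

# School-level value-added: the mean residual of that school's pupils.
for s in sorted(set(school)):
    print(f"School {s}: mean value-added = {pupil_value_added[school == s].mean():+.2f}")
```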
CEM was initially established at Newcastle University, but in January
1996 the Directors (Fitz-Gibbon and Tymms) took up posts at Durham
University, moving CEM with them. Fitz-Gibbon directed CEM at Durham from
1996 until her retirement in 2003. Tymms and Coe have remained at Durham
since 1996. The key conceptual, methodological and practical development
of the performance monitoring systems for schools, therefore, took place
at Durham.
References to the research
R1 Fitz-Gibbon, C. T. (1997). The Value Added National Project Final
Report — Feasibility studies for a national system of value-added
indicators. London: SCAA. ISBN 1 85838 249 1. Available at: http://bit.ly/16XcxVM
(accessed 30/7/13).
This publication arose from the contract with SCAA which ran from March
1995 to December 1996, with a value of £200,000.
R2 Tymms, P. (1999) Baseline Assessment and Monitoring in Primary
Schools: Achievements, Attitudes and Value-added Indicators. London:
David Fulton Publishers. In a TES review (15/10/99), Ted Wragg noted that
it included "the clearest explanation I have seen of the strengths and
limitations of value-added approaches."
R3 Tymms, P., & Coe, R. (2003). Celebration of the Success of
Distributed Research with Schools: the CEM Centre, Durham. British
Educational Research Journal, 29(5), 639-653. DOI:
10.1080/0141192032000133686.
R4 Fitz-Gibbon, C. T., & Tymms, P. (2002) Technical and Ethical Issues in
Indicator Systems: Doing things right and doing wrong things. Education
Policy Analysis Archives, 10(6). Available at: http://epaa.asu.edu/ojs/article/view/285
(accessed 30/7/13). This article has been viewed 16,014 times since January
16, 2002 (EPAA, 30/7/13).
R5 Coe, R. (2007) Changes in standards at GCSE and A-Level: Evidence
from ALIS and YELLIS. Report for the Office for National Statistics,
April 2007. Curriculum, Evaluation and Management Centre, Durham
University. Available at: http://bit.ly/17gUBJf
(accessed 30/7/13). This report has been viewed 637 times on the CEM
website since 1/1/08 and 607 times on the ONS website since 29/03/12.
Details of the impact
Reach of CEM's Monitoring Systems in Schools
Since 1997, CEM's performance monitoring systems have made a significant
impact in terms of their reach, evidenced by the rapid expansion of
age-ranges covered, the areas of development assessed, and the numbers of
schools, local authorities and jurisdictions buying into the distributed
research network for comparative information about their pupils' progress.
From 2008-13, a total of 9,609 schools contributed their assessment data
to CEM's systems, with around 6,000 of these adopting the educational
assessment systems consistently year on year, resulting in a total income
from the research network of almost £27 million. Across this period,
4,119,964 pupil assessments were undertaken. Education authorities have
actively endorsed the use of CEM's systems. For example, in the 2012/13
academic year, 18 out of the 32 Scottish authorities recommended that
their schools use the data to monitor performance so that support and
resources could be appropriately targeted [S1]. One of CEM's systems
(InCAS) was used on a statutory basis by all primary schools in Northern
Ireland for five successive academic years (2007/8 - 2011/12) [S2]. This
totalled 1,076 schools and 350,000 pupil assessments during this period.
Within the REF period, CEM's systems have also spread beyond the UK.
Satellite centres have been established in Australia, New Zealand and Hong
Kong. In Australia, the Performance Indicators in Primary Schools (PIPS)
system, which assesses what children know and can do when they start
school and their progress during their first year, was introduced in
December 2000. Over the following decade the uptake by schools and
authorities grew, peaking in 2009 when 814 schools were registered and
more than 27,000 pupils were assessed. During the REF period the system has
been adopted in Australia both by individual schools and by educational
authorities: the Australian Capital Territory Department of Education, the
Tasmanian State Department of Education, the Tasmanian Catholic Education
Office and the Western Australian Catholic Education Office [S3]. The use of PIPS is
statutory for all primary schools in Tasmania [S6]. The Abu Dhabi
Education Council (ADEC) mandated the use of the PIPS assessment in all
166 state primary schools with the first full cycle of assessments taking
place in November and December 2011. This involved approximately 200
teachers being trained to administer it to all children, starting with
Kindergarten classes, and to interpret the feedback to inform their
practice [S4, S5]. Its use was rolled out to other cohorts (KG2, Grade 1
and Grade 2) and a total of 51,632 pupil assessments were administered
between the Autumn of 2011 and April 2013.
Impact of CEM's Monitoring Systems on the Education System
Pupils' assessment results and their analysis have been used at different
levels in educational systems to impact on schools' and systems' practice
and policy. Case studies illustrate how the data are used to inform schools'
practice and strategy [S6]. In one of these case studies, a senior manager
explains that the data enabled staff to "act much more quickly to help
pupils with learning difficulties that had not been picked up by their
junior schools, before these children start to fall behind their peers
and suffer the frustration and unhappiness that comes with this." He
noted that; "parents have generally welcomed an objective assessment of
their children's potential".
In February 2013, at an event organised jointly by the Highland Council and
CEM, Fife Local Authority gave a presentation to 85 delegates representing
19 Scottish authorities outlining how the Authority uses the data [S7]. The
impact in this instance lies in the benefits of using CEM's data across
Fife, as reported in the presentation, and in the further sharing of
effective ways of using CEM data to track the performance of schools and of
groups such as children from economically deprived backgrounds, informing
local policy decisions about resourcing and intervention.
At national level, for five successive years (2007/8 - 2011/12), the
Department for Education in Northern Ireland advised schools to use CEM's
research and assessment data to inform their practice: identifying
children's strengths and weaknesses so that education could be tailored to
their needs, and monitoring their progress over time. Schools were also
required to report pupils' scores to parents
[S2]. In Abu Dhabi, the Director of Education at Bidayaat reported that
PIPS data "provided impetus for key pedagogical strategies" [S4].
Impact on National Initiatives
In addition to the direct impact of CEM's monitoring systems on its
research partners in schools and local authorities, described above, the
research conducted by Fitz-Gibbon and others described in Section 2 has
also influenced the development of policy in England. The Statistician
Team Leader at the Department for Education has described how Fitz-Gibbon's
1997 report persuaded policy-makers within the department that using
comparative pupil-level data, or 'value-added', was understandable and
achievable in a simple and straightforward way. This led to the development
of the RAISEonline system, which was
launched in 2006 and continues to be used in all English state schools (up
to 2013) to analyse the results of the statutory assessments [S8]. The
Department for Education acknowledges a clear link between Fitz-Gibbon's
(1997) Value Added National Project Final Report [R1] and the contribution
of CEM's conceptualisation to RAISEonline, though other factors also
contributed to the development of these innovations.
CEM's research on assessment, drawn from its testing and value-added
approach, has also had an important impact on national policy on Key Stage
tests in England. Tymms' evidence about standards and the use of tests was
influential in the Children, Schools and Families Committee's Third Report
on the reform of National Testing, published in May 2008. It concluded
that the "national testing system should be reformed ... to remove from
schools the imperative to pursue test results at all costs" (p. 3). This
report was a significant contextual factor in the major changes to
national testing which took place in Autumn 2008, ending testing at Key
Stage 3 from 2009. Although the trigger for the change was a catalogue of
problems with external marking, the background conditions had already
indicated that change was necessary, and CEM's research was an important
part of this national debate. The work of Tymms and colleagues is also
referred to in Lord Bew's (2011) 'Independent Review of Key Stage 2
testing, assessment and accountability', and CEM's research on value-added,
standards over time and computer-adaptive testing can be identified in its
recommendations, such as the emphasis on tracking progress on the basis of
objective and accurate assessments.
A further example of impact on policy is the research by Coe [R5], which
has reported changes in A Level standards in England over time and has
been referenced in national debate. This research uses data from CEM's
ALIS (A Level Information System) and YELLIS (Year 11 Information
System), whose use increased rapidly and significantly following the
reconceptualisation of performance monitoring and value-added that
resulted from Fitz-Gibbon's report [R1]. On 13/10/11, the Secretary of
State for Education gave a keynote address to Ofqual's Standards Summit
which referred to Coe's research on grade inflation [S9]. The report in
question investigated trends in A Level performance from the mid-1990s up
to 2006, and its findings then led to impact within the REF period. In
2012, the Department for Education launched a consultation on reforming
Key Stage 4 qualifications which
referred to the research published by Coe [S10]. The paper [R5] may not
have been the sole reason for the consultation, but it was clearly an
influential part of the discussion from which the consultation was
proposed, as evidenced by the references made to it by the minister, the
parliamentary select committee and Ofqual. These impacts on the development of
national testing and examinations can all be traced back to the initial
research and conceptual development of CEM's performance monitoring
systems between 1996 and 2003.
Sources to corroborate the impact
S1 The Scottish education authorities whose schools are registered to use
CEM in 2013 are: Dundee City Council, Aberdeen City Council, Aberdeenshire
Council, Moray Council, Orkney Islands Council, Fife Council, Midlothian
Council, Shetland Islands Council, East Lothian Council, Dumfries and
Galloway Education and Community Services, Falkirk Education Services,
Inverclyde Council, South Ayrshire Council, Angus Council, Renfrewshire
Council, Clackmannanshire, Highland Council, Stirling. In 2013, this
involved a total of 1,281 schools in which 124,697 pupils were assessed:
www.cem.org.
S2 InCAS (Interactive Computerised Assessment System) is one of CEM's
monitoring systems used by primary schools. For more information see
www.cem.org. Instructions for its administration and use, and reporting to
parents were issued to schools by the Northern Ireland Department for
Education:
http://www.deni.gov.uk/microsoft_word_-_department_of_education_circular_2008_22-2.pdf
http://www.deni.gov.uk/incas_circular_to_schools_-_september_2009-2.pdf
http://www.deni.gov.uk/de_circular_20_incas_arrangements_for_autumn_2010_english.pdf
http://www.deni.gov.uk/english_circular_201115_-_incas.pdf
S3 Letter from Dean of the Faculty of Education in a leading Australian
University.
S4 Email from Director of Education, Bidayaat, Abu Dhabi: www.bidayaat.com.
S5 U.A.E. Newspaper Article: http://www.thenational.ae/news/uae-news/education/maths-and-literacy-to-be-tracked.
S6 Illustrative case studies from a selection of schools using CEM's
performance monitoring systems are presented at: http://www.cemcentre.org/case-studies-see-how-our-systems-are-helping-others.
S7 Presentation by an education statistician from Fife Local Authority.
S8 Email from Department for Education representative summarising impact
on DfE.
S9 Secretary of State for Education's speech to the Ofqual Standards
Summit, 13 October 2011:
http://www.education.gov.uk/inthenews/speeches/a00199197/michael-gove-to-ofqual-standards-summit#startcontent
S10 Impact on testing and exam reform: Bew, P. A. E. (2011) Independent
Review of Key Stage 2 testing, assessment and accountability: Final
Report. London: HMSO. Available at: http://bit.ly/15E3Q7K
(accessed 30/7/13). Written evidence submitted by the DfE: The Evidence
Base for Proposed Reform of the Examination System at Key Stage 4 (Nov.
2012), para 3.13, first bullet point. Available at: http://bit.ly/18rLQg0
(accessed 29/7/13).