Researchers in Cambridge have developed a data standard for storing and exchanging data between different programs in the field of macromolecular NMR spectroscopy. The standard has served as the foundation for an open-source software suite for NMR data analysis, leading to improved research tools that have been widely adopted by industrial and academic research groups alike, who benefit from faster drug development and lower development costs. The CCPN data standard is an integral part of major European collaborative efforts for NMR software integration, and is used by the major public databases for protein structures and NMR data, namely the Protein Data Bank in Europe (PDBe) and the BioMagResBank.
Ocean circulation accounts for much of the energy that drives weather and climate systems; errors in the representation of the ocean circulation in computational models affect the validity of forecasts of ocean and atmosphere dynamics on daily, seasonal and decadal time scales. Research undertaken by the University of Reading investigated systematic model errors arising from the data assimilation schemes embedded in the key processes used to predict ocean circulation. The researchers developed a new bias-correction technique for ocean data assimilation that alleviates these errors, leading to significant improvements in the accuracy of forecasts of ocean dynamics. The technique has been implemented by the Met Office and by the European Centre for Medium-Range Weather Forecasts (ECMWF) in their forecasting systems, resulting in major improvements to weather and climate prediction from oceanic and atmospheric models. The technique also makes better use of expensively acquired satellite and in-situ data, and improves the ocean and atmosphere forecasts used by shipping and civil aviation, energy providers, insurance companies, the agriculture and fishing communities, food suppliers and the general public. The correction procedure is also important for anticipating and mitigating hazardous weather conditions and the effects of long-term climate change.
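The general idea behind bias-aware assimilation can be sketched in a few lines: the analysis is computed from a debiased forecast, and the same innovations drive a running estimate of the systematic error. The toy scalar illustration below is in the spirit of two-stage bias-correction schemes (such as Dee and Da Silva's); it is not the specific algorithm developed at Reading, and all quantities are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

truth = 20.0               # "true" ocean state (e.g. temperature, degC)
model_bias = 1.5           # unknown systematic model error to be estimated
obs_err, bkg_err = 0.3, 0.5  # observation / background error std deviations

b = 0.0                    # running bias estimate
gamma = 0.3                # bias-update gain (tuning parameter)
k = bkg_err**2 / (bkg_err**2 + obs_err**2)   # scalar Kalman gain

for step in range(40):
    x_f = truth + model_bias + rng.normal(0.0, bkg_err)  # biased forecast
    y = truth + rng.normal(0.0, obs_err)                 # observation

    d = y - (x_f - b)        # innovation against the debiased forecast
    x_a = (x_f - b) + k * d  # analysis update from the debiased forecast
    b -= gamma * d           # innovations also refine the bias estimate

print(f"estimated bias: {b:+.2f}  (true model bias: {model_bias:+.2f})")
```

Without the bias term, the analysis is pulled towards the systematically wrong forecast every cycle; estimating and removing the bias is what stops the error being re-injected.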
The School of Mathematics at Cardiff University has developed important statistical and mathematical models for forecasting consumer buying behaviour. Enhancements to classical models, informed by extensive study of their statistical properties, have allowed us to exploit their potential to benefit the sales and marketing strategies of manufacturing and retail organisations. The research has been endorsed and applied by Nielsen, the leading global market research organisation, which provides services to clients in 100 countries. Nielsen has used the models to increase profits and retain its leading global corporate position. This has led to a US$30 million investment and has benefited major consumer goods manufacturers such as Pepsi, Kraft, Unilever, Nestlé and Procter & Gamble. The impact claimed is therefore financial. Impact is also measurable in terms of public engagement, since the work has been disseminated at a wide range of national and international corporate events and conferences. Beneficiaries include Tesco, Sainsbury's, GlaxoSmithKline and Mindshare WW.
Targeted Projection Pursuit (TPP), developed at Northumbria University, is a novel method for interactive exploration of high-dimensional data sets without loss of information. The TPP method outperforms current dimension-reduction methods because it finds the projections that best approximate a target view informed by prior knowledge about the data. "Valley Care" provides a Telecare service to over 5,000 customers as part of Northumbria Healthcare NHS Foundation Trust, delivering a core service for vulnerable and elderly people (an estimated 129,000 calls per annum) that allows them to live independently and remain in their own homes for longer. Through the NHS Foundation Trust, the service also informs the wider UK ageing community.
Applying our research enabled the managers of Valley Care to establish the volume, type and frequency of calls, to identify users at high risk, and to advise the equipment manufacturers on updating the database software. Valley Care managers and staff could then analyse the information quickly and plan the work of call operators and social care workers efficiently. The study also yielded knowledge about usage patterns of the technology and, importantly, identified clients at high risk of falls. This is the first time that mathematical and statistical analysis of data sets of this type has been carried out in the UK or Europe.
As a result of applying the TPP method to its Call Centre multivariate data, Valley Care has been able to transform the quality and efficiency of its service, while operating within the same budget.
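At its core, TPP solves a fitting problem: find a linear projection whose image best approximates a target view that the user has specified or edited. The sketch below is a schematic reconstruction of that step as a least-squares fit followed by orthonormalisation; it is not the published TPP algorithm, and all data and names are illustrative.

```python
import numpy as np

def targeted_projection(X, T):
    """X: (n, d) data matrix; T: (n, 2) user-supplied target view.
    Returns a (d, 2) projection whose image best matches T."""
    P, *_ = np.linalg.lstsq(X, T, rcond=None)  # least-squares fit X @ P ~ T
    Q, _ = np.linalg.qr(P)                     # orthonormalise the columns
    return Q

# Toy usage: steer a view so that two known groups separate along x.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
               rng.normal(0.5, 1.0, (50, 5))])
T = np.vstack([np.tile([-3.0, 0.0], (50, 1)),
               np.tile([+3.0, 0.0], (50, 1))])
view = X @ targeted_projection(X, T)           # 2-D view to display
```

In interactive use, the displayed view would be re-fitted each time the user drags points towards a desired arrangement, which is what distinguishes the approach from one-shot methods such as PCA.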
This impact case study is based on a Knowledge Transfer Partnership (KTP) between the School of Mathematics, Statistics and Actuarial Science at the University of Kent and KROHNE Ltd, a world-leading manufacturer of industrial measuring instruments. These precision instruments (typically flow meters and density meters) must be calibrated accurately before use, an expensive and time-consuming process.
The purpose of the KTP was to use Bayesian methodology developed by Kent statisticians to establish a novel calibration procedure that improves on the existing one by incorporating historical records from the calibration of previous instruments of the same type. This substantially reduces the number of test runs needed to calibrate a new instrument and increases capacity by up to 50%.
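One way to picture the approach: treat calibration as Bayesian linear regression in which historical calibrations of the same instrument type supply an informative prior, so only a few test runs on the new instrument are needed to pin down its coefficients. The sketch below uses the standard conjugate-Gaussian update; the model, numbers and names are hypothetical and do not describe KROHNE's actual procedure.

```python
import numpy as np

def calibration_posterior(X, y, m0, S0, noise_var):
    """Gaussian posterior over calibration coefficients.
    X: (n, p) design matrix of test runs; y: (n,) instrument readings;
    m0, S0: prior mean and covariance from historical calibrations."""
    S0_inv = np.linalg.inv(S0)
    Sn = np.linalg.inv(S0_inv + X.T @ X / noise_var)   # posterior covariance
    mn = Sn @ (S0_inv @ m0 + X.T @ y / noise_var)      # posterior mean
    return mn, Sn

# Informative prior on (offset, gain), e.g. from many past instruments
# of the same type -- historically these coefficients are very stable.
m0 = np.array([0.02, 0.998])
S0 = np.diag([0.01**2, 0.005**2])

# With such a prior, a handful of test runs on the new instrument suffice.
flows = np.array([10.0, 50.0, 100.0])            # reference flow rates
X = np.column_stack([np.ones_like(flows), flows])
y = np.array([10.05, 49.95, 99.90])              # new instrument's readings
mn, Sn = calibration_posterior(X, y, m0, S0, noise_var=0.05**2)
```

A vague (high-variance) prior would reproduce the classical fit and demand many test runs; the tighter the historical prior, the fewer runs are needed, which is the source of the capacity gain.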
The impact of the KTP, which was graded 'Outstanding', has been to change the knowledge and capability of the Company so that it can improve the performance of its manufacturing process by implementing this novel calibration method. This has been achieved by adapting the underpinning Kent research to the specific context of the calibration problem, by running many calibrations to demonstrate the effectiveness of the method in practice, and by supporting the implementation of the new calibration method within the Company's core software.
Moreover, the project has changed the Company's thinking on fundamental science, particularly industrial mathematics. The value of historical data and the usefulness of Bayesian methods are now widely appreciated, and training for staff in Bayesian statistics is being introduced. Thus the project has changed not only the Company's protocols but also its practice.
The research improves digital data archives by embedding computation into the storage controllers that maintain the integrity of the data within the archive. This opens up a number of possibilities and has impact on three different classes of beneficiary.
This impact case study is based on economic impact through improved forecasting technology. It shows how research in pattern recognition by Professor Henry Wu at the School of Electrical Engineering and Computer Science led to significantly improved accuracy in daily national gas demand forecasting by National Grid plc. The underpinning research on predicting non-linear time series began around 2002, and the resulting prediction methodology has been applied on a daily basis by National Grid plc since December 2011. The main beneficiaries of the improved accuracy (by 0.5 to 1 million cubic metres per day) are UK gas shippers, who by conservative estimates save approximately £3.5M per year. Savings made by gas shippers benefit the whole economy, since they reduce the energy bills of end users.
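The deployed methodology itself is not described here, but non-linear time-series prediction of daily demand can be illustrated generically: embed the series in a space of lagged values and predict from similar past patterns. The nearest-neighbour sketch below is purely illustrative of this class of problem and bears no relation to the method actually used by National Grid; all data are synthetic.

```python
import numpy as np

def knn_forecast(series, lags=7, k=5):
    """One-step-ahead forecast: average the next-day values that followed
    the k historical lag-vectors closest to the most recent one."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]                              # value following each window
    dist = np.linalg.norm(X - series[-lags:], axis=1)
    return y[np.argsort(dist)[:k]].mean()

# Toy usage on two years of synthetic "daily gas demand" (mcm/day)
# with an annual cycle plus noise.
rng = np.random.default_rng(3)
t = np.arange(730)
demand = 60 + 25 * np.cos(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
print(f"next-day demand forecast: {knn_forecast(demand):.1f} mcm")
```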
KCL research played an essential role in the development of the data provenance standards published by the World Wide Web Consortium (W3C), the standards body for web technologies responsible for HTML, XML and related specifications. The provenance of data concerns records of the processes by which data was produced, by whom, from what other data, and similar metadata. The standards directly impact practitioners and professional services through their adoption by commercial, governmental and other bodies, such as Oracle, IBM and NASA, in handling computational records of the provenance of data.
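The W3C PROV standards model provenance around three core concepts: entities (data), activities (processes) and agents (people or systems), linked by relations such as used and wasGeneratedBy. The toy sketch below mirrors that vocabulary in plain Python dataclasses; it is not the official W3C serialisation nor any real library's API, and all identifiers are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:            # a piece of data, e.g. a dataset or a chart
    id: str

@dataclass
class Agent:             # a person or system responsible for an activity
    id: str

@dataclass
class Activity:          # a process that uses and generates entities
    id: str
    used: list = field(default_factory=list)
    was_associated_with: list = field(default_factory=list)

# Record that an analyst derived a chart from a source dataset:
dataset = Entity("ex:dataset1")
analyst = Agent("ex:alice")
plotting = Activity("ex:plotting", used=[dataset],
                    was_associated_with=[analyst])
chart = Entity("ex:chart1")                   # the generated data
was_generated_by = {chart.id: plotting.id}    # PROV's wasGeneratedBy relation
```

Chaining such records lets a consumer trace any data item back through the activities and agents that produced it, which is precisely what adopters use the standard for.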
Research carried out at Birkbeck's Department of Computer Science and Information Systems since 2000 has produced techniques for the management and integration of complex, heterogeneous life-sciences data at a scale not previously possible with large life-sciences data repositories. The research has involved members of the department together with researchers from the European Bioinformatics Institute (EBI) and University College London (UCL), and has led to the creation of several resources providing information about genes and proteins. These resources include the BioMap data warehouse, which integrated the CATH database (a classification of proteins into families according to their structure), the Gene3D database (information about protein sequences) and other related information on protein families, structures and the functions of proteins such as enzymes. These resources are heavily used by companies worldwide to explore relationships between protein structure and protein function and to aid drug design.
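The integration idea can be pictured simply: records about the same protein arrive from heterogeneous sources, such as a structural classification and a sequence annotation, and are merged on a shared identifier into one queryable view. The sketch below is a hypothetical miniature of a warehouse of this kind; the field names and values are invented and do not reflect BioMap's actual schema.

```python
# Two heterogeneous sources keyed by a shared protein accession.
structure_records = {   # CATH-style: structural classification
    "P69905": {"structure_family": "globin-like"},
}
sequence_records = {    # Gene3D-style: sequence/domain annotation
    "P69905": {"length": 142, "domains": ["Globin"]},
}

def integrate(*sources):
    """Merge per-protein records from several sources into one view."""
    merged = {}
    for source in sources:
        for protein_id, fields in source.items():
            merged.setdefault(protein_id, {}).update(fields)
    return merged

warehouse = integrate(structure_records, sequence_records)
print(warehouse["P69905"])   # unified structure + sequence record
```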
Open Data has lowered barriers to data access, increased government transparency and delivered significant economic, social and environmental benefits. Southampton research and leadership produced the UK Public Data Principles, which were enshrined in the UK Government's Open Data White Paper, and led to data.gov.uk, which provides access to 10,000 government datasets. The open datasets are providing a means for strong citizen engagement and are delivering economic benefit through the £10 million Open Data Institute. These developments have placed the UK at the forefront of the global data revolution: the UK experience has informed open data initiatives in the USA, the EU and the G8.