Impact: The underpinning research resulted in an innovative Worst-Case Execution Time (WCET) analysis technology, now called RapiTime, which was transferred to industry via a spin-out company, Rapita Systems Ltd. The technology enables companies in the aerospace and automotive industries to reduce the time and cost required to obtain confidence in the timing correctness of the systems they develop. The RapiTime technology has global reach, having been deployed on major aerospace and automotive projects in the UK, Europe, Brazil, India, China, and the USA. Key customers include leading aerospace companies such as: [text removed for publication]; as well as major automotive suppliers: [text removed for publication]. Since 2008, Rapita has won export orders to China worth over [text removed for publication]. From 2008/9 to 2011/12, the company's annual revenues more than doubled, from [text removed for publication] to over [text removed for publication]. As of August 2013, Rapita employs [text removed for publication] people at its offices in York and Cambridge.
High Performance Computing (HPC) is a key element in our research. The Particle Physics Group has accumulated expertise in the development and optimisation of coding paradigms for specific supercomputer hardware. Our codes are deployed on supercomputers around the world, producing high-profile research results. We have developed a simulation environment, BSMBench, that is, on the one hand, flexible enough to run on major supercomputer platforms and, on the other, pushes supercomputers to their limits. These codes are used by IBM and Fujitsu Siemens for benchmarking their large installations and mainframes. The third-party company BSMBench Ltd has commercialised the use of our codes for analysing and optimising the HPC systems of small and medium-sized enterprises.
A two-dimensional flood inundation model called LISFLOOD-FP, which was created by a team led by Professor Paul Bates at the University of Bristol, has served as a blueprint for the flood risk management industry in the UK and many other countries. The documentation and published research for the original model, developed in 1999, and the subsequent improvements made in over a decade of research, have been integrated into clones of LISFLOOD-FP that have been produced by numerous risk management consultancies. This has not only saved commercial code developers' time but also improved the predictive capability of models used in a multimillion pound global industry that affects tens of millions of people annually. Between 2008 and 2013, clones of LISFLOOD-FP have been used to: i) develop national flood risk products for countries around the world; ii) facilitate the pricing of flood re-insurance contracts in a number of territories worldwide; and iii) undertake numerous individual flood inundation mapping studies in the UK and overseas. In the UK alone, risk assessments from LISFLOOD-FP clones are used in the Environment Agency's Flood Map (accessed on average 300,000 times a month by 50,000 unique browsers), in every property legal search, in every planning application assessment and in the pricing of the majority of flood re-insurance contracts. This has led to more informed and, hence, better flood risk management. A shareware version of the code has been available on the University of Bristol website since December 2010. As of September 2013, the shareware had received over 312 unique downloads from 54 different countries.
In the 1990s Dr D Moore, who has extensive experience in fluid dynamics, worked with collaborators at the US Naval Research Laboratory (NRL) on parallelising an ocean modelling code. This resulted in the Navy Layered Ocean Model (NLOM) and later the Hybrid Coordinate Ocean Model (HYCOM). NLOM and HYCOM, which are distributed through the NRL and the HYCOM consortium, are open access ocean modelling codes used to forecast ocean currents. They have proved particularly impactful for forecasting ocean oil spills and managing the corresponding environmental risk. NLOM and/or HYCOM were used extensively during the Deepwater Horizon oil spill in 2010 and the Montara Well Release oil spill in Australia in 2009, providing valuable forecasts to assist with the response to the disasters.
The impact of the research is evident in two areas of software engineering practice connected through software fault-proneness: (i) improper use of `design patterns', recognised reusable templates for how to design code; and (ii) the real benefits of `refactoring', a technique whereby code is intentionally changed by a developer to improve its efficiency and/or make it easier to read. Application of the research findings has led to significant impacts on software development at BancTec Ltd., a medium-sized, international IT company which, as a result, has changed its practices, challenging established approaches in industrial IT. The research has had, and continues to have, direct and sustained impact at BancTec through changed commercial practice and raised awareness of internal standards; this has led to increased training of developers and the rollout of new internal software development standards in the UK and India, and, as a template, worldwide for 2,000 employees in 50 countries.
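As a minimal illustration of what is meant by refactoring, the sketch below (written in Python; the names and logic are invented for illustration and are not drawn from the BancTec codebase or the underpinning study) shows a small change that preserves behaviour while making the code easier to read and test:

```python
# Hypothetical illustration of a small refactoring: behaviour is unchanged,
# but the rule the code applies becomes explicit and easier to read.

# Before: one function mixes parsing, filtering and totalling.
def process(lines):
    t = 0
    for l in lines:
        p = l.split(",")
        if p[2] == "OK":
            t += float(p[1])
    return t

# After: each step has a name, so the intent is clear at a glance.
def parse_record(line):
    ref, amount, status = line.split(",")
    return ref, float(amount), status

def total_approved(lines):
    return sum(amount for _, amount, status in map(parse_record, lines)
               if status == "OK")

if __name__ == "__main__":
    sample = ["A1,10.0,OK", "A2,5.5,REJECTED", "A3,4.5,OK"]
    # The refactored version computes exactly the same result.
    assert process(sample) == total_approved(sample) == 14.5
```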
This case study describes the development, application and commercialisation of an open source tool, BSMBench, that enables supercomputer vendors and computing centres to benchmark their systems' performance. It informs the design and testing of new computing architectures far more comprehensively than other benchmarking tools on the market, such as Linpack.
The significance of our code is that, unlike other benchmarking tools, it interpolates between a communication-dominated and a computation-dominated regime simply by varying the (physics) parameters in the code, thus providing an ideal benchmark suite for testing the response of modern multi-CPU systems along this axis. The impact of this work has great reach: a start-up company, BSMBench Ltd, has been founded to develop and commercialise the software; adopters have included IBM, one of the giants of the supercomputer world, where the tool uncovered errors in the company's compilers; it has been deployed by Fujitsu to validate its systems, by HPC Wales, a multi-site, commercially focussed national computer centre, and by Transtec, an HPC company employing over 150 staff; and tutorial articles about BSMBench have appeared in magazines such as Linux Format.
This software tool grew out of our research into "Beyond the Standard Model" (BSM) physics, which aims to understand the Higgs mechanism in particle physics at a fundamental level. This involved simulating quantum field theories using bespoke code on some of the fastest supercomputers on the planet.
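The idea of moving between communication- and computation-dominated regimes can be sketched roughly as follows. The snippet below is a hypothetical back-of-the-envelope model, not BSMBench code; the function name, cost formulas and scaling exponents are assumptions chosen only to show how physics parameters (here, the number of colours of the gauge group and the local lattice size) shift the balance between arithmetic work and halo communication in a distributed lattice simulation:

```python
# Hypothetical scaling sketch (not BSMBench code): in a 4D lattice simulation
# distributed over many processes, per-site arithmetic grows quickly with the
# number of colours Nc, while the halo data exchanged with neighbouring
# processes scales with the surface of each local sub-lattice.

def compute_to_comm_ratio(local_size, n_colours):
    volume = local_size ** 4              # sites owned by one process
    surface = 8 * local_size ** 3         # boundary sites (2 faces x 4 dimensions)
    work = volume * n_colours ** 3        # assumed per-update arithmetic scaling
    traffic = surface * n_colours ** 2    # assumed per-update halo-data scaling
    return work / traffic

if __name__ == "__main__":
    for n_colours in (2, 3, 4, 6):
        for local_size in (4, 8, 16):
            ratio = compute_to_comm_ratio(local_size, n_colours)
            print(f"Nc={n_colours}, local lattice {local_size}^4: "
                  f"compute/comm ~ {ratio:.1f}")
```

Under these assumptions, larger local volumes and more internal degrees of freedom push the benchmark towards the computation-dominated end of the axis, while small local volumes make communication dominate.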
A computer program, CASTEP, has been developed to use quantum mechanics to calculate the structure and properties of materials. The code is distributed commercially via Accelrys Inc., with sales (for example, in the automotive, electronics and pharmaceutical industries) in excess of £1m per year since 1998, accelerating recently to over £2.5m per year, and total sales exceeding $30m by late 2012. Commercial applications include designing new battery materials and electrodes to improve the performance of electric cars (Toyota), integrating organic electronic materials for light-weight flexible displays (Sony), and developing new catalysts for hydrogen-powered fuel cells (Johnson Matthey).
Graph-theoretic and mathematically rigorous algorithmic methods developed at the University of Hertfordshire have improved the applicability of compiler technology and parallel processing. A compiler developed in the course of a ten-year research programme at the university has been successfully applied to a number of commercial problems by re-purposing the research tool. NAG Ltd has adapted the tool into a commercial product [text removed for publication]. Numerous applications of the mathematical methods (such as type-flow graphs used conjointly for correctness and optimisation) have been deployed by industry (including SAP, SCCH and the German Waterways Board) working closely with the university.
Loudness is the subjective magnitude of a sound as perceived by human listeners and it plays an important role in many human activities. It is determined jointly by the physical characteristics of a sound and by characteristics of the human auditory system. A model for predicting the loudness of sounds from their physical spectra was developed in the laboratory of Professor Brian Moore with support from an MRC programme grant.
The model formed the basis for an American National Standard and is currently being prepared for adoption as a standard by the International Organization for Standardization (ISO). In addition, the model has been widely used in industry worldwide for predicting the loudness of sounds, for example: noise from heating, ventilation and air-conditioning systems; noise inside and outside cars and from aircraft; and noise from domestic appliances and machinery.
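As a simple illustration of the sone scale on which such models express predicted loudness, the snippet below shows the standard relation between loudness level (in phons) and loudness (in sones) for levels above about 40 phons; it is not the Cambridge spectral loudness model itself, and the function name is ours:

```python
# Standard phon-to-sone relation: above roughly 40 phons, perceived loudness
# doubles for every 10-phon increase in loudness level. Loudness models report
# their predictions on this sone scale.

def phons_to_sones(loudness_level_phons):
    return 2.0 ** ((loudness_level_phons - 40.0) / 10.0)

if __name__ == "__main__":
    for phons in (40, 50, 60, 70, 80):
        print(f"{phons} phons ~ {phons_to_sones(phons):.0f} sones")
```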
The High Performance Computing (HPC) application code HELIUM, developed at Queen's University Belfast to assist the development of attosecond technology, has had an impact on the provision of public services by guiding the procurement and acceptance testing of the high-performance computer facility HECToR. This facility was funded by the UK Government with a total expenditure of £113M between 2007 and 2013. The HELIUM code was used for procurement and acceptance testing for the initial HECToR service in 2007 (Phase 1, 11k cores) and its upgrades in 2009 (Phase 2a, 22k cores), 2010 (Phase 2b, 44k cores) and 2011 (Phase 3, 90k cores). The code proved particularly valuable in demonstrating that the Phase 2b and Phase 3 systems performed correctly at pre-agreed performance levels, since it can be adapted to run for several hours across more than 80,000 cores.