Benchmark Testing in High Performance Computing
Submitting Institution: Plymouth University
Unit of Assessment: Mathematical Sciences
Summary Impact Type: Technological
Research Subject Area(s):
Physical Sciences: Atomic, Molecular, Nuclear, Particle and Plasma Physics; Other Physical Sciences
Information and Computing Sciences: Computation Theory and Mathematics
Summary of the impact
High Performance Computing (HPC) is a key element in our research. The Particle Physics Group
has accumulated expertise in the development and optimisation of coding paradigms for specific
supercomputer hardware. Our codes are deployed on supercomputers around the world,
producing high-profile research results. We have developed a simulation environment, BSMBench, which is flexible enough to run on major supercomputer platforms and, at the same time, pushes supercomputers to their limits. These codes are used by IBM and Fujitsu Siemens for benchmarking their large installations and mainframes. The third-party company BSMBench Ltd has commercialised the use of our codes for analysing and optimising the HPC systems of small and medium-sized enterprises.
Underpinning research
Explaining the origin of electroweak symmetry breaking is a theoretical problem of utmost importance and a fundamental step in understanding the new data from the Large Hadron Collider at CERN, which is seeking to verify the existence of the Higgs particle. `Technicolor' is the framework according to which electroweak symmetry breaking is due to the breaking of chiral symmetry in a new strong interaction. The model proposes a different answer to the origin of mass, by means of a new mechanism to generate mass for the leptons. These ideas are inspired by the fact that a similar mechanism is already at work in the theory of the strong interactions, i.e. Quantum Chromodynamics (QCD). A fundamental requirement for any theory Beyond the Standard Model is that it does not spoil any low-energy prediction, i.e. that it is compatible with current observations. This is a severe constraint, which in Technicolor is implemented by the mechanism of walking, i.e. the slow running of the gauge coupling in an intermediate range of energies. This happens for near-conformal gauge theories. The question then becomes: is there a near-conformal gauge theory that can account for the observed electroweak symmetry breaking?
Numerical simulations of the theory, regularised on a space-time lattice and performed using Monte Carlo techniques, are the best tools for quantitative calculations. Given the experience of similar calculations in QCD accumulated over the last 30 years, it was realised that this type of research programme would require a flexible, scalable code able to run on state-of-the-art supercomputers.
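As a schematic illustration of the type of algorithm involved, the following C sketch implements a single Metropolis Monte Carlo sweep for a toy two-dimensional scalar field; the lattice size, action and update are hypothetical, and the example does not reproduce the gauge-theory algorithms used in HiRep.

/* Toy example: one Metropolis sweep for a 2D scalar field with
 * nearest-neighbour coupling. Illustrative only; the production
 * lattice gauge-theory updates in HiRep are far more involved. */
#include <stdlib.h>
#include <math.h>

#define L 32                      /* lattice extent (hypothetical) */

static double phi[L][L];          /* field configuration */

/* Change in a simple nearest-neighbour action when phi[x][y] -> trial. */
static double action_diff(int x, int y, double trial, double kappa)
{
    double nn = phi[(x + 1) % L][y] + phi[(x + L - 1) % L][y]
              + phi[x][(y + 1) % L] + phi[x][(y + L - 1) % L];
    double old = phi[x][y];
    return 0.5 * (trial * trial - old * old) - kappa * nn * (trial - old);
}

static void metropolis_sweep(double kappa, double step)
{
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++) {
            double trial = phi[x][y] + step * (2.0 * rand() / RAND_MAX - 1.0);
            double dS = action_diff(x, y, trial, kappa);
            /* Accept with probability min(1, exp(-dS)). */
            if (dS <= 0.0 || rand() / (double)RAND_MAX < exp(-dS))
                phi[x][y] = trial;
        }
}

int main(void)
{
    for (int sweep = 0; sweep < 100; sweep++)
        metropolis_sweep(0.25, 0.5);   /* hypothetical coupling and step size */
    return 0;
}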
Dr Antonio Rago and Dr Agostino Patella have been instrumental, as part of the collaborative work
of particle physics groups at Plymouth University, University of Swansea, University of Edinburgh,
CERN, and CP3 in Odense, in creating the innovative simulation suite HiRep to study a large class
of gauge theories that inform particle physics experiments. The programs are written in standard C and are parallelised using the standard MPI and OpenMP libraries. The code is portable across a variety of architectures, and has been tested and used on several machines, from high-performance PC clusters to the IBM BlueGene/L and /P, and their latest evolution, the BlueGene/Q. Plymouth University has a strong footprint in this project, since two of the three core developers of the HiRep suite (Rago,
Patella) are based at the university. Six indicative publications that made use of HiRep are listed in
the next section.
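As an indication of the hybrid parallelisation model mentioned above, the following C sketch combines MPI message passing across nodes with OpenMP threading within a node; the array size and the reduction performed are hypothetical, and the example does not reproduce HiRep's actual data layout or communication routines.

/* Minimal hybrid MPI + OpenMP sketch: each MPI rank owns a slice of a
 * large array and threads share the local work. Illustrative only;
 * HiRep's real parallelisation is considerably more sophisticated. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_local = 1 << 20;            /* local problem size (hypothetical) */
    double *field = malloc(n_local * sizeof(double));

    /* Local computation is shared among OpenMP threads within the rank. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < n_local; i++) {
        field[i] = (double)(rank + i) / n_local;
        local_sum += field[i] * field[i];
    }

    /* Inter-node communication is handled with MPI collectives. */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global norm^2 = %f on %d ranks\n", global_sum, size);

    free(field);
    MPI_Finalize();
    return 0;
}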
At the moment, HiRep is the only code available in the particle physics community that simulates
gauge theories with an arbitrary number of colours and with fermions in different representations, making it
the most versatile code to investigate the possibility of finding New Physics as an extension of the
Standard Model. These numerical studies require extremely demanding calculations, carried out
on the most powerful computers. We have worked in close collaboration with IBM to optimise our
code for new-generation machines and to inform the evolution of future hardware. For these purposes, we
have developed a portable benchmarking suite, BSMBench, based on HiRep.
References to the research
Rago and Patella are key developers of the code HiRep, which underpins this impact case study. With the help of HiRep, some key questions in Particle Physics have been answered, leading to high-profile publications. Below, we list six indicative publications that have made use of HiRep. The authors are listed in alphabetical order; bold font indicates authors from our unit of assessment. All publications are in international, top-level, peer-reviewed Particle Physics journals. In this sector, the impact factors of high-profile journals communicating original research results range from approximately 3 to 7. According to the SPIRES classification scheme of the high-energy physics community, articles that attract 100-249 citations are `very well known papers', while articles cited 50-99 times rank as `well-known papers'.
[1] F. Bursa, L. Del Debbio, D. Henty, E. Kerrane, B. Lucini, A. Patella, et al.,
Improved Lattice Spectroscopy of Minimal Walking Technicolor,
Phys. Rev. D84 (2011) 034506, 17 citations so far, impact factor of journal: 4.964
[2] B. Lucini, G. Moraitis, A. Patella and A. Rago,
A Numerical investigation of orientifold planar equivalence for quenched mesons,
Phys. Rev. D82 (2010) 114510, 5 citations so far, impact factor of journal: 4.964
[3] L. Del Debbio, B. Lucini, A. Patella, C. Pica, A. Rago,
The infrared dynamics of Minimal Walking Technicolor,
Phys. Rev. D82 (2010) 014510, 65 citations so far, impact factor of journal: 4.964
[4] L. Del Debbio, B. Lucini, A. Patella, C. Pica and A. Rago,
Mesonic spectroscopy of Minimal Walking Technicolor,
Phys. Rev. D82 (2010) 014509, 43 citations so far, impact factor of journal: 4.964
[5] L. Del Debbio, A. Patella and C. Pica,
Higher representations on the lattice: Numerical simulations. SU(2) with adjoint fermions,
Phys. Rev. D81 (2010) 094503, 129 citations so far, impact factor of journal: 4.964
[6] L. Del Debbio, B. Lucini, A. Patella, C. Pica, A. Rago,
Conformal versus confining scenario in SU(2) with adjoint fermions,
Phys. Rev. D80 (2009) 074507, 79 citations so far, impact factor of journal: 4.964
Details of the impact
Across the STEM subjects, High Performance Computing is widely expected to continue growing in importance for decades. It is likely that the highest levels of computing performance will be achieved through distributed hardware resources, such as compute nodes or web-based data storage. The performance of cloud computing critically hinges on both the performance of individual nodes and the ability to pass information between them and, to a certain extent, on the performance of the network between the head node and the end user.
Benchmarks for supercomputer architectures, such as LINPACK, have been available for many years. Generically, these benchmark suites do not differentiate between the relative importance of communication (RAM, inter-core, inter-node, network) and local computational power. We have developed a benchmarking suite, BSMBench, which bridges this gap.
The unique feature of our benchmarking tool is that it allows the relative weight of memory communication against single-core computational resource to be varied. This makes BSMBench a perfect tool for comparing different computing architectures, which may range from home computers to large cluster installations, and an attractive tool for a wide range of end users. BSMBench is the only publicly available benchmarking suite based on a Beyond the Standard Model (BSM) Particle Physics simulation environment; by changing the fermion representation and the number of colours, it offers the unique possibility of tuning the balance between memory communication and CPU computation, as illustrated schematically below.
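The following C sketch illustrates, in a deliberately simplified form, the separation between local computation and inter-node communication that such a benchmark weighs against each other; the problem sizes, routines and timing scheme are hypothetical and do not correspond to BSMBench's actual code, which varies the balance through the physics parameters of the simulated theory.

/* Illustrative timing of local computation versus MPI communication.
 * The sizes and routines here are hypothetical; BSMBench varies this
 * balance through the choice of gauge group and fermion representation. */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL  (1 << 22)   /* local volume (hypothetical) */
#define N_HALO   (1 << 16)   /* boundary data exchanged per step (hypothetical) */

static double site[N_LOCAL], halo_out[N_HALO], halo_in[N_HALO];

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int next = (rank + 1) % size, prev = (rank + size - 1) % size;

    /* Local floating-point work: stands in for the on-node linear algebra. */
    double t0 = MPI_Wtime();
    for (int i = 0; i < N_LOCAL; i++)
        site[i] = 0.5 * site[i] + 0.25;
    double t_compute = MPI_Wtime() - t0;

    /* Boundary exchange: stands in for inter-node communication. */
    t0 = MPI_Wtime();
    MPI_Sendrecv(halo_out, N_HALO, MPI_DOUBLE, next, 0,
                 halo_in,  N_HALO, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    double t_comm = MPI_Wtime() - t0;

    if (rank == 0)
        printf("compute: %.3f s, communication: %.3f s (site[0]=%.2f)\n",
               t_compute, t_comm, site[0]);

    MPI_Finalize();
    return 0;
}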
IBM (see source [1]) uses the BSMBench code suite both to assess the performance of new-generation machines, including the BlueGene/Q, and to guide the evolution of future hardware.
A third-party company, BSMBench Ltd [2,3] (named after our code suite), was created in 2012 to promote and exploit the know-how gained during the development of the code
BSMBench. Currently, this company is developing general-purpose parallel software based on
BSMBench that can be used in High Performance Computing applications in the finance,
aerospace engineering, weather forecasting and oil extraction sectors, among others.
In early 2013, Fujitsu [4] started collaborating with the developer of the code suite BSMBench and the company BSMBench Ltd to adapt our code suite and create a benchmarking tool for mainframes used for virtualisation. The main idea is to uncover unused resources during a virtualisation run, and to understand the restrictions underpinning their usage. Fujitsu is enhancing the BSMBench code suite by embedding a facility for measuring specific hardware metrics, such as L2/L3 cache fill levels and cache throughput.
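As a generic indication of how such cache metrics can be collected in software, the following C sketch reads hardware counters through the PAPI library; the chosen events and workload are illustrative assumptions, not the instrumentation actually embedded by Fujitsu.

/* Illustrative use of the PAPI library to read cache-related hardware
 * counters around a region of interest. A generic sketch, not the
 * instrumentation actually embedded by Fujitsu in BSMBench. */
#include <papi.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 22)

int main(void)
{
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialisation failed\n");
        return 1;
    }

    int eventset = PAPI_NULL;
    long long counts[2];
    PAPI_create_eventset(&eventset);
    /* Preset events: total L2 and L3 cache misses (availability is CPU-dependent). */
    if (PAPI_add_event(eventset, PAPI_L2_TCM) != PAPI_OK ||
        PAPI_add_event(eventset, PAPI_L3_TCM) != PAPI_OK) {
        fprintf(stderr, "requested counters not available on this CPU\n");
        return 1;
    }

    PAPI_start(eventset);

    /* Region of interest: a simple memory-bound sweep. */
    double *a = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;

    PAPI_stop(eventset, counts);
    printf("L2 misses: %lld, L3 misses: %lld (checksum %f)\n",
           counts[0], counts[1], a[N - 1]);

    free(a);
    return 0;
}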
BSMBench has been made available to third parties and the general public via GitHub [5]. Its release was announced to the computing community in a presentation at the International Supercomputing Conference 2012 [6]. BSMBench has also featured in the IT magazine `Linux Format' and has been distributed to its readers on more than 20,000 DVD copies [7,8,9].
The HPC-assisted research already carried out by our group was the subject of an Intel case study in 2010 [10]. This case study was also presented at the 21st Machine Evaluation Workshop (MEW), organised by STFC in Runcorn in November 2010, where the Plymouth HPC installation served as a showcase illustrating how supercomputer environments can be used efficiently for complex engineering problems.
Sources to corroborate the impact
[1] Emerging Solutions Executive, Computational Science Center, IBM T.J. Watson Research,
Cambridge Research Center, Cambridge, MA USA. E-mail: kjordan@us.ibm.com.
[2] Company Director, BSMBench Ltd, Dylan Thomas Centre, 1 Somerset Place, Swansea, SA1
1RR. E-mail: m.heuberger@bsmbench.org.
[3] http://www.cdrex.com/bsmbench-limited-8459844.html
[4] Director of ARCCA, Cardiff University, Redwood Building (Tower), King Edward VII Avenue,
Cardiff, Wales, CF10 3NB. E-mail: GuestMF@cardiff.ac.uk.
[5] http://www.bsmbench.org
[6] http://www.isc-events.com/isc12_ap/speakerdetails.php?t=speaker&o=10426&a=select&ra=search
[7] Linux Format, Issue 163, November 2012.
[8] Editor, Linux Format, 30 Monmouth Street, Bath BA1 2BW, E-mail:
graham.morrison@futurenet.co.uk.
[9] Editor, Linux Pro, Via Torino, 51, I-20063 Cernusco sul Naviglio (MI), Italy. E-mail:
massimilianozagaglia@sprea.it.
[10] Intel Case Study: Exploring new energy sources and the fundamental structure of matter,
Machine Evaluation Workshop (MEW), STFC in Runcorn, November 2010.