European teams across scientific disciplines come together to advance their work with GPU acceleration
Starting on April 8th, ten teams worked diligently with mentors for a five-day GPU Hackathon at Jülich Supercomputing Centre (JSC). The event received a record number of applications, underscoring the growing interest in, and expectation that, GPUs will be a key technology for future supercomputers. Being able to use GPUs is thus becoming a must for computational scientists.
The participating teams came from Germany, France, and as far afield as Mexico, and represented fields spanning atmospheric and earth sciences, biomedicine, and molecular dynamics.
Team SimLab Climate Science from JSC worked on JUelich RApid Spectral SImulation Code or ‘JURASSIC,’ a fast radiative transfer model for the infrared spectral region in the Earth's atmosphere, while another German team from Climate Service Center Germany (GERICS) collaborated to refactor the hydrodynamical core subroutine of their regional climate model REMO and identify further bottlenecks and subroutines suited to porting to GPUs.
During the GPU Hackathon, different computational fluid dynamics applications were explored, including DYNAMICO, an application that solves Navier-Stokes equations to study flows in the atmosphere, and JADIM, a software developed by Institut de Mecanique des Fluides de Toulouse (IMFT) that also solves Navier-Stokes equations to study multi-phase flows. The Navier-Stokes equations describe the behavior of fluids and are particularly useful for modeling the weather or ocean currents, assisting in aircraft and vehicle design, and even explaining the mechanics of blood flow. Solving these equations remains one of the most challenging problems in numerical physics today.
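For reference, one common incompressible form of the Navier-Stokes equations is written below; the exact formulation each code discretizes (hydrostatic, multi-phase, etc.) differs in detail.

```latex
% Momentum balance and mass conservation for an incompressible fluid:
% \mathbf{u}: velocity, p: pressure, \rho: density,
% \nu: kinematic viscosity, \mathbf{f}: body forces (e.g. gravity)
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p
  + \nu\,\nabla^{2} \mathbf{u}
  + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```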
Other applications that teams collaborated on focused on quantum chromodynamics (QCD), used for simulating fundamental forces in physics, molecular dynamics as applied to theoretical chemistry, and multiparticle collision dynamics (MPC), used for investigating the properties of mesoscale objects such as colloids or polymer solutions.
Relevant to the medical arena, NAStJA is a parallel framework for stencil code algorithms that is being developed to simulate Cellular Potts Models used for modeling the collective behavior of cellular structures. Additionally, BRAIMMU is a code that aims to simulate the immune system of the brain during the course of Alzheimer's disease.
At the conclusion of the event, teams reported on their specific progress. By their own assessment, all teams made significant progress. Some teams were able to run selected kernels and start code refactoring, while others managed to get their full applications running with some parts accelerated on multiple GPUs. The BRAIMMU team worked to map out a completely new data layout that will make future GPU parallelization straightforward, with an estimated 20X speedup. Another team working on a QCD application managed to increase performance from 1 to almost 7 TFlop/s when using 16 NVIDIA Tesla V100 GPUs instead of 16 KNL processors.
All this progress would not have been possible without the mentors that supported the different teams. Many thanks go to them as well as to their home organizations: CSCS, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), HPE, Institute for Development and Resources in Intensive Scientific Computing (IDRIS), JSC, NVIDIA, and the University of Sheffield.
Additional GPU Hackathons are scheduled throughout the year. For further information and to apply, visit https://www.olcf.ornl.gov/for-users/training/gpu-hackathons/.