January 24, 2020

From the Front Row at WACCPD '19

The ever-increasing heterogeneity in supercomputing applications has given rise to complex compute node architectures offering multiple, heterogeneous levels of massive parallelism. Extracting the maximum available parallelism from such systems requires sophisticated programming approaches that can provide scalable and portable solutions without compromising performance.

Recent architectural trends indicate that future exascale machines will rely heavily on accelerators for performance; however, the scientific community expects solutions that allow maintaining a single code base wherever possible to avoid duplicated effort. Software abstraction-based programming models, such as OpenACC and OpenMP, raise the level of abstraction of the code to reduce the burden on the programmer while improving productivity.

Addressing this theme, the recently concluded Sixth Workshop on Accelerator Programming Using Directives (WACCPD 2019), held in conjunction with Supercomputing 2019, covered all aspects of heterogeneous systems, focusing on key areas that will facilitate the transition to accelerator-based high-performance computing (HPC).

Chaired by Sandra Wienke from RWTH Aachen University and Sridutt Bhalachandra from Lawrence Berkeley National Laboratory (LBNL), WACCPD 2019 leveraged the expertise of a diverse community of thought leaders in HPC to deliver a stellar program covering innovative high-level language features, lessons learned while using directives to migrate scientific legacy code to parallel processors, and compilation and runtime scheduling techniques, among other topics.

Kicking off the workshop was Nicholas J. Wright, the Perlmutter chief architect and Advanced Technologies Group lead at the National Energy Research Scientific Computing Center (NERSC). His keynote, "Perlmutter - A 2020 Pre-Exascale GPU-accelerated System for NERSC: Architecture and Application Performance Optimization," highlighted the architecture of the new machine and how it was optimized to meet the performance and usability goals of NERSC's more than 7,000 users. Expected to be delivered in 2020, Perlmutter will be a Cray pre-exascale system comprising both AMD CPU-only nodes and GPU-accelerated nodes using NVIDIA Tensor Core GPUs, with more than three times the performance of Cori, NERSC's current platform.

Of the 13 high-quality full-paper submissions, seven peer-reviewed papers were accepted for presentation at WACCPD 2019. Topics ranged from porting scientific applications to heterogeneous architectures using directive-based programming models such as OpenACC and OpenMP, to proposing directive-based programming for math libraries, to addressing performance portability for heterogeneous architectures using frameworks like Kokkos. The best paper award went to researchers Hongzhang Shan and Zhengji Zhao from Lawrence Berkeley National Lab and Marcus Wagner from Cray for their work "Accelerating the Performance of Modal Aerosol Module (MAM) of E3SM Using OpenACC."

Best Paper Award at WACCPD 2019
Best Paper Award accepted by Zhengji Zhao from Lawrence Berkeley National Lab

Also featured was an invited talk, "The SPEC ACCEL Benchmark – Results and Lessons Learned," by Robert Henschel, director of Research Software and Solutions at Indiana University. First released in 2014 by the High-Performance Group (HPG) of the Standard Performance Evaluation Corporation (SPEC), the benchmark suite initially consisted of OpenCL and OpenACC components; in 2017, it was expanded with an OpenMP 4.5 target offload component. The talk introduced the benchmark, presented results, and discussed the lessons learned from developing and maintaining a directive-based benchmark, as well as the challenges of creating a follow-on suite.

The workshop concluded with a panel, "Convergence, Divergence, or New Approaches? – The Future of Software-Based Abstractions for Heterogeneous Supercomputing," moderated by Fernanda Foertter from NVIDIA. The panelists shared their insights on the future of software-based abstractions in heterogeneous HPC, pondering questions such as: Will these abstractions converge, diverge, or will new approaches be needed? What makes a good accelerator programming model? Is there a measure for this "goodness"? During the panel, the audience was encouraged to challenge the panelists with questions and share their own insights.

Today's HPC developers face a serious dilemma in deciding on a processor architecture and a parallel programming paradigm. Software-based abstractions for accelerators promise to lift this burden but also leave developers spoilt for choice – from open standards and domain-specific languages (DSLs) to proprietary approaches, and from open-source to closed-source solutions. The uncertainty surrounding the future usability, portability, and interoperability of these abstractions is unsettling for developers. It is imperative that we resolve these shortcomings to enable their broader adoption.

The proceedings of the workshop will be published in Springer's Lecture Notes in Computer Science series. WACCPD hopes to return at Supercomputing 2020. Visit www.waccpd.org for more details about the workshop series.

Author

Sridutt Bhalachandra
Sridutt Bhalachandra is an HPC Architecture and Performance Engineer at Lawrence Berkeley National Laboratory. He received his Ph.D. from the Computer Science department at the University of North Carolina-Chapel Hill (UNC) in 2018. While at UNC, he was a research assistant at the Renaissance Computing Institute (RENCI). He was a postdoctoral appointee in the Mathematics and Computer Science Division at Argonne National Laboratory before joining the Advanced Technologies Group (ATG) at NERSC, Berkeley Lab, as a staff member. Previously, he worked as a Systems Engineer at Infosys Labs, Bangalore. He also interned at Lawrence Livermore and Sandia National Laboratories in the summers of 2015 and 2016, respectively.