Discover the OpenACC talks, training and posters featured at GTC Digital
Accelerated computing is fueling some of the most exciting scientific discoveries today. For scientists and researchers seeking faster application performance, OpenACC’s directive-based programming model provides a simple yet powerful way to program accelerators without significant development effort. With OpenACC, a single version of the source code can deliver performance portability across platforms.
This year’s NVIDIA GTC Digital conference is offering a library of OpenACC talks and posters from academic scholars and industry luminaries that you can view on demand. The best part? Registration is free of charge!
Explore how OpenACC is being used across different disciplines and applications, including:
Toward an Exascale Earth System Model with Machine Learning Components: An Update [S21834]
Listen to Richard Loft, Director of Technology Development, Computational and Information Systems Laboratory at the U.S. National Center for Atmospheric Research (NCAR) and learn about an ambitious project that's combining exascale GPU computational power with machine-learning algorithms to radically improve weather and climate modeling. Having achieved performance portability for a standalone version of the Model for Prediction Across Scales-Atmosphere (MPAS-A) on heterogeneous CPU/GPU architectures across thousands of GPUs using OpenACC, NCAR has launched an effort to port the MOM-6 Ocean Model and are evaluating replacing atmospheric parameterizations with machine-learned emulators, including the atmospheric surface layer, cloud microphysics, and aerosol parameterizations.
Fully Exploiting a GPU Supercomputer for Seismic Imaging [S21451]
Total SA will discuss how they ported modern seismic applications like Reverse Time Migration, Full Wave Inversion, and One-Way Migration to the GPU-accelerated Pangea III supercomputer. This session will explain decisive code transformations to take full advantage of the computing power of GPUs, present different CUDA optimization techniques to achieve an asynchronous implementation entirely overlapping communications with the propagation kernels, compare OpenACC and CUDA programming models, and outline a new hybrid GPU-CPU data compression algorithm that vastly outperforms the CPU version.
Toward Industrial LES/DNS in Aeronautics: Leveraging OpenACC for Massively Parallel CPU+GPU Simulations [S21958]
Join David Gutzwiller, software engineer and head of HPC at Numeca, as he presents recent advances toward industrial LES/DNS computational fluid dynamics within the scope of the EU TILDA (Towards Industrial LES/DNS in Aeronautics) project. The TILDA project aims to complete high-fidelity industrial LES/DNS simulations with upwards of 1 billion degrees of freedom and a turnaround time on the order of one day. During this session, he will describe the development of FineFR, a high-order CFD solver supporting heterogeneous CPU+GPU architectures, emphasize the highly tuned OpenACC implementation, and present benchmark data and demonstration computations from the OLCF Summit supercomputer.
Directive-Based GPU Programming with OpenACC [CWE21815]
Listen to OpenACC experts as they present an overview of the OpenACC programming model and answer questions from the community about how to start accelerating scientific codes on GPUs, continue optimizing GPU code, start teaching OpenACC, host or participate in a hackathon, and more.
Multi-GPU Programming with Message-Passing Interface [S21067]
Watch this recorded presentation with Jiri Kraus, senior developer technology engineer at NVIDIA, to learn how to program multi-GPU systems or GPU clusters using the Message Passing Interface (MPI) together with OpenACC or NVIDIA CUDA. The tutorial will introduce MPI and how it can be combined with OpenACC or CUDA, then cover advanced topics like CUDA-aware MPI and how to overlap communication with computation to hide communication times.