The GTC 2022 conference offers several opportunities to learn more about the intersection of HPC, AI, and data science. Browse a variety of talks, tutorials, and posters across topics such as OpenACC and programming languages, developer tools, and industry-specific research and applications.


GTC 2022 Keynote

Tuesday, September 20 | 8:00 - 9:30 AM PDT 

Don't miss this keynote from NVIDIA founder and CEO Jensen Huang as he speaks on the future of computing.


Accelerating a New Path to Innovation, Efficiency, and Discovery

Ian Buck, Vice President of Hyperscale and HPC, NVIDIA

Tuesday, September 20 | 10:00 - 10:50 AM PDT 

Accelerated computing is transforming the world of AI and computational science, from the data center to the cloud and to the edge. NVIDIA’s Ian Buck, vice president of hyperscale and HPC, will provide an in-depth overview of the latest news, innovations, and technologies that will help companies, industries, and nations reap the benefits of AI supercomputing.

CUDA: New Features and Beyond 

Stephen Jones, CUDA Architect, NVIDIA

Tuesday, September 20 | 11:00 - 11:50 AM PDT 

Learn about the latest additions to the CUDA platform, language, and toolkit, and what the new Hopper GPU architecture brings to CUDA. Presented by one of the architects of CUDA, this engineer-focused session covers all the latest developments in NVIDIA's GPU developer ecosystem and looks ahead to where CUDA is going over the coming year.

A Deep Dive into DPU Computing: Addressing HPC/AI Performance Bottlenecks

Gilad Shainer, SVP Networking, NVIDIA

Tuesday, September 20 | 11:00 - 11:50 AM PDT 

AI and scientific workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, the traditional GPU-CPU architecture increasingly suffers from imbalanced computing, data latency, and a lack of parallel data pre-processing. The data processing unit (DPU) brings a new tier of computing that addresses these bottlenecks and enables, for the first time, compute overlap and nearly zero communication latency. We'll dive deep into DPU computing and how it can help address long-standing performance bottlenecks, and we'll present performance results for a variety of HPC and AI applications.

A Deep Dive into the Latest HPC Software

Timothy Costa, Director, HPC & Quantum Computing Product, NVIDIA

Tuesday, September 20 | 12:00 - 12:50 PM PDT 

Take a deep dive into the latest developments in NVIDIA software for HPC applications, including a comprehensive look at what’s new in programming models, compilers, libraries, and tools. We'll cover topics of interest to HPC developers, targeting traditional HPC modeling and simulation, HPC+AI, scientific visualization, and quantum computing.

Tackling New Architectures and Workflows with the Latest CUDA Developer Tools

Jackson Marusarz, Technical Product Manager, NVIDIA

Tuesday, September 20 | 12:00 - 12:50 PM PDT 

The upcoming Hopper-based platforms, as well as the Grace Hopper Superchip, are exciting developments for high-performance computing. Taking advantage of this new hardware takes great software, and CUDA developer tools are here to help. We'll give a brief overview of the tools available to developers for free, then detail the newest features and explain how they help users identify performance and correctness issues, pinpoint where they occur, and explore options to fix them. We'll pay specific attention to features supporting new architectures. CUDA developer tools are designed in lockstep with the CUDA ecosystem, including hardware and software. With new technologies like the Grace Hopper Superchip, visibility into, and optimization of, the entire platform are key to unleashing the next level of accelerated computing performance. This presentation will prepare you for that move to the leading edge.

Defining the Quantum-Accelerated Supercomputer

Timothy Costa, Director, HPC & Quantum Computing Product, NVIDIA

Tuesday, September 20 | 1:00 - 1:50 PM PDT

Quantum computing has the potential to offer giant leaps in computational capability, impacting a huge range of industries from drug discovery to portfolio optimization. Realizing these benefits requires pushing the boundaries of quantum information science now: developing algorithms, researching more capable quantum processors, and creating tightly integrated quantum-classical systems and tools. We'll review these challenges facing quantum computing, offer insight into how GPU supercomputing can help move industries toward quantum advantage, and discuss the latest developments in software and systems for tightly integrated quantum-classical computing.

NVIDIA's Earth-2: Progress and Challenges Towards Building a Digital Twin of the Earth for Weather and Climate

Anima Anandkumar, Senior Director of ML Research, NVIDIA and Karthik Kashinath, Principal Engineer and Scientist, AI-HPC and Engineering Lead, Earth-2, NVIDIA

Tuesday, September 20 | 2:00 - 2:50 PM PDT 

NVIDIA's recently launched Earth-2 initiative aims to build digital twins of the Earth to address one of the most pressing challenges of our time: climate change. Earth-2 aims to improve our predictions of extreme weather and projections of climate change, and to accelerate the development of effective mitigation and adaptation strategies, all using the most advanced and scientifically principled machine learning methods at unprecedented scale. Combining accelerated computing with physics-informed machine learning on the largest supercomputing systems today, Earth-2 will provide actionable weather and climate information at regional scales. We'll present progress and challenges toward building digital twins of the Earth at extreme scale. Highlights include global extreme-weather emulation at high resolution with unprecedented accuracy, speed, and scale; calibrated ensemble forecasting; engineering innovations to address massive scale; and progress toward climate emulation.

Developing HPC Applications with Standard C++, Fortran, and Python 

Jeff Larkin, HPC Architect, NVIDIA

Wednesday, September 21 | 9:00 - 9:50 AM PDT 

Learn to develop for GPU and non-GPU systems using the latest features of the C++, Fortran, and Python programming languages. Applications written in standard programming languages can take advantage of GPU and multicore-CPU platforms out of the box. Once running on GPUs, NVIDIA-accelerated libraries, CUDA C++, CUDA Fortran, or OpenACC can be used to further improve application performance without giving up productivity or portability. Learn about the latest parallel programming capabilities of ISO C++, ISO Fortran, and Python, and how to use NVIDIA's HPC software stack to obtain the best possible performance from your application. See what's possible and learn best practices for writing parallel code with standard language parallelism.
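To give a flavor of the standard-language-parallelism idea described above, here is a minimal, CPU-only Python sketch: the same algorithm is expressed once serially and once through the standard library's executor interface, with scheduling left to the runtime. (The saxpy workload is a made-up example; actual GPU execution in the session relies on NVIDIA's HPC software stack, which is not shown here.)

```python
# CPU-only sketch of standard-language parallelism using only the
# Python standard library (hypothetical saxpy workload).
from concurrent.futures import ThreadPoolExecutor

def saxpy(args):
    a, x, y = args
    return a * x + y

a = 2.0
work = [(a, x, 1.0) for x in range(8)]

# Serial version: a plain comprehension over the data.
serial = [saxpy(w) for w in work]

# Parallel version: the same algorithm expressed through executor.map,
# leaving work distribution to the runtime rather than the application.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(saxpy, work))

assert serial == parallel
```

The point mirrors the session's: the algorithm is written once against a standard interface, and the execution strategy (serial, multicore, or accelerated) is a property of the platform, not the source code.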

Performance Results for NCAR Community Models on GPU

Thomas Hauser, Director of the Computational and Information Systems Laboratory, NCAR

Wednesday, September 21 | 10:00 - 10:50 AM PDT

The upcoming NCAR Derecho supercomputer will provide 20% of its sustained computing capability from compute nodes with four NVIDIA A100 GPUs each, with the remainder coming from dual-socket nodes with AMD EPYC 7763 Milan processors. We'll describe NCAR's collaborative approach to preparing codes for the Accelerated Scientific Discovery (ASD) program, outline the science objectives for a set of ASD projects, and provide performance results for the NCAR community models used in these projects.

Journey Toward Zero-Carbon Emissions Leveraging AI for Scientific Digital Twins 

Dirk Van Essendelft, General Engineer, National Energy Technology Laboratory; William Epting, Researcher, National Energy Technology Laboratory; and Tarak Nandi, Data Scientist II, Battelle / National Energy Technology Laboratory

Wednesday, September 21 | 11:00 - 11:50 AM PDT 

We'll look at the cutting-edge physics-informed machine learning technology supported by Modulus and Omniverse and how NETL leverages it in its digital twin application to reach its zero-carbon emissions target. We'll also hear about NETL's plans, building on its AI expertise and on Modulus as its AI platform, to extend these use cases to other problems such as carbon sequestration.


Fundamentals of Accelerated Computing with CUDA C/C++

Robbie Searles, Solutions Architect, NVIDIA

Monday, September 19 | 6:00 AM - 2:00 PM PDT 

The CUDA computing platform enables CPU-only applications to be accelerated and run on the world's fastest massively parallel GPUs.

Experience C/C++ application acceleration by:

          Accelerating CPU-only applications so their latent parallelism runs on GPUs

          Utilizing essential CUDA memory management techniques to optimize accelerated applications

          Exposing accelerated application potential for concurrency and exploiting it with CUDA streams

          Leveraging Nsight Systems to guide and check your work


Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulation.

An Introduction to cuQuantum

Carlo Nardone, Senior Solutions Architect, NVIDIA

Tuesday, September 20 | 2:00 - 4:00 AM PDT 

Modern quantum computing systems are noisy, remotely hosted resources that have enabled experimentation but are currently incapable of application-specific quantum advantage. Research and development activities promise to considerably advance this situation, and we are starting to observe quantum-classical systems with less noise and tighter coupling between CPU and quantum processing resources, enabling dynamic circuit execution based on qubit measurement readout. In this emerging era of quantum processing, the community particularly needs robust circuit simulation technologies for debugging and verification, as well as for quantum applications research.

This tutorial will introduce participants to cuQuantum, the NVIDIA solution for GPU-accelerated quantum circuit simulation, which is embedded in several of the leading quantum circuit simulation frameworks. Specifically, we'll present a hands-on tutorial demonstrating performant classical simulation of quantum workflows, highlighting the functionality of cuQuantum's state vector and tensor network libraries, cuStateVec and cuTensorNet.
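To illustrate the kind of computation a state-vector simulator performs (and that cuStateVec accelerates at scale), here is a toy, framework-free Python sketch that prepares a two-qubit Bell state. The helper names are made up for illustration; real workflows would use one of the frameworks above.

```python
# Toy state-vector simulation in pure Python: amplitudes are a list of
# 2^n floats, and gates act by pairing amplitudes that differ in one bit.
import math

def apply_1q(state, gate, target):
    """Apply a 2x2 gate matrix to the `target` qubit of a state vector."""
    new = state[:]
    step = 1 << target
    for i in range(len(state)):
        if i & step == 0:  # pair amplitudes differing only in the target bit
            a, b = state[i], state[i | step]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[i | step] = gate[1][0] * a + gate[1][1] * b
    return new

def apply_cnot(state, control, target):
    """Where the control bit is set, swap amplitudes that flip the target."""
    new = state[:]
    for i in range(len(state)):
        j = i ^ (1 << target)
        if i & (1 << control) and i < j:
            new[i], new[j] = state[j], state[i]
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Bell state: H on qubit 0, then CNOT(control=0, target=1).
state = [1.0, 0.0, 0.0, 0.0]                    # |00>
state = apply_1q(state, H, target=0)            # (|00> + |01>) / sqrt(2)
state = apply_cnot(state, control=0, target=1)  # (|00> + |11>) / sqrt(2)
```

The cost of this approach doubles with every added qubit, which is exactly why GPU-accelerated simulation is needed for research-scale circuits.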


Familiarity with Python

Familiarity with basic quantum computing concepts

Familiarity with popular Python quantum computing frameworks like Cirq, Qiskit, or PennyLane

Accelerated Chemical Fingerprinting and Similarity Search with NVFP and NVSS on NVIDIA GPUs

Neel Patel, Technical Marketing Engineer, Clara - Drug Discovery, NVIDIA and Venkatesh Mysore, Principal Solutions Architect, NVIDIA

Wednesday, September 21 | 9:00 - 11:00 AM PDT 

Recent advances in combinatorial, computational, and synthetic chemistry have led to a rapidly expanding accessible chemical space, including small-molecule compounds suitable for drug discovery. Efficient exploration of such large chemical libraries can impact and expedite the process of drug discovery. For this purpose, computational chemists and healthcare researchers can leverage NVIDIA Fingerprinting (NVFP) and Similarity Search (NVSS), enabled by the highly optimized RAPIDS data science framework.

In this course, you'll get a short introduction to accelerated chemical fingerprinting and similarity search, followed by a deep dive into the application of a containerized version of NVFP and NVSS, where GPU acceleration is achieved using C++ OpenACC/OpenMP built on top of RDKit. We'll provide a hands-on walkthrough of generating fingerprints and performing similarity search over millions of drug-like small molecules.
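The core idea behind fingerprint similarity search can be sketched in a few lines of pure Python: compounds are encoded as binary fingerprints and ranked by Tanimoto similarity to a query. (The bit patterns and molecule names below are made up for illustration; real pipelines derive fingerprints with RDKit and run the search on GPUs.)

```python
# Tanimoto similarity over binary fingerprints, in pure Python.

def tanimoto(fp_a: int, fp_b: int) -> float:
    """Tanimoto(A, B) = |A & B| / |A | B|, counting set bits."""
    inter = bin(fp_a & fp_b).count("1")
    union = bin(fp_a | fp_b).count("1")
    return inter / union if union else 0.0

# Hypothetical fingerprint library (real fingerprints are thousands of bits).
library = {
    "mol_1": 0b10110010,
    "mol_2": 0b10110011,
    "mol_3": 0b01001100,
}
query = 0b10110010

# Rank the library by similarity to the query fingerprint.
ranked = sorted(library, key=lambda name: tanimoto(query, library[name]),
                reverse=True)
```

At library sizes of millions of compounds this ranking becomes an embarrassingly parallel scan, which is what makes it such a natural fit for GPU acceleration.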


Basic familiarity with Python and Docker

Debugging and Analyzing Correctness of CUDA Applications

Steve Ulrich, Software Engineering Manager, NVIDIA and Nicolas Poitoux, Systems Software Engineer, NVIDIA

Wednesday, September 21 | 10:00 AM - 12:00 PM PDT 

Attendees will gain experience with NVIDIA's debugging and correctness tools by using CUDA-GDB and compute-sanitizer on multiple small CUDA sample applications. In this workshop, you'll learn the basics of Nsight Visual Studio Code Edition and CUDA-GDB: how to build a project, how to start the debugger, how to detect and remediate hardware exceptions, and how to inspect and modify GPU memory at runtime. You'll also learn how to build an application to run under compute-sanitizer, how to use its memcheck, initcheck, and racecheck tools, how to interpret their reports, and how to use some of their optional features.


Basic familiarity with CUDA

Basic familiarity with a Linux shell

GPU Acceleration with the C++ Standard Library

Gonzalo Brito, Developer Technology Engineer, NVIDIA

Thursday, September 22 | 7:00 - 9:00 AM PDT 

Harnessing the incredible acceleration of NVIDIA GPUs is easier than ever. For over a decade, NVIDIA has collaborated with the C++ standards committee on the adoption of features that enable parallel programming without additional extensions or APIs. Thanks to this work, developers can now write GPU-accelerated C++ code using only standard language features: no language extensions, pragmas, directives, or non-standard libraries.

Standard language parallelism is the simplest, most productive, and most portable approach to accelerated computing. It requires nothing more than ISO standard C++ and allows developers to write applications that are parallel-first, so they never need to be ported to run on new platforms or GPU accelerators.

In this interactive, hands-on workshop, we'll introduce how to write GPU-accelerated applications using only C++ standard language features. By the time you complete this workshop, you'll be able to:

          Rewrite serial code to instead use C++ standard template library parallel algorithms.

          Use ISO C++ execution policies to indicate when algorithms may be run in parallel on platforms supporting parallelism.

          Use the NVIDIA HPC C++ compiler (NVC++) to compile standard C++ algorithms for execution on NVIDIA GPUs.

          Write C++ applications that are parallel by default so they can run without modification on GPU-accelerated (and many other) platforms.


Familiarity with the C++ programming language

Optimizing CUDA Machine Learning Codes with Nsight Profiling Tools

Felix Schmitt, Senior System Software Engineer, NVIDIA and Sneha Kottapalli, Senior Systems Software Engineer, NVIDIA

Thursday, September 22 | 8:00 - 10:00 AM PDT 

Learn how to use NVIDIA's Nsight tools for analyzing and optimizing CUDA applications. You'll use Nsight Systems to analyze overall application structure and explore parallelization opportunities for deep learning training, and Nsight Compute to analyze and optimize CUDA kernels, using an online machine learning code for 5G as the example.


Basic familiarity with CUDA and GPUs

Basic familiarity with executing commands in Linux