At SC19, we announced the latest update to the OpenACC language specification, version 3.0. It includes a number of minor updates in response to user requests, along with some clarifications. We'll start by describing the process we use to develop the language, then cover what's new in 3.0 and what else we're working on for subsequent versions.
First GPU Hackathon at Sheffield brings together researchers across Ireland and the UK
The University of Sheffield partnered with NVIDIA and the Oak Ridge Leadership Computing Facility (OLCF) to hold its first GPU hackathon, where seven application teams worked alongside mentors with GPU-programming expertise to accelerate their scientific codes on GPUs.
GPU Hackathon Participants Tackle Code Optimization for Increased Performance
In today’s competitive research environment, the compute capabilities of scientific applications are critical to the success of many academic research programs. To enable new advances, institutions are turning to specialized teams that help researchers create the most efficient, scalable, and sustainable research codes possible by applying cross-disciplinary computational techniques to new and emerging areas of science.
European teams across scientific disciplines come together to advance their work with GPU acceleration
In November 2018, OpenACC announced the latest update to the specification, version 2.7. It includes a number of minor updates to the previous version 2.6, in response to user requests arising from their experiences using OpenACC in real applications. Many changes throughout the text are intended to clarify and simplify the specification without changing its meaning. Here we'll go through the nontrivial changes and their impact.
In this blog, we will be discussing the history of the OpenACC GCC implementation, its availability, and enhancements to OpenACC support in GCC. You will also learn about a recent project to assess and improve the performance of codes compiled with GCC’s OpenACC support.
GTC this year was overwhelming in size and dazzled with innovations. If you haven't experienced the conference yet, we highly recommend checking it out; the amount of information pouring through talks, tutorials, demos, panels, posters, and startup showcases is hard to imagine. You'll find everything you need to start or optimize your Deep Learning or HPC project on GPUs.
Everyone is in a hurry today. And why not? We have CPUs with more cores than ever, and we have GPUs with over 5,000 cores. The way to use these cores is to expose parallelism in our code; by doing that we can run faster (I don't believe anyone has ever asked for less performance). One of the best ways to do this is to use OpenACC. OpenACC is great because it is simple to use (just "comments" in your code), gives the user control over the parallelization, and is portable.