OpenACC.org will be represented at SC18 by its members, supporters, and users. Please join us at the conference to learn more about OpenACC, share your research and feedback, and become a part of our growing user community and organization.
OpenACC Demo Station @ Indiana University Booth
Come by the OpenACC.org station at the Indiana University booth #2400 to meet the makers of the specification, compiler implementers, and researchers using OpenACC.
User Group Meeting
OpenACC User Group Meeting
Tuesday, November 13, 7-10 PM, Dallas, Texas, USA.
OpenACC users will gather to discuss OpenACC-related research, upcoming events, classes, and available resources, and to brainstorm what can be done in the future to help the community grow. Seating is limited, so please register in advance if you'd like to attend.
Workshop
Fifth Workshop on Accelerator Programming Using Directives (WACCPD)
Sunday, November 11, 9 AM - 12:30 PM, Dallas, Texas, USA
The focus of this workshop is to explore the 'X' component of the hybrid MPI+X programming approach. We look forward to technical papers discussing innovative high-level language features and their (early prototype) implementations needed to address hierarchical heterogeneous systems; stories and lessons learned while using directives to migrate scientific legacy code to parallel processors; state-of-the-art compilation and runtime scheduling techniques; techniques to optimize performance; and mechanisms to keep communication and synchronization efficient. For more details, visit the workshop page.
BoF
OpenACC API User Experience, Vendor Reaction, Relevance, and Roadmap
Tuesday, November 13, 5:15 - 6:45PM, D171
OpenACC, a directive-based high-level parallel programming model, has gained rapid momentum among scientific application users, the key drivers of the specification. The user-friendly programming model has facilitated acceleration of over 130 applications, including CAM, ANSYS Fluent, Gaussian, VASP, and Synopsys, on multiple platforms, and is also seen as an entry-level programming model for top supercomputers (Top500 list) such as Summit, Sunway TaihuLight, and Piz Daint. As in previous years, this BoF invites scientists, programmers, and researchers to discuss their experiences adopting OpenACC for scientific applications, and to learn about implementers' roadmaps and the latest developments in the specification.
Talks
Speeding up Programs with OpenACC in GCC
Tuesday, November 13, 10:30AM, Booth #2400
Proven in production use for decades, GCC (the GNU Compiler Collection) offers C, C++, Fortran, and other compilers for a multitude of target systems. Over the last few years, we (formerly known as CodeSourcery, now a group in Mentor, a Siemens Business) added support for the directive-based OpenACC programming model. Requiring only a few changes to your existing source code, OpenACC allows for easy parallelization and code offloading to GPUs. We will present a short introduction to GCC and OpenACC, the implementation status, examples, and performance results.
OpenACC: Your Automatic Transmission for Heterogeneous Supercomputing
Tuesday, November 13, 11:00 - 11:25AM, Booth #2417
Come learn why the authors of VASP, Fluent, Gaussian, Synopsys, and numerous other science and engineering applications are using OpenACC. OpenACC supports and promotes scalable parallel programming on both multicore CPUs and GPU-accelerated systems, enabling large production applications to port effectively to the newest generation of supercomputers. It has very well-supported interoperability with CUDA C++, CUDA Fortran, MPI, and OpenMP, allowing you to optimize each aspect of your application with the appropriate tools. OpenACC has proven to be the ideal on-ramp to parallel and GPU computing, even for those who need to tune their most important kernels using libraries or CUDA. Come see how you can try OpenACC with the free PGI Community Edition compiler suite.
Latest Results from the Summit Supercomputer
Tuesday, November 13, 12:00 - 12:25PM, Booth #2417
This presentation will communicate selected early results from application readiness activities at the Oak Ridge Leadership Computing Facility (OLCF), in preparation for Summit, the Department of Energy Office of Science's new supercomputer operated by Oak Ridge National Laboratory. With over 9,000 POWER9 CPUs and 27,000 V100 GPUs, high-bandwidth data movement, and large node-local memory, Summit's architecture is proving to be effective in advancing performance across diverse applications in traditional modeling and simulation, high-performance data analytics, and artificial intelligence. These advancements in application performance are being achieved with small increases in Summit's electricity consumption as compared with previous supercomputers operated at OLCF.
3P to Science using OpenACC: Performance, Productivity, and Portability
Tuesday, November 13, 4:00 - 4:25 PM, Booth #2417
Architectures are becoming increasingly heterogeneous, offering developers a rich variety of computing resources. While these architectures benefit from customized optimization strategies, scientific developers tend to prefer 'write-once' code that is portable yet performance-efficient and migratable to rapidly changing hardware. This talk will present stories of porting scientific applications to state-of-the-art heterogeneous computing systems using OpenACC. Applications will range across the molecular dynamics, nuclear physics, neutrino experiment, and climate domains.
OpenACC programming lessons learned from refactoring a meteorological model for CPU/GPU portability
Wednesday, November 14, 10:30AM, Booth #2400
Quite often, good GPU performance results are shown without taking you behind the scenes to see how they were obtained. Was the code twisted into a pretzel to make it run fast? What problems were encountered, and how were they dealt with? In this talk we will discuss some problems encountered, solutions found, and lessons learned while using OpenACC to refactor NCAR's flagship global meteorological model, the Model for Prediction Across Scales (MPAS), for GPUs.