Why standards-based parallel programming should be in your HPC toolbox
HPC application developers have long relied on programming abstractions that were developed and used almost exclusively within traditional HPC. OpenMP was created over 25 years ago to simplify shared-memory parallel computing, because programming languages of the time had little or no such functionality and vendors were developing their own, incompatible abstractions for symmetric multiprocessing.
CUDA C was designed and released by NVIDIA in 2007 as a set of extensions to the C language to support massively parallel GPU programming, again because the C language lacked the functionality to express that parallelism directly. Both of these programming models have been very successful because they provide the necessary abstractions to overcome the shortcomings of the languages they extend in a user-friendly way.
However, the landscape has changed significantly in the years since these models were first released, and it’s time to re-evaluate their place in a programmer’s toolbox. In this article, I explain why you should consider parallel programming natively in ISO C++ and ISO Fortran.
Parallel programming becomes the norm
Parallel programming was once a niche area reserved for government labs, research universities, and certain forward-looking industries, but today it is a requirement across industries. For this reason, mainstream programming languages now support parallel programming natively, and an increasing number of development tools support these features. It is now possible to develop applications that are parallel from the start, without the need for a serial reference version.
Such parallel-first codes can be applied to any computer system, whether it’s based on multi-core processors, GPUs, FPGAs, or some other novel processor that hasn’t been invented yet, and they can be expected to work on day one. This frees developers from the need to port applications to new systems and allows them to focus on productively optimizing their application or expanding its capabilities instead.
NVIDIA offers three programming approaches for our platform, all based on our decades-long investment in libraries and accelerated compilers. All of these approaches are fully composable, giving the programmer the choice of how best to balance their productivity, portability, and performance goals.
ISO languages ensure performance and portability
Development of new applications should be done using ISO standard programming languages and the parallel features they provide. There is no better example of portable programming models than ISO languages. Developers should therefore expect apps written to these standards to work anywhere. Many developers we’ve worked with have found that the performance gains from refactoring their applications using standards-based parallelism in C++ or Fortran are already as good as, or better than, their existing code.
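To make this concrete, here is a minimal sketch of what standards-based parallelism looks like in practice: a SAXPY-style operation written entirely with the C++17 parallel algorithms. The file name and compile line are illustrative; nvc++ with the -stdpar flag is one compiler that can offload this loop to a GPU, and any conforming C++17 compiler can run it in parallel on CPUs.

```cpp
// saxpy_stdpar.cpp - minimal sketch of ISO C++ standard parallelism.
// Illustrative compile line for GPU offload with the NVIDIA HPC compiler:
//   nvc++ -stdpar=gpu -o saxpy saxpy_stdpar.cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    const float a = 2.0f;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    // The execution policy expresses the parallelism in pure ISO C++;
    // the same line runs on multicore CPUs or GPUs depending on the compiler.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });

    std::printf("y[0] = %f\n", y[0]);  // expect 4.0
}
```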
Some developers choose to perform further optimizations by introducing portable directives, such as OpenACC or OpenMP, to improve data movement or asynchrony and achieve even higher performance. The resulting application code remains fully portable and high performance. Developers who want the best possible performance in key parts of their applications can go the extra mile and optimize those parts with a lower-level approach, such as CUDA, to take advantage of everything the hardware has to offer. And, of course, all of these approaches interoperate well with our expert-tuned accelerated libraries.
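As a sketch of the directive approach described above, the following hypothetical relaxation routine uses an OpenACC data region to keep its arrays resident on the GPU across iterations, so the hot loops do not pay for a host/device copy on every step. The function and array names are illustrative, not from any particular application:

```cpp
// relax_acc.cpp - sketch of tuning data movement with OpenACC directives.
// Built with an OpenACC-capable compiler, e.g. nvc++ -acc (illustrative).
#include <vector>

void relax(std::vector<float>& u, std::vector<float>& v, int steps) {
    const int n = static_cast<int>(u.size());
    float* up = u.data();
    float* vp = v.data();

    // The data region keeps both arrays on the device for all steps,
    // avoiding a host/device transfer on every iteration.
    #pragma acc data copy(up[0:n]) create(vp[0:n])
    for (int s = 0; s < steps; ++s) {
        #pragma acc parallel loop
        for (int i = 1; i < n - 1; ++i)
            vp[i] = 0.5f * (up[i - 1] + up[i + 1]);

        #pragma acc parallel loop
        for (int i = 1; i < n - 1; ++i)
            up[i] = vp[i];
    }
}
```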
Expand standards to take advantage of innovations
There is a misconception in the industry that CUDA is a language NVIDIA uses to lock in users, but it is in fact our language for innovating and for exposing the capabilities of our hardware most directly. CUDA C++ and CUDA Fortran are in many ways co-design languages, in which we can expose hardware innovations and iterate quickly on the programming model. As best practices develop in the CUDA programming model, we believe they can and should be codified into the standards.
For example, building on our customers’ successes with mixed-precision arithmetic, we worked with the C++ committee to standardize extended floating-point types in C++23. Thanks in large part to the work of our math libraries team, we have worked with the community on a standardized linear algebra interface for C++ that will fit well not only with our libraries but also with community and proprietary libraries from other vendors. We strive to improve parallel and asynchronous programming in the ISO standard languages because it’s the best thing for our customers and for the community at large.
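As a small illustration of those extended floating-point types, the sketch below uses the C++23 &lt;stdfloat&gt; header. These types are optional, so the code guards on the standard feature-test macros; it assumes a C++23 toolchain on a platform that provides them:

```cpp
// fp16_sketch.cpp - sketch of C++23 extended floating-point types.
// Assumes a C++23 compiler and library on a platform that supports them.
#include <cstdio>
#include <stdfloat>

int main() {
#if defined(__STDCPP_FLOAT16_T__) && defined(__STDCPP_BFLOAT16_T__)
    std::float16_t  h = 1.5f16;   // IEEE binary16
    std::bfloat16_t b = 1.5bf16;  // bfloat16, common in mixed precision
    // Mixed-precision style: low-precision inputs, wider accumulator.
    float acc = static_cast<float>(h) + static_cast<float>(b);
    std::printf("acc = %f\n", acc);
#else
    std::puts("float16/bfloat16 not supported on this platform");
#endif
}
```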
What do the developers think?
Professor Jonas Latt from the University of Geneva uses nvc++ and C++ parallel algorithms in the Palabos library and said, “The result produces cutting-edge performance, is highly didactic, and introduces a paradigm shift in cross-platform CPU/GPU programming to the community.”
Dr. Ron Caplan of Predictive Science Inc. said of his experience with nvfortran and Fortran Do Concurrent, “I can now write far fewer directives and still expect high performance from my Fortran applications.”
And Simon McIntosh-Smith from the University of Bristol said when presenting his team’s results using nvc++ and parallel algorithms, “ISO C++ versions of the code were simpler, shorter, easier to write, and should be easier to maintain.”
These are just a few of the developers who are already reaping the benefits of using standards-based parallelism in their development.
Standards-based parallel programming resources
NVIDIA offers a range of resources to help you get started with standards-based parallelism.
Our HPC Software Development Kit (SDK) is a free software package that includes:
- NVIDIA HPC compilers for C, C++, and Fortran
- The CUDA NVCC compiler
- A comprehensive set of accelerated math libraries, communication libraries, and core libraries for data structures and algorithms
- Debuggers and profilers
The HPC SDK is available for free on x86, Arm, and OpenPOWER platforms, whether or not you have an NVIDIA GPU, and is even included in Amazon’s HPC software stack for Graviton3.
NVIDIA On-Demand also offers several relevant recordings to get you started (try “No More Porting: Coding for GPUs with Standard C++, Fortran, and Python”), as well as our posts on the NVIDIA Developer Blog.
Finally, I encourage you to register for GTC Fall 2022, where you’ll find even more discussion of our software and hardware offerings, including more information on standards-based parallel programming.
About Jeff Larkin
Jeff is a Principal HPC Application Architect on the HPC Software team at NVIDIA. He is passionate about advancing and adopting parallel programming models for high performance computing. He was previously a member of NVIDIA’s Developer Technology group, specializing in performance analysis and optimization of high-performance computing applications. Jeff is also chair of the OpenACC Technical Committee and has worked in the OpenACC and OpenMP standards bodies. Prior to joining NVIDIA, Jeff worked at the Cray Supercomputing Center of Excellence, located at Oak Ridge National Laboratory.