[QE-developers] Best Practices for GPU HPC Implementation (Spack, NCG Singularity, etc.)

Hironori Kondo hirokondo at college.harvard.edu
Fri Oct 11 02:48:52 CEST 2024


Hello Pietro,

This is very helpful. Thank you for the prompt response. I have consulted
with the maintainers of Spack QE as well, and they have noted that the
nvhpc module is required (as expected), despite not being listed as a
dependency. Conversely, it seems that the cuda module that *is* listed is
scarcely applicable. I will be proceeding with this route.

I appreciate your help!

Best,
Hiro

*Hironori Kondo*
Harvard College | Class of 2025
A.B. Candidate in Applied Mathematics
Concurrent A.M. in Chemistry
hirokondo at college.harvard.edu | hkondo at mit.edu

On Thu, Oct 10, 2024 at 3:56 AM Pietro Davide Delugas <pdelugas at sissa.it>
wrote:

> Dear Hironori
>
> If you have Nvidia GPUs, the efficient and reliable way to use QE on them
> is to compile it with the nvhpc package.
> In that toolchain, most of the suite is accelerated and well tested.
> The toolchain for compiling QE is rather short:
> - the nvhpc suite (versions 23.4 or 23.11 are recommended)
> - any FFTW3 library with nvfortran support
> - any BLAS/LAPACK library with nvfortran support
> For example, you could directly use MKL built with nvfortran support, which
> provides both; but again, any working build of those libraries would do.
>
> OpenMPI library usually comes with the NVHPC package.
>
> With these components you can compile QE using either GNU autoconf (see
> <https://gitlab.com/QEF/q-e/-/wikis/Developers/Make-build-system>) or
> CMake (see <https://gitlab.com/QEF/q-e/-/wikis/Developers/CMake-build-system>).
>
> Indeed, using Spack is a straightforward way to assemble and verify the
> whole toolchain and then compile with the CMake build system; it is
> probably better to have nvhpc already installed on the system and to
> configure Spack to use it.
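>
> For example (a sketch, assuming nvhpc is provided as a module on your
> system; `cuda_arch=80` targets A100 and should be changed for your GPUs),
> you could register the system nvhpc with Spack and build the CUDA variant
> against it:
>
> ```shell
> # Make the system nvhpc visible to Spack (module name is illustrative)
> module load nvhpc/23.11
> spack compiler find        # registers nvc/nvfortran as %nvhpc
>
> # Build QE 7.3.1 with the CUDA variant using that compiler
> spack install quantum-espresso@7.3.1 +cuda cuda_arch=80 %nvhpc
> spack load quantum-espresso
> ```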
>
> ASE drives QE as a file-I/O calculator, so the two toolchains are
> independent; there is no need to integrate them into conda or any other
> kind of environment.
>
> Hope it helps
>
> best regards
> Pietro
>
> ------------------------------
> *From:* developers <developers-bounces at lists.quantum-espresso.org> on
> behalf of Hironori Kondo <hirokondo at college.harvard.edu>
> *Sent:* Wednesday, October 9, 2024 17:01
> *To:* developers at lists.quantum-espresso.org <
> developers at lists.quantum-espresso.org>
> *Subject:* [QE-developers] Best Practices for GPU HPC Implementation
> (Spack, NCG Singularity, etc.)
>
> Hello QE dev team,
>
> I hope this email finds you well. I am working on implementing QE in an
> HPC environment, with GPU functionality. I know there are many questions of
> this variety in the archives, but I think my question is more specific and
> has not been discussed thus far. My university HPC support team recommended
> that I contact the QE primary authors for the best guidance.
>
> From what I've seen, most people are installing QE with CUDA using the
> standard GNU toolchain-style process with HPC SDK. My university is not too
> keen on this, and recommends something tidier, if possible---Singularity,
> Mamba/Conda, Spack, etc.
>
> On the Singularity front, the latest image offered by Nvidia NGC is 7.1. I
> suspect this is due to the transition to OpenACC, but that's just a guess.
> I would like to stick with 7.3.1, especially given the improvements in EPW.
>
> On the Conda front, there doesn't seem to be a GPU option by default. I
> could try building within my Mamba environment with the standard process,
> but that seems less neat.
>
> On the Spack front, I see that there is QE 7.3.1, and there is a CUDA
> variant: https://packages.spack.io/package.html?name=quantum-espresso#
> However, it relies on the CUDA package (linked within the Spack page
> above), not the separate HPC SDK package:
> https://packages.spack.io/package.html?name=nvhpc
> I'm not sure what implications this has.
>
>
> I would be much obliged for your guidance on the best approach here. If it
> is relevant, I am managing my HPC workflow in quacc by Princeton's Andrew
> Rosen, which interfaces with QE via ASE.
>
> Best,
> Hiro
>
> *Hironori Kondo*
> Harvard College | Class of 2025
> A.B. Candidate in Applied Mathematics
> Concurrent A.M. in Chemistry
> hirokondo at college.harvard.edu | hkondo at mit.edu
>