From hubertus.vandam at uni-due.de Tue Feb 4 16:40:17 2025
From: hubertus.vandam at uni-due.de (van Dam, Dr. Hubertus)
Date: Tue, 4 Feb 2025 15:40:17 +0000
Subject: [QE-users] Installing QE for GPUs: LAXlib says cuSOLVER does not have cusolverDnZhegvdx
Message-ID: 

Hi,

I am trying to compile Quantum ESPRESSO 7.4 for GPUs. I am using NVHPC 23.9 with CUDA 12.2, and CMake 3.29.6 to configure the code. The machine is running AlmaLinux. Most pieces seem straightforward, but when CMake gets to LAXlib it fails:

    -- Looking for cusolverDnZhegvdx
    -- Looking for cusolverDnZhegvdx - not found
    CMake Error at LAXlib/CMakeLists.txt:32 (message):
      The version of CUDAToolkit chosen by the PGI/NVHPC compiler internally
      doesn't contain cusolverDnZhegvdx. cuSOLVER features used by LAXLib are
      only supported since CUDAToolkit 10.1 release. Use a newer compiler or
      select a newer CUDAToolkit internal to the PGI/NVHPC compiler.

The CUDAToolkit I am using is clearly newer than 10.1, and I have checked that the cuSOLVER library exists and contains cusolverDnZhegvdx. Nevertheless, LAXlib does not seem to be able to find it. Note that LAXlib/CMakeLists.txt still uses check_function_exists, which has been deprecated (although it should still work in this case).
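For reference, one way to confirm that an installed cuSOLVER actually exports the symbol is to inspect the shared library's dynamic symbol table directly. The library path below is illustrative only; adjust it to wherever your NVHPC-internal CUDA toolkit lives.

```shell
# Locate the cuSOLVER library shipped with the NVHPC-internal CUDA toolkit
# (example path; adjust to your NVHPC installation and CUDA version).
CUSOLVER=/opt/nvidia/hpc_sdk/Linux_x86_64/23.9/math_libs/12.2/lib64/libcusolver.so

# List dynamic symbols and search for the routine LAXlib probes for.
# A matching line confirms the library itself is not the problem.
nm -D "$CUSOLVER" | grep cusolverDnZhegvdx
```

If the symbol is present here but CMake still reports it missing, the failure is in the compiler/linker invocation that the probe uses, not in the toolkit.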
The CMake command is:

    cmake -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR \
          -DCMAKE_PREFIX_PATH=$SCRATCH_DIR \
          -DCMAKE_BUILD_TYPE=RELWITHDEBINFO \
          -DCMAKE_C_COMPILER=$G_CC \
          -DCMAKE_CXX_COMPILER=$G_CXX \
          -DCMAKE_Fortran_COMPILER=$G_FC \
          -DCMAKE_Fortran_COMPILER_ID=NVHPC \
          -DCMAKE_Fortran_COMPILER_VERSION=23.9 \
          -DNVFORTRAN_CUDA_VERSION=12.2 \
          -DOpenACC_C_FLAGS="-acc=gpu" \
          -DMPI_C_COMPILER=$M_CC \
          -DMPI_CXX_COMPILER=$M_CXX \
          -DMPI_Fortran_COMPILER=$M_FC \
          -DMPIEXEC_EXECUTABLE=$M_EXE \
          -DQE_ENABLE_PLUGINS="gipaw" \
          -DQE_ENABLE_LIBXC=ON \
          -DLIBXC_ROOT=$SCRATCH_DIR/libxc-6.1.0 \
          -DQE_ENABLE_HDF5=OFF \
          -DQE_ENABLE_FOX=ON \
          -DQE_ENABLE_CUDA=ON \
          -DQE_FFTW_VENDOR=FFTW3 \
          -DFFTW3_LIBRARIES="/x86_64-linux/lib/libcufftw.so:/x86_64-linux/lib/libcufft.so" \
          -DFFTW3_INCLUDE_DIRS=/include/cufftw.h \
          -DBLA_VENDOR=NVHPC \
          -H. -Bbuild

In addition:

    G_CC  = nvcc
    G_CXX = nvc++
    G_FC  = nvfortran
    M_CC  = mpicc
    M_CXX = mpicxx
    M_FC  = mpif90
    M_EXE = mpiexec

where the MPI installation is OpenMPI 5.0.3.

Does anyone have any insights into how to get around this issue?

Thanks in advance,
Hubertus

Hubertus van Dam (er/he, er/him, sein/his)
Universität Duisburg-Essen
Zentrum für Informations- und Mediendienste (ZIM)
Raum SH 209
HPC Consultant
hubertus.vandam at uni-due.de
www.linkedin.com/in/HuubVanDam
orcid.org/0000-0002-0876-3294

From pdelugas at sissa.it Tue Feb 4 17:50:44 2025
From: pdelugas at sissa.it (Pietro Davide Delugas)
Date: Tue, 4 Feb 2025 16:50:44 +0000
Subject: [QE-users] Installing QE for GPUs: LAXlib says cuSOLVER does not have cusolverDnZhegvdx
In-Reply-To: 
References: 
Message-ID: 

Hello,

the C compiler has to be nvc, not nvcc. This should solve the issue with cuSOLVER.

Pietro
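The reason this matters: nvcc is the CUDA device-code compiler driver, not a host C compiler, so CMake's compile-and-link probes (including the cusolverDnZhegvdx check) fail when it is set as CMAKE_C_COMPILER. A minimal sketch of the corrected variables from the original invocation, with all other options left as they were:

```shell
# Use the NVHPC host compilers throughout; only G_CC changes relative to
# the original setup (it was nvcc, the CUDA device-code driver).
G_CC=nvc          # NVHPC C host compiler (was: nvcc)
G_CXX=nvc++       # NVHPC C++ host compiler (unchanged)
G_FC=nvfortran    # NVHPC Fortran host compiler (unchanged)

cmake -DCMAKE_C_COMPILER=$G_CC \
      -DCMAKE_CXX_COMPILER=$G_CXX \
      -DCMAKE_Fortran_COMPILER=$G_FC \
      -DQE_ENABLE_CUDA=ON \
      ... \
      -H. -Bbuild   # remaining options as in the original command
```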
________________________________
From: users on behalf of van Dam, Dr. Hubertus
Sent: Tuesday, February 4, 2025 16:40
To: users at lists.quantum-espresso.org
Subject: [QE-users] Installing QE for GPUs: LAXlib says cuSOLVER does not have cusolverDnZhegvdx