<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Dear all,<br>
<br>
I have QE installed on an HPC cluster, and it is very fast. I only
need to load it with the following commands:</p>
<pre>module purge
source /arf/sw/comp/oneapi/2023.0/setvars.sh
module load lib/hdf5/1.14.3-oneapi-2023.0
module load apps/espresso/7.2-oneapi-2023.0
mpirun -np 110 pw.x &lt; input.in &gt; output_arf.out
</pre>
<p>This QE build uses the HDF5 file format and is very fast in
calculations. However, when I compiled QE from source, the
executables I produced are an order of magnitude slower (and they
write no HDF5 files, only .dat files). I am trying to compile QE
together with the WEST code (details: <a class="moz-txt-link-freetext"
href="https://west-code.org/doc/West/latest/installation.html">https://west-code.org/doc/West/latest/installation.html</a>).
This is what I did:</p>
<pre>git<span class="w" style="box-sizing: border-box;"> </span>clone<span
class="w" style="box-sizing: border-box;"> </span>-b<span class="w"
style="box-sizing: border-box;"> </span><span class="s1"
style="box-sizing: border-box;">'qe-7.4'</span><span class="w"
style="box-sizing: border-box;"> </span>--single-branch<span
class="w" style="box-sizing: border-box;"> </span>--depth<span
class="w" style="box-sizing: border-box;"> </span><span class="m"
style="box-sizing: border-box;">1</span><span class="w"
style="box-sizing: border-box;"> </span><a
class="moz-txt-link-freetext" href="https://gitlab.com/QEF/q-e.git">https://gitlab.com/QEF/q-e.git</a><span
class="w" style="box-sizing: border-box;"> </span>QEdir
<span class="nb" style="box-sizing: border-box;">cd</span><span
class="w" style="box-sizing: border-box;"> </span>QEdir
git<span class="w" style="box-sizing: border-box;"> </span>clone<span
class="w" style="box-sizing: border-box;"> </span>-b<span class="w"
style="box-sizing: border-box;"> </span><span class="s1"
style="box-sizing: border-box;">'v6.1.0'</span><span class="w"
style="box-sizing: border-box;"> </span>--single-branch<span
class="w" style="box-sizing: border-box;"> </span>--depth<span
class="w" style="box-sizing: border-box;"> </span><span class="m"
style="box-sizing: border-box;">1</span><span class="w"
style="box-sizing: border-box;"> </span><a
class="moz-txt-link-freetext"
href="https://github.com/west-code-development/West.git">https://github.com/west-code-development/West.git</a><span
class="w" style="box-sizing: border-box;"> </span>West
source /arf/sw/comp/oneapi/2023.0/setvars.sh
module load lib/hdf5/1.14.3-oneapi-2023.0
./configure
make pw
cd West
make conf PYT=python3 PYT_LDFLAGS="`python3-config --ldflags --embed`"
make all
mpirun -np 110 /arf/home/amuhaymin/west/QEdir/bin/pw.x &lt; input.in &gt; output_mine.out
</pre>
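<p>I suspect the problem is that my ./configure call above did not
request ScaLAPACK or HDF5, which would explain why my make.inc ends up
without -D__SCALAPACK and -D__HDF5. This is a sketch of the
reconfiguration I was considering (assuming HDF5 is installed under the
prefix shown in the system make.inc; the exact paths and configure
options may need adjusting for this cluster):</p>
<pre># assumption: the HDF5 module root is /arf/sw/lib/hdf5/1.14.3-oneapi-2023.0
source /arf/sw/comp/oneapi/2023.0/setvars.sh
module load lib/hdf5/1.14.3-oneapi-2023.0
cd QEdir
make veryclean          # drop objects built with the old make.inc
./configure MPIF90=mpiifort CC=mpiicc \
    --with-scalapack=intel \
    --with-hdf5=/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0
# check the generated make.inc before rebuilding:
grep -E '^(DFLAGS|SCALAPACK_LIBS|HDF5_LIBS)' make.inc
make pw
</pre>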
<p>My question is: how can I build QE with the fastest available
libraries and with HDF5 support? I have appended the make.inc of my
build and of the default QE at the end. These are the modules I have
available:</p>
<pre>[abdul@arf-ui1 ~]$ module avail
---------------------------------------------------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------------------------------
dot module-git module-info modules null use.own
--------------------------------------------------------------------------------- /arf/sw/modulefiles ----------------------------------------------------------------------------------
apps/abinit/10.2.5-oneapi-2023 apps/R/4.3.0-gcc-11.3.1 lib/greasy/greasy miniconda-intelpython3
apps/ase/3.23.0 apps/R/4.3.2-oneapi-2024.0 lib/gsl/2.7.0 miniconda3
apps/autodock/4.2.6 apps/R/r-env-oneapi-2023 lib/gsl/2.7.1-fftw-3.3.10-oneapi-2024 nvhpc
apps/beast/10.5.0-beta5 apps/regcm/5.0.0-oneapi-2024 lib/hdf5/1.14.3-cxx-oneapi-2024 oneapi
apps/cern_root/v6.30.06 apps/rstudio/2023.12.1-402 lib/hdf5/1.14.3-oneapi-2023.0 R
apps/cfd/cfdtools apps/siesta/4.1.5-oneapi-2023.0 lib/hdf5/1.14.3-oneapi-2024
apps/charmpp/6.9.0 apps/siesta/5.0.1 lib/hdf5/1.14.3-openmpi-5.0.0
apps/cp2k/v2024.3 apps/siesta/5.2.2-oneapi-2024 lib/hdf5/1.14.3-serial-oneapi-2023.0
apps/espresso/6.7-oneapi-2022 apps/su2/8.1.0-openmpi-5.0.4 lib/hdf5/1.14.3-serial-oneapi-2024
apps/espresso/7.2-oneapi-2023.0 apps/truba-ai/cpu-2024.0 lib/hdf5/1.14.5-openmpi-5.0.4
apps/gaussian/g16-avx apps/truba-ai/gpu-2024.0 lib/java/jdk-17
apps/gaussian/g16-avx2 apps/wannier90/3.1.0-oneapi-2023.0 lib/java/jdk-22.0.1
apps/gaussian/g16-legacy apps/wannier90/3.1.0-oneapi-2024.0 lib/libxc/6.2.2-oneapi-2023.0
apps/gaussian/g16-sse4.2 apps/wanniertools/2.7.1-oneapi-2022 lib/libxc/7.0.0
apps/gaussian/gview comp/cmake/3.31.1 lib/netcdf/c-4.8.1-fortran-4.5.4-cxx-4.3.1-openmpi-5.0.0-oneapi-2023
apps/gnuplot/5.4.10 comp/gcc/9.2.0 lib/netcdf/c-4.9.2-fortran-4.6.1-cxx-4.3.1-oneapi-2023.0
apps/gromacs/2023.3 comp/gcc/12.3.0 lib/netcdf/c-4.9.2-fortran-4.6.1-cxx-4.3.1-oneapi-2024
apps/gromacs/2024.1-oneapi2024 comp/gcc/14.1.0 lib/netcdf/c-4.9.2-fortran-4.6.1-cxx-4.3.1-openmpi-5.0.0
apps/julia/1.10.0 comp/nvhpc/nvhpc-23.11 lib/netcdf/c-4.9.2-fortran-4.6.1-cxx-4.3.1-openmpi-5.0.4
apps/lammps/2Aug2023_update2-oneapi-2024 comp/oneapi/2022 lib/openblas/0.3.25
apps/lammps/23Jun2022_update4-oneapi-2022 comp/oneapi/2023 lib/openblas/0.3.29
apps/lammps/29Aug2024_stable_oneapi-2024 comp/oneapi/2024 lib/openmpi/4.0.5
apps/lammps/29Aug2024_update1_oneapi-2024-hamsi comp/python/3.12.0 lib/openmpi/4.1.1
apps/matlab/2024a comp/python/intelpython3 lib/openmpi/4.1.6
apps/namd/2.14-ibverbs comp/python/miniconda3 lib/openmpi/5.0.0
apps/namd/2.14-multicore-CUDA gaussian lib/openmpi/5.0.4
apps/namd/3.0-multicore-avx2 gromacs lib/openmpi/5.0.4-cuda-12.4
apps/namd/3.0-multicore-CUDA lib/aria2/1.37 lib/pnetcdf/1.12.3-oneapi-2023.0
apps/namd/3.0.1-multicore-CUDA lib/cuda/11.8 lib/pnetcdf/1.13.0-oneapi-2024
apps/openfoam/11 lib/cuda/12.4 lib/pnetcdf/1.13.0-openmpi-5.0.0
apps/openfoam/v2312 lib/curl/curl-7.56.1 lib/pnetcdf/1.14.0-openmpi-5.0.4
apps/openfoam/v2312-2025 lib/fftw/3.3.10-impi-oneapi-2024 matlab
Key:
module-alias modulepath
</pre>
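<p>To see which libraries the fast system build actually links against,
I can inspect it with ldd (assuming the espresso module puts pw.x on
PATH):</p>
<pre>module load apps/espresso/7.2-oneapi-2023.0
ldd "$(which pw.x)" | grep -Ei 'mkl|hdf5|scalapack'
</pre>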
<p>Lastly, here is make.inc of the default QE:</p>
<pre># make.inc. Generated from make.inc.in by configure.
# compilation rules
.SUFFIXES :
.SUFFIXES : .o .c .f90 .h .fh
# most fortran compilers can directly preprocess c-like directives: use
# $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
# $(CPP) $(CPPFLAGS) $< -o $*.F90
# $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!
.f90.o:
$(MPIF90) $(F90FLAGS) -c $<
.c.o:
$(CC) $(CFLAGS) -c $<
.h.fh:
$(CPP) $(CPPFLAGS) $< -o $*.fh
# Top QE directory, useful for locating libraries, linking QE with plugins
# The following syntax should always point to TOPDIR:
TOPDIR = $(dir $(abspath $(filter %make.inc,$(MAKEFILE_LIST))))
# if it doesn't work, uncomment the following line (edit if needed):
# TOPDIR = /arf/sw/src/applications/espresso/qe-7.2
# DFLAGS = precompilation options (possible arguments to -D and -U)
# used by the C compiler and preprocessor
# To use libxc (v>=4.3.0), add -D__LIBXC to DFLAGS
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas
# MANUAL_DFLAGS = additional precompilation option(s), if desired
# BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS =
DFLAGS = -D__DFTI -D__MPI -D__SCALAPACK -D__HDF5
FDFLAGS = $(DFLAGS) $(MANUAL_DFLAGS)
# IFLAGS = how to locate directories with *.h or *.f90 file to be included
# typically -I$(TOPDIR)/include -I/some/other/directory/
# the latter contains .e.g. files needed by FFT libraries
# for libxc add -I/path/to/libxc/include/
IFLAGS = -I. -I$(TOPDIR)/include -I/arf/sw/comp/oneapi/2023.0/mkl/2023.0.0/include -I/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/include
# MOD_FLAG = flag used by f90 compiler to locate modules
MOD_FLAG = -I
# BASEMOD_FLAGS points to directories containing basic modules,
# while BASEMODS points to the corresponding module libraries.
# More package-specific directories can be added in each Makefile
# and stored into MODFLAGS, and the same for module libraries in QEMODS
BASEMOD_FLAGS= $(MOD_FLAG)$(TOPDIR)/upflib \
$(MOD_FLAG)$(TOPDIR)/XClib \
$(MOD_FLAG)$(TOPDIR)/Modules \
$(MOD_FLAG)$(TOPDIR)/FFTXlib/src \
$(MOD_FLAG)$(TOPDIR)/LAXlib \
$(MOD_FLAG)$(TOPDIR)/UtilXlib \
$(MOD_FLAG)$(TOPDIR)/MBD \
$(MOD_FLAG)$(TOPDIR)/KS_Solvers $(FOX_MOD)
# If A depends upon B, A should come before B in the list below
# (most compilers don't care but some don't resolve cross links)
BASEMODS= $(TOPDIR)/Modules/libqemod.a \
$(TOPDIR)/upflib/libupf.a \
$(TOPDIR)/XClib/xc_lib.a \
$(TOPDIR)/FFTXlib/src/libqefft.a \
$(TOPDIR)/LAXlib/libqela.a \
$(TOPDIR)/UtilXlib/libutil.a \
$(TOPDIR)/MBD/libmbd.a
# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
MPIF90 = mpiifort
F90 = ifort
CC = mpiicc
# GPU architecture (Kepler: 35, Pascal: 60, Volta: 70 )
GPU_ARCH=
# CUDA runtime (Pascal: 8.0, Volta: 9.0)
CUDA_RUNTIME=
# CUDA F90 Flags
CUDA_F90FLAGS= $(MOD_FLAG)$(TOPDIR)/external/devxlib/src
# CUDA C Flags
CUDA_CFLAGS=
# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS
CPP = cpp
CPPFLAGS = -I/arf/sw/lib/hdf5/zstd-1.5.5-oneapi-2023.0/usr/local/include -I/arf/sw/lib/hdf5/szip-2.1.1-oneapi-2023.0/include -I/arf/sw/lib/hdf5/zlib-1.3-oneapi-2023.0/include -I/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/include $(DFLAGS) $(IFLAGS)
# compiler flags: C, F90
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax
CFLAGS = -xHost $(DFLAGS) $(IFLAGS) $(CUDA_CFLAGS)
F90FLAGS = $(FFLAGS) -nomodule -fpp -allow nofpp_comments $(FDFLAGS) $(CUDA_F90FLAGS) $(IFLAGS) $(MODFLAGS)
# compiler flags with and without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack
FFLAGS = -O2 -assume byterecl -g -traceback
FFLAGS_NOOPT = -O0 -assume byterecl -g -traceback
# compiler flag needed by some compilers when the main program is not fortran
# Currently used for Yambo
FFLAGS_NOMAIN = -nofor_main
# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty
# for libxc, set LD_LIBS=-L/path/to/libxc/lib/ -lxcf03 -lxc
# If libxc release is 5.0.0 replace -lxcf03 with -lxcf90
LD = mpiifort
LDFLAGS = -L/arf/sw/lib/hdf5/zstd-1.5.5-oneapi-2023.0/usr/local/lib -L/arf/sw/lib/hdf5/szip-2.1.1-oneapi-2023.0/lib -L/arf/sw/lib/hdf5/zlib-1.3-oneapi-2023.0/lib -L/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib
LD_LIBS =
# External Libraries (if any) : blas, lapack, fft, MPI
# If you have nothing better, use the local copy
# BLAS_LIBS = $(TOPDIR)/external/lapack/libblas.a
BLAS_LIBS = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
# If you have nothing better, use the local copy
# LAPACK = liblapack
# LAPACK_LIBS = $(TOPDIR)/external/lapack/liblapack.a
LAPACK =
LAPACK_LIBS =
SCALAPACK_LIBS = -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
# nothing needed here if the the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)
FFT_LIBS =
# HDF5
HDF5_LIBS = -L/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib /arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib/libhdf5hl_fortran.a /arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib/libhdf5_hl.a /arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib/libhdf5_fortran.a /arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib/libhdf5.a -L/arf/sw/lib/hdf5/zlib-1.3-oneapi-2023.0/lib -L/arf/sw/lib/hdf5/szip-2.1.1-oneapi-2023.0/lib -lsz -lz -ldl -lm -Wl,-rpath -Wl,/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib
# FOX
FOX =
FOX_MOD =
FOX_LIB =
FOX_FLAGS =
# ENVIRON
ENVIRON_LIBS =
# MPI libraries (should not be needed)
MPI_LIBS =
# IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS
MASS_LIBS =
# CUDA libraries
CUDA_LIBS= -L$(TOPDIR)/external/devxlib/src -ldevXlib
CUDA_EXTLIBS = devxlib
# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv
AR = ar
ARFLAGS = ruv
# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo
RANLIB = ranlib
# all internal and external libraries - do not modify
FLIB_TARGETS = all
LIBXC_LIBS =
QELIBS = $(LIBXC_LIBS) \
$(CUDA_LIBS) $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FOX_LIB) \
$(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(HDF5_LIBS) \
$(ENVIRON_LIBS) $(LD_LIBS)
# wget or curl - useful to download from network
WGET = wget -O
</pre>
<p>And here is the make.inc of my own compilation:</p>
<pre>[amuhaymin@arf-ui1 QEdir]$ cat make.inc
# make.inc. Generated from make.inc.in by configure.
# compilation rules
.SUFFIXES :
.SUFFIXES : .o .c .f90 .h .fh
# most fortran compilers can directly preprocess c-like directives: use
# $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
# $(CPP) $(CPPFLAGS) $< -o $*.F90
# $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!
.f90.o:
$(MPIF90) $(F90FLAGS) -c $< -o $(@)
.c.o:
$(CC) $(CFLAGS) -c $<
.h.fh:
$(CPP) $(CPPFLAGS) $< -o $*.fh
# Top QE directory, useful for locating libraries, linking QE with plugins
# The following syntax should always point to TOPDIR:
TOPDIR = $(dir $(abspath $(filter %make.inc,$(MAKEFILE_LIST))))
# if it doesn't work, uncomment the following line (edit if needed):
# TOPDIR = /arf/home/amuhaymin/west/QEdir
# DFLAGS = precompilation options (possible arguments to -D and -U)
# used by the C compiler and preprocessor
# To use libxc (v>=4.3.0), add -D__LIBXC to DFLAGS
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas
# MANUAL_DFLAGS = additional precompilation option(s), if desired
# BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS =
DFLAGS = -D__DFTI -D__MPI -D__MPI_MODULE
FDFLAGS = $(DFLAGS) $(MANUAL_DFLAGS)
# IFLAGS = how to locate directories with *.h or *.f90 file to be included
# typically -I$(TOPDIR)/include -I/some/other/directory/
# the latter contains .e.g. files needed by FFT libraries
# for libxc add -I/path/to/libxc/include/
IFLAGS = -I. -I$(TOPDIR)/include -I/arf/sw/comp/oneapi/2023.0/mkl/2023.0.0/include
# MOD_FLAG = flag used by f90 compiler to locate modules
MOD_FLAG = -I
# BASEMOD_FLAGS points to directories containing basic modules,
# while BASEMODS points to the corresponding module libraries.
# More package-specific directories can be added in each Makefile
# and stored into MODFLAGS, and the same for module libraries in QEMODS
BASEMOD_FLAGS= $(MOD_FLAG)$(TOPDIR)/upflib \
$(MOD_FLAG)$(TOPDIR)/XClib \
$(MOD_FLAG)$(TOPDIR)/Modules \
$(MOD_FLAG)$(TOPDIR)/FFTXlib/src \
$(MOD_FLAG)$(TOPDIR)/LAXlib \
$(MOD_FLAG)$(TOPDIR)/UtilXlib \
$(MOD_FLAG)$(TOPDIR)/MBD \
$(MOD_FLAG)$(TOPDIR)/KS_Solvers $(FOX_MOD)
# If A depends upon B, A should come before B in the list below
# (most compilers don't care but some don't resolve cross links)
BASEMODS= $(TOPDIR)/Modules/libqemod.a \
$(TOPDIR)/upflib/libupf.a \
$(TOPDIR)/XClib/xc_lib.a \
$(TOPDIR)/FFTXlib/src/libqefft.a \
$(TOPDIR)/LAXlib/libqela.a \
$(TOPDIR)/UtilXlib/libutil.a \
$(TOPDIR)/MBD/libmbd.a
# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
MPIF90 = mpiifort
F90 = ifort
CC = icc
# GPU architecture (Kepler: 35, Pascal: 60, Volta: 70, Turing: 75, Ampere: 80)
GPU_ARCH=
# CUDA runtime (should be compatible with the CUDA driver)
CUDA_RUNTIME=
# CUDA F90 Flags
CUDA_F90FLAGS= $(MOD_FLAG)$(TOPDIR)/external/devxlib/src
# CUDA C Flags
CUDA_CFLAGS=
# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS
CPP = cpp
CPPFLAGS = -I/arf/sw/lib/hdf5/zstd-1.5.5-oneapi-2023.0/usr/local/include -I/arf/sw/lib/hdf5/szip-2.1.1-oneapi-2023.0/include -I/arf/sw/lib/hdf5/zlib-1.3-oneapi-2023.0/include -I/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/include $(DFLAGS) $(IFLAGS)
# compiler flags: C, F90
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax
CFLAGS = -O3 $(DFLAGS) $(IFLAGS) $(CUDA_CFLAGS)
F90FLAGS = $(FFLAGS) -nomodule -fpp -allow nofpp_comments $(FDFLAGS) $(CUDA_F90FLAGS) $(IFLAGS) $(MODFLAGS)
# compiler flags with and without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack
FFLAGS = -O2 -assume byterecl -g -traceback
FFLAGS_NOOPT = -O0 -assume byterecl -g -traceback
# compiler flag needed by some compilers when the main program is not fortran
# Currently used for Yambo
FFLAGS_NOMAIN = -nofor_main
# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty
# for libxc, set LD_LIBS=-L/path/to/libxc/lib/ -lxcf03 -lxc
# If libxc release is 5.0.0 replace -lxcf03 with -lxcf90
LD = mpiifort
LDFLAGS = -L/arf/sw/lib/hdf5/zstd-1.5.5-oneapi-2023.0/usr/local/lib -L/arf/sw/lib/hdf5/szip-2.1.1-oneapi-2023.0/lib -L/arf/sw/lib/hdf5/zlib-1.3-oneapi-2023.0/lib -L/arf/sw/lib/hdf5/1.14.3-oneapi-2023.0/lib
LD_LIBS =
# External Libraries (if any) : blas, lapack, fft, MPI
# If you have nothing better, use the local copy
# BLAS_LIBS = $(TOPDIR)/external/lapack/libblas.a
BLAS_LIBS = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
# If you have nothing better, use the local copy
# LAPACK = liblapack
# LAPACK_LIBS = $(TOPDIR)/external/lapack/liblapack.a
LAPACK =
LAPACK_LIBS =
SCALAPACK_LIBS =
# nothing needed here if the the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)
FFT_LIBS =
# HDF5
HDF5_LIBS =
# FOX
FOX =
FOX_MOD =
FOX_LIB =
FOX_FLAGS =
# ENVIRON
ENVIRON_LIBS =
# MPI libraries (should not be needed)
MPI_LIBS =
# IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS
MASS_LIBS =
# CUDA libraries
CUDA_LIBS= -L$(TOPDIR)/external/devxlib/src -ldevXlib
CUDA_EXTLIBS = devxlib
# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv
AR = ar
ARFLAGS = ruv
# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo
RANLIB = ranlib
# all internal and external libraries - do not modify
FLIB_TARGETS = all
LIBXC_LIBS =
QELIBS = $(LIBXC_LIBS) \
$(CUDA_LIBS) $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FOX_LIB) \
$(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(HDF5_LIBS) \
$(ENVIRON_LIBS) $(LD_LIBS)
# wget or curl - useful to download from network
WGET = wget -O
# Install directory - "make install" copies *.x executables there
PREFIX = /usr/local
</pre>
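<p>The differences between the two files that stand out to me
(-D__SCALAPACK and -D__HDF5 in DFLAGS, empty SCALAPACK_LIBS and
HDF5_LIBS, and -xHost vs -O3 in CFLAGS) can be listed quickly like this
(the system path is taken from the TOPDIR comment above and may not be
readable on this cluster):</p>
<pre># side-by-side look at the relevant variables of both make.inc files
for v in DFLAGS CFLAGS SCALAPACK_LIBS HDF5_LIBS; do
  echo "== $v =="
  grep "^$v " /arf/sw/src/applications/espresso/qe-7.2/make.inc
  grep "^$v " ~/west/QEdir/make.inc
done
</pre>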
Sincerely, <br>
Abdul Muhaymin<br>
Graduate student, Institute of Materials Science and Nanotechnology,<br>
Bilkent University, Ankara
</body>
</html>