[Pw_forum] Issue while executing QE-5.0 GPU

Filippo Spiga spiga.filippo at gmail.com
Tue Oct 28 09:24:39 CET 2014


Dear Nisha,

I am 99% sure configure _never_ places anything like "-ilp64" in LDFLAGS or "-DMKL_ILP64" in CFLAGS. Why are those there? Please run configure without _any_ modifications to the environment variables, and then run make without changing them either. Be sure the Intel compilers and MKL are available in PATH and LD_LIBRARY_PATH, otherwise it obviously will not work.
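
For example, a clean rebuild could look more or less like this (only a sketch: the paths come from the make.sys you posted, and the exact configure options and make target should be checked against the QE-GPU README for your version):

  # start from a clean shell: no CFLAGS/FFLAGS/LDFLAGS exported
  unset CFLAGS FFLAGS LDFLAGS

  # make the Intel compilers, MKL and CUDA visible (paths are assumptions
  # based on your make.sys -- adjust to your installation)
  source /opt/intel/composer_xe_2013.1.117/bin/compilervars.sh intel64
  export PATH=/opt/CUDA-5.5/bin:$PATH
  export LD_LIBRARY_PATH=/opt/CUDA-5.5/lib64:$LD_LIBRARY_PATH

  # let configure detect compilers and libraries by itself
  cd /opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU
  ./configure --enable-parallel --enable-openmp --with-gpu-arch=sm_20 \
              --with-cuda-dir=/opt/CUDA-5.5
  cd ..
  make -f Makefile.gpu pw-gpu

With a clean environment configure should pick the standard LP64 MKL libraries (-lmkl_intel_lp64, -lmkl_scalapack_lp64) by itself; QE does not use ILP64 integers by default, so an integer-size mismatch between the code and MKL could well explain a crash inside libmkl_avx.so.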

If configure or make fails, then we can look at what is wrong.

F

> On Oct 28, 2014, at 5:57 AM, Nisha Agrawal <itlinkstonisha at gmail.com> wrote:
> 
> Hi Filippo,
> 
> Please help with the issue mentioned below.
> 
> "--------- Forwarded 
> From: Nisha Agrawal <itlinkstonisha at gmail.com>
> Date: Tue, Oct 21, 2014 at 8:37 PM
> Subject: Re: [Pw_forum] Issue while executing QE-5.0 GPU
> To: PWSCF Forum <pw_forum at pwscf.org>
> 
> 
> Hi Filippo,
> 
> As per your suggestion I ran the non-accelerated version of QE on the same input data, and it worked well for me.
> Following are the contents of the make.sys file used for the QE-GPU compilation. Please let me know what other details
> are required to help with this issue.
> 
> #################################################################################################
> 
> MANUAL_DFLAGS  = -D__ISO_C_BINDING -D__DISABLE_CUDA_NEWD  -D__DISABLE_CUDA_ADDUSDENS
> DFLAGS         =  -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK -D__CUDA -D__PHIGEMM -D__OPENMP $(MANUAL_DFLAGS)
> FDFLAGS        = $(DFLAGS)
> 
> # IFLAGS = how to locate directories where files to be included are
> # In most cases, IFLAGS = -I../include
> 
> IFLAGS         = -I../include  -I/opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/include -I/opt/CUDA-5.5/include
> 
> # MOD_FLAGS = flag used by f90 compiler to locate modules
> # Each Makefile defines the list of needed modules in MODFLAGS
> 
> MOD_FLAG      = -I
> 
> # Compilers: fortran-90, fortran-77, C
> # If a parallel compilation is desired, MPIF90 should be a fortran-90
> # compiler that produces executables for parallel execution using MPI
> # (such as for instance mpif90, mpf90, mpxlf90,...);
> # otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
> # If you have a parallel machine but no suitable candidate for MPIF90,
> # try to specify the directory containing "mpif.h" in IFLAGS
> # and to specify the location of MPI libraries in MPI_LIBS
> 
> MPIF90         = mpiifort
> #F90           = ifort
> CC             = mpiicc
> F77            = mpiifort
> 
> # C preprocessor and preprocessing flags - for explicit preprocessing,
> # if needed (see the compilation rules above)
> # preprocessing flags must include DFLAGS and IFLAGS
> 
> CPP            = cpp
> CPPFLAGS       = -P -traditional $(DFLAGS) $(IFLAGS)
> 
> # compiler flags: C, F90, F77
> # C flags must include DFLAGS and IFLAGS
> # F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax
> 
> CFLAGS         = -DMKL_ILP64 -O3 $(DFLAGS) $(IFLAGS)
> F90FLAGS       = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
> FFLAGS         = -i8 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0
> 
> # compiler flags without optimization for fortran-77
> # the latter is NEEDED to properly compile dlamch.f, used by lapack
> 
> FFLAGS_NOOPT   = -i8 -O0 -assume byterecl -g -traceback
> 
> # compiler flag needed by some compilers when the main is not fortran
> # Currently used for Yambo
> 
> FFLAGS_NOMAIN   = -nofor_main
> 
> # Linker, linker-specific flags (if any)
> # Typically LD coincides with F90 or MPIF90, LD_LIBS is empty
> 
> LD             = mpiifort
> LDFLAGS        =  -ilp64
> LD_LIBS        = -L/opt/CUDA-5.5/lib64 -lcublas  -lcufft -lcudart
> 
> # External Libraries (if any) : blas, lapack, fft, MPI
> 
> # If you have nothing better, use the local copy :
> # BLAS_LIBS = /your/path/to/espresso/BLAS/blas.a
> # BLAS_LIBS_SWITCH = internal
> 
> BLAS_LIBS      = /opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/lib/libphigemm.a  -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
> BLAS_LIBS_SWITCH = external
> 
> # If you have nothing better, use the local copy :
> # LAPACK_LIBS = /your/path/to/espresso/lapack-3.2/lapack.a
> # LAPACK_LIBS_SWITCH = internal
> # For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
> # remember that LAPACK_LIBS precedes BLAS_LIBS in loading order
> 
> # CBLAS is used in case the C interface for BLAS is missing (i.e. ACML)
> CBLAS_ENABLED = 0
> 
> LAPACK_LIBS    =
> LAPACK_LIBS_SWITCH = external
> 
> ELPA_LIBS_SWITCH = disabled
> SCALAPACK_LIBS = -lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64
> 
> # nothing needed here if the internal copy of FFTW is compiled
> # (needs -D__FFTW in DFLAGS)
> 
> FFT_LIBS       = -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
> 
> # For parallel execution, the correct path to MPI libraries must
> # be specified in MPI_LIBS (except for IBM if you use mpxlf)
> 
> MPI_LIBS       =
> 
> # IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS
> 
> MASS_LIBS      =
> 
> # ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv
> 
> AR             = ar
> ARFLAGS        = ruv
> 
> # ranlib command. If ranlib is not needed (it isn't in most cases) use
> # RANLIB = echo
> 
> RANLIB         = ranlib
> 
> # all internal and external libraries - do not modify
> 
> FLIB_TARGETS   = all
> 
> # CUDA section
> NVCC             = /opt/CUDA-5.5/bin/nvcc
> NVCCFLAGS        = -O3  -gencode arch=compute_20,code=sm_20 -gencode arch=compute_20,code=sm_21
> 
> PHIGEMM_INTERNAL = 1
> PHIGEMM_SYMBOLS  = 1
> MAGMA_INTERNAL   = 0
> 
> LIBOBJS        = ../flib/ptools.a ../flib/flib.a ../clib/clib.a ../iotk/src/libiotk.a
> LIBS           = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(LD_LIBS)
> 
> # wget or curl - useful to download from network
> WGET = wget -O
> 
> ##################################################################################################
> 
> "Apologizing does not mean that you are wrong and the other one is right...
> It simply means that you value the relationship much more than your ego.."
> 
> On Tue, Oct 21, 2014 at 2:32 AM, Filippo Spiga <spiga.filippo at gmail.com> wrote:
> Dear Nisha,
> 
> the error as reported in your email honestly does not give many details. Make sure you use --with-gpu-arch=sm_20 for your GPU.
> 
> If it runs properly for small systems on your machine but dies for big systems, then check whether the normal non-accelerated version of QE can run. If that runs and the problem appears only when the GPU is turned on, then we can try to investigate further.
> 
> HTH
> F
> 
> On Oct 17, 2014, at 5:27 AM, Nisha Agrawal <itlinkstonisha at gmail.com> wrote:
> 
> > Hi,
> >
> >
> > I installed Quantum ESPRESSO GPU v14.03.0 with Intel compilers 13.0 and Intel MKL 11.0. We have NVIDIA M2090 GPU cards. The issue I am facing is that it runs well for small input data, while
> > for big input data it terminates with the following error. Did I miss any compilation flag?
> > Does Quantum ESPRESSO GPU v14.03.0 work well with the Intel compiler? Please help.
> >
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image              PC                Routine            Line        Source
> > libmkl_avx.so      00002AB729DF919A  Unknown               Unknown  Unknown
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image              PC                Routine            Line        Source
> > libmkl_avx.so      00002B3B05DF919A  Unknown               Unknown  Unknown
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image              PC                Routine            Line        Source
> > libmkl_avx.so      00002B5549DF919A  Unknown               Unknown  Unknown
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://pwscf.org/mailman/listinfo/pw_forum
> 
> --
> Mr. Filippo SPIGA, M.Sc.
> http://filippospiga.info ~ skype: filippo.spiga
> 
> «Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
> 
> 
> 
> 
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
