<div dir="ltr"><div>I am not convinced that the problem you mention is the same as yours. In order to figure out if the problem arises from Scalapack, you should remove __SCALAPACK from DFLAGS and recompile: the code will use (much slower) internal routines for parallel dense-matrix diagonalization. You may also try to run with no dense-matrix diagonalization (-nd 1, not sure it is honored though). <br><br>You should also report how your are running your code and, if using exotic parallelizations like "band groups" (-nb N), check if the problem you have is related to its usage<br><br></div>Paolo<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 1, 2016 at 11:37 PM, Ryan Herchig <span dir="ltr"><<a href="mailto:rch@mail.usf.edu" target="_blank">rch@mail.usf.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hello all,<br><br></div> I am running pw.x in Quantum Espresso version 5.4.0, however if I try and run the job using more than 2 nodes with 8 cores each I receive the following error :<br><br>Fatal error in PMPI_Group_incl: Invalid rank, error stack:<br>PMPI_Group_incl(185)..........<wbr>...: MPI_Group_incl(group=<wbr>0x88000004, n=4, ranks=0x2852700, new_group=0x7fff57564668) failed<br>MPIR_Group_check_valid_ranks(<wbr>253): Invalid rank in rank array at index 3; value is 33 but must be in the range 0 to 31<br><br></div><div>I am building/running on a local cluster maintained by the University I attend. The specifications for the nodes are 2 x Intel Xeon E5-2670 (Eight Core)
Paolo

On Thu, Dec 1, 2016 at 11:37 PM, Ryan Herchig <rch@mail.usf.edu> wrote:

Hello all,

I am running pw.x from Quantum ESPRESSO version 5.4.0; however, if I try to run a job using more than 2 nodes with 8 cores each, I receive the following error:

Fatal error in PMPI_Group_incl: Invalid rank, error stack:
PMPI_Group_incl(185).............: MPI_Group_incl(group=0x88000004, n=4, ranks=0x2852700, new_group=0x7fff57564668) failed
MPIR_Group_check_valid_ranks(253): Invalid rank in rank array at index 3; value is 33 but must be in the range 0 to 31

I am building and running on a local cluster maintained by the university I attend. Each node has 2 x Intel Xeon E5-2670 (eight cores each), 32 GB of RAM, and QDR InfiniBand.
I found a previous thread,

https://www.mail-archive.com/pw_forum@pwscf.org/msg27702.html

involving espresso-5.3.0, in which another user seemed to be experiencing the same issue; there it was determined that "The problem is related to the obscure hacks needed to convince Scalapack to work in a subgroup of processors." The suggestion in that post was to change a line in Modules/mp_global.f90 and recompile. However, I am running spin-collinear vdW-DF calculations, which I believe require at least version 5.4.0, and the lines in that subroutine of mp_global.f90 have changed; furthermore, following the suggestion of the previous post does not fix the issue. It instead produces the following compilation error:

mp_global.f90(97): error #6631: A non-optional actual argument must be present when invoking a procedure with an explicit interface.   [NPARENT_COMM]
   CALL mp_start_diag ( ndiag_, intra_BGRP_comm )
---------^
mp_global.f90(97): error #6631: A non-optional actual argument must be present when invoking a procedure with an explicit interface.   [MY_PARENT_ID]
   CALL mp_start_diag ( ndiag_, intra_BGRP_comm )
---------^
compilation aborted for mp_global.f90 (code 1)

Does this problem with the ScaLAPACK libraries persist in the newer versions, or could these errors have a separate origin, possibly something I am doing wrong during the build? I have included the make.sys that I am using for "make pw" below. If the error is due to the ScaLAPACK libraries, is there a workaround which could allow the use of additional processors when running calculations? Thank you in advance.

Thank you, Ryan Herchig

University of South Florida, Department of Physics

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

.f90.o:
	$(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
	$(F77) $(FFLAGS) -c $<

.c.o:
	$(CC) $(CFLAGS) -c $<

TOPDIR = /work/r/rch/espresso-5.4.0

MANUAL_DFLAGS =
DFLAGS = -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK
FDFLAGS = $(DFLAGS) $(MANUAL_DFLAGS)

IFLAGS = -I../include -I/apps/intel/2015/composer_xe_2015.3.187/mkl/include:/apps/intel/2015/composer_xe_2015.3.187/tbb/include

MOD_FLAG = -I

MPIF90 = mpif90
#F90 = ifort
CC = icc
F77 = ifort

CPP = cpp
CPPFLAGS = -P -C -traditional $(DFLAGS) $(IFLAGS)

CFLAGS = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS = -O2 -assume byterecl -g -traceback

FFLAGS_NOOPT = -O0 -assume byterecl -g -traceback

FFLAGS_NOMAIN = -nofor_main

LD = mpif90
LDFLAGS =
LD_LIBS =

BLAS_LIBS = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
BLAS_LIBS_SWITCH = external

LAPACK_LIBS = -L/apps/intel/2015/composer_xe_2015.3.187/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
LAPACK_LIBS_SWITCH = external

ELPA_LIBS_SWITCH = disabled
SCALAPACK_LIBS = -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_ilp64

FFT_LIBS = -L/apps/intel/2015/composer_xe_2015.3.187/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core

MPI_LIBS =

MASS_LIBS =

AR = ar
ARFLAGS = ruv

RANLIB = ranlib

FLIB_TARGETS = all

LIBOBJS = ../clib/clib.a ../iotk/src/libiotk.a
LIBS = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(LD_LIBS)

WGET = wget -O

PREFIX = /work/r/rch/espresso-5.4.0/EXE
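One detail I am unsure about, in case it is relevant: the SCALAPACK_LIBS line above links the LP64 ScaLAPACK (-lmkl_scalapack_lp64) against the ILP64 BLACS layer (-lmkl_blacs_intelmpi_ilp64). If MKL expects a consistent 32-bit-integer (LP64) pairing here, the line would instead read

  SCALAPACK_LIBS = -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

but I do not know whether this mixture is actually related to the invalid-rank error.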
<a href="http://pwscf.org/mailman/listinfo/pw_forum" rel="noreferrer" target="_blank">http://pwscf.org/mailman/<wbr>listinfo/pw_forum</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,<br>Univ. Udine, via delle Scienze 208, 33100 Udine, Italy<br>Phone +39-0432-558216, fax +39-0432-558222<br><br></div></div></div></div></div>