<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi Chong Wang,<br>
<br>
Perhaps it would be better to run ./configure as<br>
./configure CC=icc CXX=icpc F90=ifort F77=ifort MPIF90=mpiifort
--with-scalapack=intel<br>
<br>
so that QE knows which compilers to use (this worked for me with QE v5.3.0).<br>
<br>
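For instance, a minimal sketch of the whole sequence (the compilervars/mpivars paths are only examples, taken from the mpirun path you posted; adjust them to your installation):<br>
<pre>
# load the Intel compiler and Intel MPI environments first (example paths)
source /opt/intel/compilers_and_libraries_2016.3.210/linux/bin/compilervars.sh intel64
source /opt/intel/compilers_and_libraries_2016.3.210/linux/mpi/intel64/bin/mpivars.sh

cd espresso-5.4.0
make veryclean                      # drop objects built with the old make.sys
./configure CC=icc CXX=icpc F90=ifort F77=ifort MPIF90=mpiifort --with-scalapack=intel

# sanity check: DFLAGS should no longer contain -D__GFORTRAN
grep -E '^(DFLAGS|MPIF90|F77|BLAS_LIBS|SCALAPACK_LIBS)' make.sys
make pw
</pre>
<br>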
Rolly<br>
<br>
<div class="moz-cite-prefix">On 05/16/2016 05:52 PM, Paolo Giannozzi
wrote:<br>
</div>
<blockquote
cite="mid:CAPMgbCvF88tA4XbPGvXoWxYOU02WAq_BCP+MT3DeODot52Di6Q@mail.gmail.com"
type="cite">
<div dir="ltr">On Mon, May 16, 2016 at 4:11 AM, Chong Wang <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:ch-wang@outlook.com" target="_blank">ch-wang@outlook.com</a>></span>
wrote:<br>
<br>
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">I
have checked my mpif90 calls gfortran so there's no
mix up. </div>
</div>
</blockquote>
<div><br>
</div>
<div>I am not sure it is possible to use gfortran together
with intel mpi. If you have intel mpi and mkl, presumably
you have the intel compiler as well.<br>
<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">Can
you kindly share with me your make.sys?</div>
</div>
</blockquote>
<div><br>
</div>
<div>It doesn't make sense to share a make.sys file unless
the software configuration is the same.<br>
<br>
</div>
<div>Paolo<br>
</div>
<div><br>
</div>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
Thanks in advance!
<p><br>
</p>
<p>Best!</p>
<p><br>
</p>
<p>Chong Wang</p>
<hr style="display:inline-block;width:98%">
<div dir="ltr"><font style="font-size:11pt"
face="Calibri, sans-serif" color="#000000"><b>From:</b>
<a moz-do-not-send="true"
href="mailto:pw_forum-bounces@pwscf.org"
target="_blank">pw_forum-bounces@pwscf.org</a>
<<a moz-do-not-send="true"
href="mailto:pw_forum-bounces@pwscf.org"
target="_blank">pw_forum-bounces@pwscf.org</a>>
on behalf of Paolo Giannozzi <<a
moz-do-not-send="true"
href="mailto:p.giannozzi@gmail.com"
target="_blank"><a class="moz-txt-link-abbreviated" href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a></a>><br>
<b>Sent:</b> Monday, May 16, 2016 3:10 AM<br>
<b>To:</b> PWSCF Forum<br>
<b>Subject:</b> Re: [Pw_forum] mpi error using
pw.x</font>
<div> </div>
</div>
<div>
<div dir="ltr">Your make.sys shows clear signs of
mixup between ifort and gfortran. Please verify
that mpif90 calls ifort and not gfortran (or vice
versa). Configure issues a warning if this
happens.
<br>
<br>
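A quick way to check what the wrapper actually invokes (just a sketch; the flag differs between MPI flavours):<br>
<pre>
mpif90 -show      # MPICH / Intel MPI: print the underlying compiler command line
mpif90 -showme    # Open MPI equivalent of the above
mpif90 --version  # the backend compiler prints its own name and version
which mpif90      # confirm which MPI installation the wrapper belongs to
</pre>
<br>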
I have successfully run your test on a machine
with a recent Intel compiler and Intel MPI. The
second output (run as mpirun -np 18 pw.x -nk
18 ....) is an example of what I mean by "type of
parallelization": there are many different
parallelization levels in QE. This one parallelizes
over k-points (and, in this case, runs faster on
fewer processors than plane-wave parallelization).<br>
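<br>
For example, a sketch of the two kinds of run (same input file name as in your message; the output file names are arbitrary):<br>
<pre>
# plane-wave (G-vector) parallelization only, 24 MPI tasks
mpirun -np 24 pw.x &lt; BTO.scf.in &gt; out_pw24

# k-point parallelization: 18 MPI tasks split into 18 pools (-nk, i.e. -npool)
mpirun -np 18 pw.x -nk 18 &lt; BTO.scf.in &gt; out_nk18
</pre>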
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Paolo<br>
<br>
</div>
<div class="gmail_extra">
<div class="gmail_quote">On Sun, May 15, 2016 at
6:01 PM, Chong Wang <span dir="ltr">
<<a moz-do-not-send="true" href="mailto:ch-wang@outlook.com" target="_blank">ch-wang@outlook.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi,</p>
<p><br>
</p>
<p>I have done more test:</p>
<p>1. intel mpi 2015 yields segment
fault</p>
<p>2. intel mpi 2013 yields the same
error here</p>
<p>Did I do something wrong with
compiling? Here's my make.sys:</p>
<p><br>
</p>
<pre>
# make.sys.  Generated from make.sys.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
#       $(MPIF90) $(F90FLAGS) -c $&lt;
# if explicit preprocessing by the C preprocessor is needed, use:
#       $(CPP) $(CPPFLAGS) $&lt; -o $*.F90
#       $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
        $(MPIF90) $(F90FLAGS) -c $&lt;

# .f.o and .c.o: do not modify

.f.o:
        $(F77) $(FFLAGS) -c $&lt;

.c.o:
        $(CC) $(CFLAGS) -c $&lt;


# Top QE directory, not used in QE but useful for linking QE libs with plugins
# The following syntax should always point to TOPDIR:
#   $(dir $(abspath $(filter %make.sys,$(MAKEFILE_LIST))))

TOPDIR = /home/wangc/temp/espresso-5.4.0

# DFLAGS  = precompilation options (possible arguments to -D and -U)
#           used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas

# MANUAL_DFLAGS = additional precompilation option(s), if desired
#                 BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS  =
DFLAGS         = -D__GFORTRAN -D__STD_F95 -D__DFTI -D__MPI -D__PARA -D__SCALAPACK
FDFLAGS        = $(DFLAGS) $(MANUAL_DFLAGS)

# IFLAGS = how to locate directories with *.h or *.f90 file to be included
#          typically -I../include -I/some/other/directory/
#          the latter contains .e.g. files needed by FFT libraries

IFLAGS         = -I../include -I/opt/intel/compilers_and_libraries_2016.3.210/linux/mkl/include

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG       = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90         = mpif90
#F90           = gfortran
CC             = cc
F77            = gfortran

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP            = cpp
CPPFLAGS       = -P -C -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS         = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS       = $(FFLAGS) -x f95-cpp-input $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS         = -O3 -g

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -O0 -g

# compiler flag needed by some compilers when the main program is not fortran
# Currently used for Yambo

FFLAGS_NOMAIN  =

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD             = mpif90
LDFLAGS        = -g -pthread
LD_LIBS        =

# External Libraries (if any) : blas, lapack, fft, MPI

# If you have nothing better, use the local copy :
# BLAS_LIBS = /your/path/to/espresso/BLAS/blas.a
# BLAS_LIBS_SWITCH = internal

BLAS_LIBS      = -lmkl_gf_lp64 -lmkl_sequential -lmkl_core
BLAS_LIBS_SWITCH = external

# If you have nothing better, use the local copy :
# LAPACK_LIBS = /your/path/to/espresso/lapack-3.2/lapack.a
# LAPACK_LIBS_SWITCH = internal
# For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
# remember that LAPACK_LIBS precedes BLAS_LIBS in loading order

LAPACK_LIBS    =
LAPACK_LIBS_SWITCH = external

ELPA_LIBS_SWITCH = disabled
SCALAPACK_LIBS = -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

# nothing needed here if the the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)

FFT_LIBS       =

# For parallel execution, the correct path to MPI libraries must
# be specified in MPI_LIBS (except for IBM if you use mpxlf)

MPI_LIBS       =

# IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS

MASS_LIBS      =

# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv

AR             = ar
ARFLAGS        = ruv

# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo

RANLIB         = ranlib

# all internal and external libraries - do not modify

FLIB_TARGETS   = all

LIBOBJS        = ../clib/clib.a ../iotk/src/libiotk.a
LIBS           = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(LD_LIBS)

# wget or curl - useful to download from network
WGET = wget -O

# Install directory - not currently used
PREFIX = /usr/local
</pre>
<br>
<p>Cheers!</p>
<p><br>
</p>
<p>Chong Wang</p>
</div>
<hr style="display:inline-block;width:98%">
<div dir="ltr"><font
style="font-size:11pt" face="Calibri,
sans-serif" color="#000000"><span><b>From:</b>
<a moz-do-not-send="true"
href="mailto:pw_forum-bounces@pwscf.org"
target="_blank">pw_forum-bounces@pwscf.org</a>
<<a moz-do-not-send="true"
href="mailto:pw_forum-bounces@pwscf.org"
target="_blank">pw_forum-bounces@pwscf.org</a>>
on behalf of Paolo Giannozzi <<a
moz-do-not-send="true"
href="mailto:p.giannozzi@gmail.com"
target="_blank"><a class="moz-txt-link-abbreviated" href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a></a>><br>
</span><b>Sent:</b> Sunday, May 15,
2016 8:28:26 PM
<div>
<div><br>
<b>To:</b> PWSCF Forum<br>
<b>Subject:</b> Re: [Pw_forum] mpi
error using pw.x</div>
</div>
</font>
<div> </div>
</div>
<div>
<div>
<div>
<div dir="ltr">
<div>It looks like a compiler/mpi
bug, since there is nothing
special in your input and in
your execution, unless you find
evidence that the problem is
reproducible on other
compiler/mpi versions.<br>
<br>
</div>
Paolo<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Sun,
May 15, 2016 at 10:11 AM, Chong
Wang <span dir="ltr">
<<a moz-do-not-send="true"
href="mailto:ch-wang@outlook.com" target="_blank">ch-wang@outlook.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div dir="ltr">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi,</p>
<p><br>
</p>
<p>Thank you for
replying.</p>
<p><br>
</p>
<p>More details:</p>
<p><br>
</p>
<p>1. input data:</p>
<pre>
&amp;control
   calculation='scf'
   restart_mode='from_scratch',
   pseudo_dir = '../pot/',
   outdir='./out/'
   prefix='BaTiO3'
/
&amp;system
   nbnd = 48
   ibrav = 0, nat = 5, ntyp = 3
   ecutwfc = 50
   occupations='smearing', smearing='gaussian', degauss=0.02
/
&amp;electrons
   conv_thr = 1.0e-8
/
ATOMIC_SPECIES
 Ba 137.327 Ba.pbe-mt_fhi.UPF
 Ti 204.380 Ti.pbe-mt_fhi.UPF
 O   15.999 O.pbe-mt_fhi.UPF
ATOMIC_POSITIONS
 Ba 0.0000000000000000  0.0000000000000000 0.0000000000000000
 Ti 0.5000000000000000  0.5000000000000000 0.4819999933242795
 O  0.5000000000000000  0.5000000000000000 0.0160000007599592
 O  0.5000000000000000 -0.0000000000000000 0.5149999856948849
 O  0.0000000000000000  0.5000000000000000 0.5149999856948849
K_POINTS (automatic)
11 11 11 0 0 0
CELL_PARAMETERS {angstrom}
3.999800000000001 0.000000000000000 0.000000000000000
0.000000000000000 3.999800000000001 0.000000000000000
0.000000000000000 0.000000000000000 4.018000000000000
</pre>
<div><br>
</div>
2. number of processors:</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">I
tested 24 cores and 8
cores, and both
yield the same result.</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><br>
</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">3.
<span
style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:16px">type
of parallelization:</span></div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">I
don't know your meaning.
I execute pw.x by:</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><span>mpirun
-np 24 pw.x < <a
moz-do-not-send="true" href="http://BTO.scf.in" target="_blank">BTO.scf.in</a>
>> output</span></div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><span><br>
</span></div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><span>'which
mpirun' output:</span></div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><span>
<div>/opt/intel/compilers_and_libraries_2016.3.210/linux/mpi/intel64/bin/mpirun</div>
</span></div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><br>
</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><span></span>4.
when the error occurs:</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">in
the middle of the run.
The last a few lines of
the output is</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<pre>
     total cpu time spent up to now is       32.9 secs

     total energy              =    -105.97885119 Ry
     Harris-Foulkes estimate   =    -105.99394457 Ry
     estimated scf accuracy    &lt;       0.03479229 Ry

     iteration #  7     ecut=    50.00 Ry     beta=0.70
     Davidson diagonalization with overlap
     ethr =  1.45E-04,  avg # of iterations =  2.7

     total cpu time spent up to now is       37.3 secs

     total energy              =    -105.99039982 Ry
     Harris-Foulkes estimate   =    -105.99025175 Ry
     estimated scf accuracy    &lt;       0.00927902 Ry

     iteration #  8     ecut=    50.00 Ry     beta=0.70
     Davidson diagonalization with overlap
</pre>
</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">5.
Error message:</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">Something
like:</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<pre>
Fatal error in PMPI_Cart_sub: Other MPI error, error stack:
PMPI_Cart_sub(242)...................: MPI_Cart_sub(comm=0xc400fcf3, remain_dims=0x7ffc03ae5f38, comm_new=0x7ffc03ae5e90) failed
PMPI_Cart_sub(178)...................:
MPIR_Comm_split_impl(270)............:
MPIR_Get_contextid_sparse_group(1330): Too many communicators (0/16384 free on this process; ignore_id=0)
Fatal error in PMPI_Cart_sub: Other MPI error, error stack:
PMPI_Cart_sub(242)...................: MPI_Cart_sub(comm=0xc400fcf3, remain_dims=0x7ffd10080408, comm_new=0x7ffd10080360) failed
PMPI_Cart_sub(178)...................:
</pre>
Cheers!</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif"><br>
</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">Chong</div>
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<div
style="color:rgb(0,0,0)">
<hr
style="display:inline-block;width:98%">
<div dir="ltr"><font
style="font-size:11pt"
face="Calibri,
sans-serif"
color="#000000"><b>From:</b>
<a moz-do-not-send="true" href="mailto:pw_forum-bounces@pwscf.org" target="_blank">pw_forum-bounces@pwscf.org</a>
<<a moz-do-not-send="true" href="mailto:pw_forum-bounces@pwscf.org" target="_blank">pw_forum-bounces@pwscf.org</a>>
on behalf of Paolo Giannozzi <<a moz-do-not-send="true" href="mailto:p.giannozzi@gmail.com" target="_blank">p.giannozzi@gmail.com</a>><br>
<b>Sent:</b>
Sunday, May 15,
2016 3:43 PM<br>
<b>To:</b> PWSCF
Forum<br>
<b>Subject:</b>
Re: [Pw_forum] mpi
error using pw.x</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div>
<div>
<div>Please
tell us what
is wrong and
we will fix
it.<br>
<br>
</div>
Seriously:
nobody can
answer your
question
unless you
specify, as a
strict
minimum, input
data, number
of processors
and type of
parallelization
that trigger
the error, and
where the
error occurs
(at startup,
later, in the
middle of the
run, ...).<br>
<br>
</div>
<div>Paolo<br>
</div>
</div>
</div>
<div
class="gmail_extra"><br>
<div
class="gmail_quote">On
Sun, May 15,
2016 at 7:50 AM,
Chong Wang <span
dir="ltr">
<<a moz-do-not-send="true" href="mailto:ch-wang@outlook.com" target="_blank">ch-wang@outlook.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0
0 0
.8ex;border-left:1px
#ccc
solid;padding-left:1ex">
<div dir="ltr">
<div
style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<div>I
compiled
quantum
espresso 5.4
with intel mpi
and mkl 2016
update 3.</div>
<div><br>
</div>
<div>However,
when I ran
pw.x the
following
errors were
reported:</div>
<div><br>
</div>
<pre>
...
MPIR_Get_contextid_sparse_group(1330): Too many communicators (0/16384 free on this process; ignore_id=0)
Fatal error in PMPI_Cart_sub: Other MPI error, error stack:
PMPI_Cart_sub(242)...................: MPI_Cart_sub(comm=0xc400fcf3, remain_dims=0x7ffde1391dd8, comm_new=0x7ffde1391d30) failed
PMPI_Cart_sub(178)...................:
MPIR_Comm_split_impl(270)............:
MPIR_Get_contextid_sparse_group(1330): Too many communicators (0/16384 free on this process; ignore_id=0)
Fatal error in PMPI_Cart_sub: Other MPI error, error stack:
PMPI_Cart_sub(242)...................: MPI_Cart_sub(comm=0xc400fcf3, remain_dims=0x7ffc02ad7eb8, comm_new=0x7ffc02ad7e10) failed
PMPI_Cart_sub(178)...................:
MPIR_Comm_split_impl(270)............:
MPIR_Get_contextid_sparse_group(1330): Too many communicators (0/16384 free on this process; ignore_id=0)
Fatal error in PMPI_Cart_sub: Other MPI error, error stack:
PMPI_Cart_sub(242)...................: MPI_Cart_sub(comm=0xc400fcf3, remain_dims=0x7fffb24e60f8, comm_new=0x7fffb24e6050) failed
PMPI_Cart_sub(178)...................:
MPIR_Comm_split_impl(270)............:
MPIR_Get_contextid_sparse_group(1330): Too many communicators (0/16384 free on this process; ignore_id=0)
</pre>
<br>
<p>I googled and found that this might be caused by hitting the OS limit on the number of open files. However, after I increased the open-file limit per process from 1024 to 40960, the error persists.</p>
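<p>(For reference, roughly how I raised the limit, as a sketch; it has to be set in the shell that launches mpirun:)</p>
<pre>
ulimit -n          # was 1024
ulimit -n 40960    # raise the per-process open-file limit, then re-run mpirun
</pre>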
<p><br>
</p>
<p>What's
wrong here?</p>
<p><br>
</p>
<p>Chong Wang</p>
<p>Ph. D.
candidate</p>
<p>Institute
for Advanced
Study,
Tsinghua
University,
Beijing,
100084</p>
</div>
</div>
<br>
_______________________________________________<br>
Pw_forum
mailing list<br>
<a moz-do-not-send="true" href="mailto:Pw_forum@pwscf.org" target="_blank">Pw_forum@pwscf.org</a><br>
<a moz-do-not-send="true" href="http://pwscf.org/mailman/listinfo/pw_forum" rel="noreferrer" target="_blank">http://pwscf.org/mailman/listinfo/pw_forum</a><span><font
color="#888888"><br>
</font></span></blockquote>
<span></span></div>
<span><font
color="#888888"><br>
<br
clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>Paolo
Giannozzi,
Dip. Scienze
Matematiche
Informatiche e
Fisiche,<br>
Univ. Udine,
via delle
Scienze 208,
33100 Udine,
Italy<br>
Phone <a
moz-do-not-send="true"
href="tel:%2B39-0432-558216" value="+390432558216" target="_blank">+39-0432-558216</a>,
fax
<a
moz-do-not-send="true"
href="tel:%2B39-0432-558222" value="+390432558222" target="_blank">+39-0432-558222</a><br>
<br>
</div>
</div>
</div>
</div>
</div>
</font></span></div>
</div>
</div>
</div>
</div>
</div>
<br>
_______________________________________________<br>
Pw_forum mailing list<br>
<a moz-do-not-send="true"
href="mailto:Pw_forum@pwscf.org"
target="_blank">Pw_forum@pwscf.org</a><br>
<a moz-do-not-send="true"
href="http://pwscf.org/mailman/listinfo/pw_forum"
rel="noreferrer"
target="_blank">http://pwscf.org/mailman/listinfo/pw_forum</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<span class="HOEnZb"><font
color="#888888">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>Paolo Giannozzi,
Dip. Scienze
Matematiche
Informatiche e
Fisiche,<br>
Univ. Udine, via
delle Scienze 208,
33100 Udine, Italy<br>
Phone <a
moz-do-not-send="true"
href="tel:%2B39-0432-558216" value="+390432558216" target="_blank">+39-0432-558216</a>,
fax
<a
moz-do-not-send="true"
href="tel:%2B39-0432-558222" value="+390432558222" target="_blank">+39-0432-558222</a><br>
<br>
</div>
</div>
</div>
</div>
</div>
</font></span></div>
<span class="HOEnZb"><font
color="#888888">
</font></span></div>
<span class="HOEnZb"><font
color="#888888">
</font></span></div>
<span class="HOEnZb"><font
color="#888888">
</font></span></div>
<span class="HOEnZb"><font color="#888888">
</font></span></div>
<span class="HOEnZb"><font color="#888888">
<br>
_______________________________________________<br>
Pw_forum mailing list<br>
<a moz-do-not-send="true"
href="mailto:Pw_forum@pwscf.org"
target="_blank">Pw_forum@pwscf.org</a><br>
<a moz-do-not-send="true"
href="http://pwscf.org/mailman/listinfo/pw_forum"
rel="noreferrer" target="_blank">http://pwscf.org/mailman/listinfo/pw_forum</a><br>
</font></span></blockquote>
<span class="HOEnZb"><font color="#888888">
</font></span></div>
<span class="HOEnZb"><font color="#888888">
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>Paolo Giannozzi, Dip. Scienze
Matematiche Informatiche e
Fisiche,<br>
Univ. Udine, via delle Scienze
208, 33100 Udine, Italy<br>
Phone <a moz-do-not-send="true"
href="tel:%2B39-0432-558216"
value="+390432558216"
target="_blank">+39-0432-558216</a>,
fax <a moz-do-not-send="true"
href="tel:%2B39-0432-558222"
value="+390432558222"
target="_blank">+39-0432-558222</a><br>
<br>
</div>
</div>
</div>
</div>
</div>
</font></span></div>
</div>
</div>
</div>
</div>
<br>
_______________________________________________<br>
Pw_forum mailing list<br>
<a moz-do-not-send="true" href="mailto:Pw_forum@pwscf.org">Pw_forum@pwscf.org</a><br>
<a moz-do-not-send="true"
href="http://pwscf.org/mailman/listinfo/pw_forum"
rel="noreferrer" target="_blank">http://pwscf.org/mailman/listinfo/pw_forum</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div class="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>Paolo Giannozzi, Dip. Scienze Matematiche
Informatiche e Fisiche,<br>
Univ. Udine, via delle Scienze 208, 33100 Udine,
Italy<br>
Phone +39-0432-558216, fax +39-0432-558222<br>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Pw_forum mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Pw_forum@pwscf.org">Pw_forum@pwscf.org</a>
<a class="moz-txt-link-freetext" href="http://pwscf.org/mailman/listinfo/pw_forum">http://pwscf.org/mailman/listinfo/pw_forum</a></pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
PhD. Research Fellow,
Dept. of Physics & Materials Science,
City University of Hong Kong
Tel: +852 3442 4000
Fax: +852 3442 0538</pre>
</body>
</html>