<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Ciao Gabriele,<br>
<br>
ok, this is a memory leak bug:<br>
the ELPA communicators must be destroyed after the diagonalization has
been performed.<br>
I'm wondering if there is an ELPA driver to do this (similar to
GET_ELPA_ROW_COL_COMMS)<br>
instead of calling MPI subroutines directly.<br>
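Something along these lines, I mean (purely hypothetical, as far as I
know no such routine exists in the ELPA API):<br>
<pre wrap="">
SUBROUTINE free_elpa_row_col_comms( mpi_comm_rows, mpi_comm_cols, ierr )
   ! hypothetical counterpart of GET_ELPA_ROW_COL_COMMS: release the two
   ! communicators that it created via MPI_Comm_split
   IMPLICIT NONE
   INCLUDE 'mpif.h'
   INTEGER, INTENT(INOUT) :: mpi_comm_rows, mpi_comm_cols
   INTEGER, INTENT(OUT)   :: ierr
   CALL MPI_Comm_free( mpi_comm_rows, ierr )
   IF( ierr == MPI_SUCCESS ) CALL MPI_Comm_free( mpi_comm_cols, ierr )
END SUBROUTINE free_elpa_row_col_comms
</pre>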
<br>
carlo<br>
<br>
On 23/10/2013 10:39, Gabriele Sclauzero wrote:<br>
</div>
<blockquote cite="mid:8BB3D70C-EF80-4365-8F2D-0F9E6CF99BE9@epfl.ch"
type="cite">
<div>
<div text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">Ciao Carlo,<br>
<br>
thanks for the hint. As far as I can see, the only point
where this kind of problem could arise is here<br>
<br>
>>> grep -A6 __ELPA ./Modules/dspev_drv.f90</div>
<div class="moz-cite-prefix"><br>
<pre wrap="">
--
#ifdef __ELPA
     INTEGER :: nprow, npcol, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols
#endif

     IF( SIZE( s, 1 ) /= lds ) &
        CALL errore( ' pdsyevd_drv ', ' wrong matrix leading dimension ', 1 )
     !
--
#ifdef __ELPA
     CALL BLACS_Gridinfo( ortho_cntx, nprow, npcol, my_prow, my_pcol )
     CALL GET_ELPA_ROW_COL_COMMS( ortho_comm, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols )
     CALL SOLVE_EVP_REAL( n, n, s, lds, w, vv, lds, nb, mpi_comm_rows, mpi_comm_cols )
     IF( tv ) s = vv
     IF( ALLOCATED( vv ) ) DEALLOCATE( vv )
#else
--
</pre>
<br>
<br>
because the subroutine GET_ELPA_ROW_COL_COMMS internally
calls MPI_Comm_split, which seems to be the MPI subroutine
that crashed (according to the error message below).<br>
Since the communicators mpi_comm_rows and mpi_comm_cols are
used by ELPA only within this subroutine, would it be safe to
call<br>
CALL MPI_Comm_free( mpi_comm_rows, ierr )<br>
CALL MPI_Comm_free( mpi_comm_cols, ierr )<br>
just before #else ?<br>
<br>
If you don't see any potential problem, I would try this
solution.<br>
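In full, the patched block would then read (just a sketch, assuming an
INTEGER :: ierr is already declared in the routine):<br>
<pre wrap="">
#ifdef __ELPA
     CALL BLACS_Gridinfo( ortho_cntx, nprow, npcol, my_prow, my_pcol )
     CALL GET_ELPA_ROW_COL_COMMS( ortho_comm, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols )
     CALL SOLVE_EVP_REAL( n, n, s, lds, w, vv, lds, nb, mpi_comm_rows, mpi_comm_cols )
     IF( tv ) s = vv
     IF( ALLOCATED( vv ) ) DEALLOCATE( vv )
     ! free the row/column communicators created above, so that they do
     ! not accumulate across the repeated diagonalizations of a relax run
     CALL MPI_Comm_free( mpi_comm_rows, ierr )
     CALL MPI_Comm_free( mpi_comm_cols, ierr )
#else
</pre>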
<br>
<br>
Best,<br>
<br>
Gabriele<br>
<br>
<br>
<br>
On 10/23/2013 10:06 AM, Carlo Cavazzoni wrote:<br>
</div>
</div>
<blockquote type="cite">
<div text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix"> </div>
<blockquote cite="mid:52678395.9020205@cineca.it"
type="cite">
<div class="moz-cite-prefix">Ciao Gabriele,<br>
<br>
as far as I understand, with ELPA (which is built on a much
deeper communicator hierarchy than ScaLAPACK)<br>
you hit an MPI environment limit (the number of
communicators).<br>
Even if the limit could somehow be increased, it sounds
like<br>
somewhere a communicator is created and not destroyed in
the relax workflow.<br>
I guess it could be something related to a temporary
communicator created to distribute atoms to processors.<br>
I don't remember exactly, but it can be checked easily by
searching for communicator initialization calls, e.g. with a
grep like the one below<br>
<br>
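Something like this, for instance (just a sketch; the directories and
the exact call names may need adjusting):<br>
<pre wrap="">
# list the places that create communicators ...
grep -rni "mpi_comm_split\|mpi_comm_create\|mpi_comm_dup" Modules PW
# ... and compare against the matching MPI_Comm_free calls
grep -rni "mpi_comm_free" Modules PW
</pre>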
carlo<br>
<br>
On 22/10/2013 17:35, Gabriele Sclauzero wrote:<br>
</div>
<blockquote
cite="mid:2B08A377-8F56-46FD-A57E-A1A526DF011B@epfl.ch"
type="cite">
<pre wrap="">Dear all,
I've recently started using ELPA in place of ScaLAPACK for large-scale calculations and I indeed see a very good performance improvement.
Unfortunately, I've found a recurring problem when running relax calculations (though I believe it is not due to the relaxation itself). The program crashes because of an MPI-related issue. The error message from the system looks as follows:
Abort(1) on node 1732 (rank 1732 in comm 1140850688): Fatal error in PMPI_Comm_split: Other MPI error, error stack:
PMPI_Comm_split(474).................: MPI_Comm_split(comm=0xc4000004, color=2, key=27, new_comm=0x1fffff7478) failed
PMPI_Comm_split(456).................:
MPIR_Comm_split_impl(228)............:
MPIR_Get_contextid_sparse_group(1071): Too many communicators
The crash happens during the Davidson diagonalization after a few ionic cycles of the relaxation (roughly 200 Davidson diagonalizations). It happens both with v.5.0.3 and with the latest SVN revision. If I compile with ScaLAPACK in place of ELPA, both versions work fine (but are slower...).
Compilation details (see also attached make.sys):
BG/Q machine, XLF 14.1, XLC 12.1, ESSL 5.1, Scalapack 2.0.2
./configure --enable-openmp --with-elpa
The calculation was run on 256 nodes with the following command line:
runjob -n 2048 -p 8 --envs OMP_NUM_THREADS=4 --cwd [...] : /home/sclauzer/Codes/espresso/5.0.3_ELPA/bin/pw.x -nband 1 -npool 1 -ndiag 1024 -ntg 4 -in [...]
The system is quite large, a slab with ~1400 atoms and ~3000 bands. I don't know if the problem would show up for a smaller system or on fewer nodes, but I can try to provide a smaller example in order to investigate the problem more easily. Hopefully, this is something simple that the ELPA/Scalapack and BGQ experts among you can spot at a glance.
Best,
Gabriele
</pre>
</blockquote>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br>
<div style="color: #7E7E7E;">
<i>§ Gabriele Sclauzero, EPFL SB ITP CSEA</i><br>
&nbsp;&nbsp;&nbsp;PH H2 462, Station 3, CH-1015 Lausanne
</div>
<br>
<br>
<br>
<pre wrap="">_______________________________________________
Q-e-developers mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Q-e-developers@qe-forge.org">Q-e-developers@qe-forge.org</a>
<a class="moz-txt-link-freetext" href="http://qe-forge.org/mailman/listinfo/q-e-developers">http://qe-forge.org/mailman/listinfo/q-e-developers</a>
</pre>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
Ph.D. Carlo Cavazzoni
SuperComputing Applications and Innovation Department
CINECA - Via Magnanelli 6/3, 40033 Casalecchio di Reno (Bologna)
Tel: +39 051 6171411 Fax: +39 051 6132198
<a class="moz-txt-link-abbreviated" href="http://www.cineca.it">www.cineca.it</a></pre>
</body>
</html>