<div><div dir="auto">Dear Pietro, dear Giuseppe,</div></div><div dir="auto">Thanks, indeed my comment on speedup was hasty and didn’t make sense. Thanks for the suggestions regarding increasing number of mpi processes and not using nt.</div><div dir="auto">Best,</div><div dir="auto">Michal</div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, 30 May 2019 at 16:18, Pietro Delugas <<a href="mailto:pdelugas@sissa.it">pdelugas@sissa.it</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<p><font size="-1">Dear Michal <br>
</font></p>
<p><font size="-1">3.7x with respect to what ? <br>
</font></p>
<p><font size="-1">your cut and paste refers to the wall time and
the total cpu time per mpi task, they differ because you are
using thread parallelism. <br>
</font></p>
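If the 3.7 comes from comparing those two numbers, it is only the CPU-to-wall ratio of this run, not a speedup with respect to a reference run. A quick check with the figures you pasted:

    1d 4h27m CPU  ~ 28.45 h
    7h43m    WALL ~  7.72 h
    28.45 / 7.72  ~  3.7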
<p><font size="-1">if you don't have memory issues I would try to
increase the number of mpi processes decreasing the number of
thread and usually when the number on MPI tasks is smaller than
the dimesions of the fft grid it is better to avoid using nt. <br>
</font></p>
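For example, keeping the same 100 cores you are using now, a launch along these lines (the input file name here is just a placeholder):

    export OMP_NUM_THREADS=1
    mpirun -np 100 pw.x -npool 5 -in pw.in > pw.out

This gives 20 MPI tasks per pool for the R & G space division instead of 4, and no task groups.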
<p><font size="-1">Hope it helps <br>
</font></p>
<p><font size="-1">regards <br></font></p></div><div text="#000000" bgcolor="#FFFFFF"><p><font size="-1">
</font></p>
<p><font size="-1">Pietro<br>
</font></p>
<div class="m_-5753682005103184333moz-cite-prefix">On 30/05/19 16:42, Michal Krompiec
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hello,<br>
<div>I am trying to run a calculation on a 2D slab with a bit of
adsorbate (119 atoms in total), and I would like to
parallelize it as much as possible. I am using a 3 3 1
Monkhorst-Pack grid (so I have 5 k-points). </div>
<div>I tried using -npool 5 -nt 4 using 20 MPI processes and 5
threads per process but, as it seems, the speedup was just
3.7x:</div>
<div> PWSCF : 1d 4h27m CPU 7h43m WALL<br>
</div>
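For reference, the job was launched roughly as follows (the input file name is just a placeholder):

    export OMP_NUM_THREADS=5
    mpirun -np 20 pw.x -npool 5 -nt 4 -in slab.in > slab.out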
What could have gone wrong, and is there anything "obvious" I can do to diagnose the problem? I am using QE 6.4rc, compiled with gcc and OpenMPI, without ELPA.

Best regards,

Michal Krompiec

Merck KGaA and University of Southampton