<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">Dear all,<br>
</div>
<div class="moz-cite-prefix">Just some quick remarks:</div>
<div class="moz-cite-prefix">OpenMP in QE is added on top of MPI, so
with just one 32 / 24 cores, it is more or less useless to use
OpenMP at all, this is true for almost any code with hybrid
parallelization.<br>
</div>
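<div class="moz-cite-prefix">As a rough illustration (executable path, input file name and core counts are placeholders to adapt to your machine), a pure-MPI run on a single node looks like the first variant below; the hybrid variant only makes sense if QE was built with --enable-openmp:</div>
<pre>
# pure MPI: one rank per physical core, no OpenMP threads
export OMP_NUM_THREADS=1
mpirun -np 32 pw.x -inp scf.in > scf.out

# hybrid MPI + OpenMP (rarely worth it on a single node)
export OMP_NUM_THREADS=4
mpirun -np 8 pw.x -inp scf.in > scf.out
</pre>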
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">You claim, you have played around with
OpenMPI/OpenMP settings but nothing changed, which settings?</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">If you don't share your system, at
least the computational relevant parameters, like FFT grid size,
number of plane wave coefficients, type of pseudo potentials, type
of calculation etc nobody can guess what kind of performance you
may or may not expect.<br>
</div>
<div class="moz-cite-prefix">E.g. if may also that your simulations
are just too small, if they run just for seconds on a quad core
machine, nothing will change going to more cores.<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">The debug environment switch for MKL
depends on the version of the MKL itself, more recent versions
need more 'sophisticated' workarounds, but if you google you will
find them, too.</div>
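<div class="moz-cite-prefix">For reference, a rough sketch of the two workarounds that are commonly reported (which one applies depends on your exact MKL release, so benchmark before relying on either):</div>
<pre>
# older MKL releases (roughly up to 2020.0): the debug switch is enough
export MKL_DEBUG_CPU_TYPE=5

# newer releases dropped that switch; the usual trick is to preload a
# one-line override so that MKL believes it runs on an Intel CPU
echo 'int mkl_serv_intel_cpu_true() { return 1; }' > fakeintel.c
gcc -shared -fPIC -o libfakeintel.so fakeintel.c
export LD_PRELOAD=$PWD/libfakeintel.so
</pre>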
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">Last point, if you don't specify
exactly how your workstations are configured, nobody knows what
you are talking about, same for compilers etc, they all carry a
version number.<br>
</div>
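<div class="moz-cite-prefix">Something along these lines is usually enough to collect that information (Linux commands, assuming OpenMPI and GNU compilers; the last line assumes a pw.x output file called scf.out):</div>
<pre>
lscpu                                  # CPU model, core count, NUMA layout
free -h                                # installed memory
mpirun --version                       # MPI flavour and version
gfortran --version                     # compiler version
grep -m1 "Parallel version" scf.out    # how pw.x was actually run
</pre>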
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">best,</div>
<div class="moz-cite-prefix">Tobias</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">On 11/17/20 7:24 PM, Pamela Whitfield
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAOxScGNvz7i6X=4=9bTMYM8-GpQx=-ryeg8B4sCWBGaVmMfyXg@mail.gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Michal<br>
</div>
<div><br>
</div>
<div>I have a very similar use-case and looked into many of the
same issues when I got my Threadripper 3960X system at the
beginning of the year to supplement my old dual-Xeon setup. In
the past few days I've been revisiting compilation as I got
hold of a Quadro GV100 for GPU acceleration of my
optimizations.</div>
<div><br>
</div>
<div>Basically it seems as though Zen2 either can't run code compiled
for both MPI and OpenMP at all, or runs it poorly even when it
does work. </div>
<div>Best performance for pw.x on v6.5 (I've been playing with
GIPAW and there's no 6.6-compatible version yet) has been with
a simple GCC/OpenMPI build without OpenMP threading, using
about 20 MPI ranks on my 24-core CPU. Compiling with GCC or
the PGI compiler made little difference, although only the more
recent PGI compilers have Zen2 optimizations.
</div>
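<div>For what it's worth, the plain MPI-only build I mean looks roughly like this (library names and the rank count are placeholders from my setup; QE's configure also honours LAPACK_LIBS if OpenBLAS doesn't cover LAPACK for you):</div>
<pre>
./configure MPIF90=mpif90 CC=gcc \
    BLAS_LIBS="-lopenblas" \
    FFT_LIBS="-lfftw3"          # note: no --enable-openmp
make pw
export OMP_NUM_THREADS=1
mpirun -np 20 pw.x -inp job.in > job.out
</pre>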
<div>I get little benefit from Intel MKL over
openblas/lapack/fftw3 even with the debug tweaks, etc. <br>
</div>
<div>Puget Systems' numbers with other programs suggest that pure
OpenMP performs better than OpenMPI on Threadripper, but
I find the opposite with QE.</div>
<div>I did try disabling hyperthreading in the BIOS but that
made no difference to the performance.<br>
</div>
<div><br>
</div>
<div>GPU compilation really shows the issue with MPI/OpenMP
clashing. With the Xeons I could compile code with MKL that
would run well on a Quadro K6000 while offloading to the CPU
with MPI when needed. It could still be a compiler issue (you
have to use PGI with the GPU version), but it just doesn't work
with the 3960X, and some things don't thread well with pure
OpenMP (e.g. dftd3 versus dftd2), so I'll still need to keep
separately compiled versions of 6.5 for different problems.</div>
<div><br>
</div>
<div>BTW, with a dual-CPU system you may benefit from pinning
processes and threads to particular CPUs - it works on the dual
Xeon in any case. My Threadripper balances the load across the
cores in a pretty dynamic manner, and that's on a single socket.</div>
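<div>For the pinning, something like the following OpenMPI invocation is what I mean (exact option names can differ between OpenMPI versions, so check the mpirun man page for yours):</div>
<pre>
# spread the ranks over both sockets and pin each rank to a core
mpirun -np 20 --map-by socket --bind-to core --report-bindings \
    pw.x -inp job.in > job.out
</pre>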
<div><br>
</div>
<div>Best regards</div>
<div>Pam Whitfield<br>
</div>
<div><br>
</div>
<div>Independent Consultant<br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
Message: 1<br>
Date: Mon, 16 Nov 2020 15:19:04 +0100<br>
From: Michal Husak <<a href="mailto:Michal.Husak@vscht.cz"
target="_blank" moz-do-not-send="true">Michal.Husak@vscht.cz</a>><br>
To: Quantum ESPRESSO users Forum <<a
href="mailto:users@lists.quantum-espresso.org"
target="_blank" moz-do-not-send="true">users@lists.quantum-espresso.org</a>><br>
Subject: [QE-users] Sub optimal performance on 32 core AMD
machine<br>
Message-ID: <<a
href="mailto:fe59d3a8-2ace-4f66-a5c6-e83b01387c61@cln92.vscht.cz"
target="_blank" moz-do-not-send="true">fe59d3a8-2ace-4f66-a5c6-e83b01387c61@cln92.vscht.cz</a>><br>
Content-Type: text/plain; charset="UTF-8"; format=flowed<br>
<br>
I purchased a new PC with 2x 16-core AMD EPYC processors: 32 <br>
cores, 64 threads with hyperthreading ...<br>
I was hoping my QM programs (Quantum ESPRESSO, CASTEP) would run on the new<br>
system faster than on my old 4-core Intel i7 machine (8 years old) ...<br>
<br>
To my great surprise, almost the opposite is true :-(.<br>
My main task is SCF and geometry optimization of medium-sized organic <br>
molecular crystals (about 100 C, H, N atoms per unit cell) ...<br>
<br>
I was playing with OpenMPI/OpenMP setup changes ...<br>
I was playing with the secret MKL_DEBUG_CPU_TYPE=5 environment variable <br>
(the workaround for the slow execution of Intel MKL-compiled code on AMD) ...<br>
<br>
Nothing helps; the best speed is obtained when I use only 4 cores <br>
(OpenMPI or OpenMP - results similar) ...<br>
Using 16 or 32 cores gives almost no benefit ...<br>
The CPU load for runs on 1/4/8/16/32 cores corresponds to the number of CPUs <br>
requested, i.e. they all try to do something ...<br>
<br>
Any idea what I should check or try to optimize?<br>
<br>
Maybe the bottleneck is memory access, not CPU power (I have 128 <br>
GB of RAM that is almost unused)?<br>
<br>
Michal Husak<br>
<br>
UCT Prague<br>
<br>
<br>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
Quantum ESPRESSO is supported by MaX (<a class="moz-txt-link-abbreviated" href="http://www.max-centre.eu">www.max-centre.eu</a>)
users mailing list <a class="moz-txt-link-abbreviated" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>
<a class="moz-txt-link-freetext" href="https://lists.quantum-espresso.org/mailman/listinfo/users">https://lists.quantum-espresso.org/mailman/listinfo/users</a></pre>
</blockquote>
<p><br>
</p>
<pre class="moz-signature" cols="72">--
M.Sc. Tobias Klöffel
=======================================================
HPC (High Performance Computing) group
Erlangen Regional Computing Center(RRZE)
Friedrich-Alexander-Universität Erlangen-Nürnberg
Martensstr. 1
91058 Erlangen
Room: 1.133
Phone: +49 (0) 9131 / 85 - 20101
=======================================================
E-mail: <a class="moz-txt-link-abbreviated" href="mailto:tobias.kloeffel@fau.de">tobias.kloeffel@fau.de</a></pre>
</body>
</html>