<div dir="ltr">Hello Pamela,<div>I don't know whether it is clear or not, so I apologize if I repeat obvious concepts.</div><div>I just bought a Threadripper 3990X with 64 core 128 threads. As far as I remember the 3960X should have 24 core - 48 threads.</div><div>It is very very important to don't use more than 24 cores on 3960X . Simply forget about hyperthreading. No need to disable it in the BIOS, but simply count the real number of cores.</div><div><br></div><div>I use gcc 9.3.0 and the new gcc 10 should be even better for AMD cpus.</div><div>With openblas 0.3.12 I found that my 8-cores home Ryzen 3800X is fast as a Xeon 12 cores E5-2680 using quantum espresso 6.4.1</div><div>Carlo</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mar 17 nov 2020 alle ore 19:24 Pamela Whitfield <<a href="mailto:whitfieldps1@gmail.com">whitfieldps1@gmail.com</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Michal<br></div><div><br></div><div>I have a very similar use-case and looked into many of the same issues when I got my Threadripper 3960X system at the beginning of the year to supplement my old dual-Xeon setup. In the past few days I've been revisiting compilation as I got hold of a Quadro GV100 for GPU acceleration of my optimizations.</div><div><br></div><div>Basically it seems as though code compiled for Zen2 either can't handle code compiled for both MPI and OpenMP at all, or does so poorly even when it runs. </div><div>Best performance for pw.x on v6.5 (I've been playing with GIPAW and there's no 6.6 compatible version yet) has been with a simple gcc OpenMPI compilation without openmp threading and with about 20 MPI cores on my 24 core CPU. Compiling with GCC or PGI compiler made little difference, although only the more recent PGI compilers will have zen2 optimization.
I get little benefit from Intel MKL over OpenBLAS/LAPACK/FFTW3, even with the debug tweaks, etc.
Puget Systems' numbers with other programs suggest that OpenMP-only builds perform better than OpenMPI on Threadripper, but I find the opposite with QE.
I did try disabling hyperthreading in the BIOS, but that made no difference to the performance.

GPU compilation really shows the issue with MPI/OpenMP clashing. With the Xeons I could compile code with MKL that would run well on a Quadro K6000 while offloading to the CPU with MPI when needed. It could still be a compiler issue (I have to use PGI for the GPU version), but it just doesn't work with the 3960X, and some things don't thread well with pure OpenMP (e.g. dftd3 versus dftd2), so I'll still need separately compiled versions of 6.5 for different problems.

BTW, with a dual-CPU system you may benefit from pinning threads to particular CPUs; it works on the dual Xeon in any case (a sketch follows below). My Threadripper balances the load across the cores in a pretty dynamic manner, and that's on a single socket.

Best regards
Pam Whitfield

Independent Consultant
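For the pinning point above, a minimal sketch using standard OpenMPI binding flags and OpenMP affinity variables (the rank count is illustrative; --report-bindings prints the resulting layout so the pinning can be verified):

  # Alternate MPI ranks between the two sockets, one rank pinned per core.
  mpirun -np 24 --map-by socket --bind-to core --report-bindings \
      pw.x -inp scf.in > scf.out

  # For OpenMP threads, the standard affinity variables give similar pinning.
  export OMP_PLACES=cores
  export OMP_PROC_BIND=close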
Message: 1
Date: Mon, 16 Nov 2020 15:19:04 +0100
From: Michal Husak <Michal.Husak@vscht.cz>
To: Quantum ESPRESSO users Forum <users@lists.quantum-espresso.org>
Subject: [QE-users] Sub optimal performance on 32 core AMD machine
Message-ID: <fe59d3a8-2ace-4f66-a5c6-e83b01387c61@cln92.vscht.cz>
Content-Type: text/plain; charset="UTF-8"; format=flowed

I purchased a new PC with 2x 16-core AMD EPYC processors: 32 cores, 64 threads with hyperthreading...
I was hoping my QM programs (Quantum ESPRESSO, CASTEP) would run faster on the new system than on my old 4-core Intel i7 machine (8 years old)...

To my great surprise, almost the opposite is true :-(.
My main task is SCF and geometry optimization of medium-sized organic molecular crystals (about 100 C, H, N atoms per unit cell)...

I have played with OpenMPI/OpenMP setup changes...
I have played with the secret MKL_DEBUG_CPU_TYPE=5 parameter (which works around the slow code paths that Intel MKL selects on AMD CPUs)...
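For reference, a minimal sketch of how that variable is applied (it only affects older MKL releases; Intel reportedly removed it in MKL 2020 Update 1 and later, so treat it as a legacy workaround):

  # Force MKL's fast AVX2 code paths on AMD CPUs (older MKL only).
  export MKL_DEBUG_CPU_TYPE=5
  mpirun -np 4 pw.x -inp scf.in > scf.out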

Nothing helps: the best speed is obtained when I use only 4 cores (OpenMPI or OpenMP, with similar results)...
Using 16 or 32 cores gives almost no benefit...
The CPU load for runs on 1/4/8/16/32 cores corresponds to the number of CPUs set, i.e. they are at least trying to do something...
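A minimal sketch of a systematic scaling check along these lines, assuming an OpenMPI build of pw.x and GNU time (scf.in is a placeholder input):

  # Time the same SCF input at increasing rank counts to find the knee
  # of the scaling curve.
  for n in 1 2 4 8 16 32; do
      /usr/bin/time -f "$n ranks: %e s" \
          mpirun -np $n --bind-to core pw.x -inp scf.in > scf_$n.out
  done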

Any idea what I should check or try to optimize?

Maybe the bottleneck is memory access rather than CPU power (I have 128 GB of RAM that sits almost unused)?
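If memory bandwidth is the suspect, a minimal sketch for inspecting the NUMA layout and spreading ranks across it (lscpu, numactl, and the OpenMPI flags are standard tools; the rank count is illustrative):

  # A dual-socket EPYC typically exposes several NUMA nodes.
  lscpu | grep -i numa
  numactl --hardware

  # Distribute ranks evenly across NUMA nodes instead of packing them.
  mpirun -np 32 --map-by numa --bind-to core --report-bindings \
      pw.x -inp scf.in > scf.out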

Michal Husak

UCT Prague

_______________________________________________
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list: users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

-- 
------------------------------------------------------------
Prof. Carlo Nervi  carlo.nervi@unito.it  Tel: +39 011 6707507/8
Fax: +39 011 6707855 - Dipartimento di Chimica, via
P. Giuria 7, 10125 Torino, Italy.  http://lem.ch.unito.it/

ICCC 2020 has been postponed to 2022.
ICCC 2022, 28 August - 2 September 2022, Rimini, Italy: http://www.iccc2020.com
International Conference on Coordination Chemistry (ICCC 2022)