<div dir="ltr"><div><div><div>Hi Ryan,<br><br></div> As Paolo said, band parallelization in QE 6.1 has been modified significantly. One of the main changes is that the local part of the calculation (basically, everything outside of PW/src/exx.f90) is performed as though there is only a single band group. This change enables the code to avoid duplicating all of the work associated with the local part of the calculation across each band group, which improves the efficiency of the parallelization, especially when running at scale.<br><br></div><div> As you observed, the new behavior in QE 6.1 is that the number of plane waves is distributed across all processors, independent of the number of band groups. When running with a very large number of processors, QE 6.1 may complain that there are not enough plane-waves and exit, just as would be the case if you were running using a local or semi-local functional. There is no fundamental reason why the code can't be modified to run with some processors having no plane-waves - in fact, I have done some work to create a patch that does exactly this. It isn't finished yet, but when it is I could share it with you if you would like.<br><br></div><div> That having been said, I am curious about whether you are actually benefiting from running on 8000 processors. From your output files for the zinc dimer, it looks like your best walltime with 20 procs (39 s) is about the same as your best walltime with 320 procs, with a significant fraction of the parallelization inefficiencies being associated with the diagonalization. I would be interested in knowing the following:<br><br> 1. How big are your production runs? Can you show me an example input file?<br><br></div><div> 2. Using QE 6.0, how does the walltime of an 8000+ processor calculation compare with the walltime of a 4000 or 2000 processor calculation?<br></div><br></div> 3. For a given number of processors and the optimal choices of -nb, how do your QE 6.0 walltimes compare with your QE 6.1 walltimes? Keep in mind that in QE 6.1 you can use task groups alongside band groups. As a rough rule-of-thumb, I would suggest setting -ntg to -nb/8 for any QE 6.1 calculations.<br><div><br></div><div> Also, one more observation: You don't appear to be running with ACE. If you recompile with ACE you are likely to see a significant speedup (often 5-10x) for your hybrid calculations.<br></div><div><br><div>Best,<br></div><div>Taylor<br></div><div><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 12, 2017 at 6:09 AM, Paolo Giannozzi <span dir="ltr"><<a href="mailto:paolo.giannozzi@uniud.it" target="_blank">paolo.giannozzi@uniud.it</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Dear Ryan<br><br>the band parallelization for exact-exchange has undergone some reshuffling. See this paper: Taylor A. Barnes et al., "Improved treatment of exact exchange in Quantum ESPRESSO", Computer Physics Communications, May 31, 2017. From what I understand, the starting calculation (no exact exchange) is performed as usual, with no band parallelization, while parallellization on pairs of bands is applied to the exact-exchange calculation. I am forwarding to Taylor and Thorsten who may know better than me (Thorsten: I haven't forgotten your patch of two months ago! 
Also, one more observation: you don't appear to be running with ACE. If you recompile with ACE you are likely to see a significant speedup (often 5-10x) for your hybrid calculations.

Best,
Taylor

On Wed, Jul 12, 2017 at 6:09 AM, Paolo Giannozzi <paolo.giannozzi@uniud.it> wrote:

Dear Ryan,

the band parallelization for exact exchange has undergone some reshuffling. See this paper: Taylor A. Barnes et al., "Improved treatment of exact exchange in Quantum ESPRESSO", Computer Physics Communications, May 31, 2017. From what I understand, the starting calculation (no exact exchange) is performed as usual, with no band parallelization, while parallelization over pairs of bands is applied to the exact-exchange calculation. I am forwarding this to Taylor and Thorsten, who may know better than me (Thorsten: I haven't forgotten your patch of two months ago! It is in the pipeline of things to be done).

Paolo

On Mon, Jul 10, 2017 at 9:55 PM, Ryan McAvoy <mcavor11@gmail.com> wrote:

Hello,

I am Ryan L. McAvoy, a PhD student in Giulia Galli's group. I am trying to use the band parallelization for hybrids in QE 6.1 and I am seeing unexpected behavior. I have created a test case on a small system (the zinc dimer) to illustrate the following; the files are attached.

1. The output informing the user that there is band parallelization for a hybrid functional appears to be broken: changing the number of band groups with -nbgrp does not trigger the output that should come from subroutine parallel_info() in environment.f90, which suggests the code believes nbgrp to be 1. This may be caused by the statement "mp_start_bands(1, ..." at line 94 of mp_global.f90, since I have checked that "nband_" has the correct value after "CALL get_command_line()" in mp_global.f90.

2. The number of plane waves appears to be distributed over all of the processors even for large values of "nband_". I have checked that this is more than an output error by printing lda (npw) at each call of h_psi, and it matches exactly what one would expect from dividing the total number of plane waves by the number of processors (plus a factor of 1/2 for gamma tricks). A quick way to inspect the reported distribution is shown after the run script below.

3. Behavior 2 prevents me from scaling to as large a number of processors as I could with QE 6.0. Using QE 6.0 hybrids on C60, I could run on 8000+ processors on the BG/Q machine Cetus at Argonne National Lab, but with QE 6.1 the output says that it has run out of plane waves even with a large number of band groups. (I have demonstrated this behavior on the zinc dimer on 640 Intel processors to aid reproducibility.)

Is #2 the intended behavior for this new parallelization method?

Thank you for your time and attention to this matter,
Ryan L. McAvoy

............................................................

My run scripts are of the form:

module load mkl/11.2
module load intelmpi/5.0+intel-15.0

QE_BIN_DIR=PUTPATHHERE/qe-6.1/bin

export MPI_TASKS=$SLURM_NTASKS

exe=${QE_BIN_DIR}/pw.x

export OMP_NUM_THREADS=1

nband=10   # number of band groups passed to pw.x via -nb
mpirun -n $MPI_TASKS ${exe} -nb $nband < ${fileVal}.in > ${fileVal}_nband${nband}_nproc${MPI_TASKS}.out
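For reference, a quick way to see the distribution described in point 2 directly from the output (in addition to the print I added in h_psi) is to look at the per-process G-vector counts that pw.x reports near the top of the run; something along these lines, with the file name following the naming used in the script above:

    # show the per-process stick/G-vector summary printed at the start of the output
    grep -A 5 "Parallelization info" ${fileVal}_nband${nband}_nproc${MPI_TASKS}.out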
_______________________________________________
Q-e-developers mailing list
Q-e-developers@qe-forge.org
http://qe-forge.org/mailman/listinfo/q-e-developers
-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222