<div dir="ltr"><div>Dear Ryan<br><br>the band parallelization for exact-exchange has undergone some reshuffling. See this paper: Taylor A. Barnes et al., "Improved treatment of exact exchange in Quantum ESPRESSO", Computer Physics Communications, May 31, 2017. From what I understand, the starting calculation (no exact exchange) is performed as usual, with no band parallelization, while parallellization on pairs of bands is applied to the exact-exchange calculation. I am forwarding to Taylor and Thorsten who may know better than me (Thorsten: I haven't forgotten your patch of two months ago! it is in the pipeline of things to be done)<br><br></div>Paolo<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 10, 2017 at 9:55 PM, Ryan McAvoy <span dir="ltr"><<a href="mailto:mcavor11@gmail.com" target="_blank">mcavor11@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello,<div><br></div><div>I am Ryan L. McAvoy, a PhD student in Giulia Galli's group. I am trying to use the band parallelization for hybrids in QE 6.1 and I am finding unexpected behavior. I have created a test case on a small test system (the zinc dimer) to illustrate the following behavior. I have attached those files.</div><div><br></div><div><ol><li>The output informing the user that there is band-parallelization for a hybrid functional appears to have been broken as changing the number of band groups with -nbgrp does not trigger the output that should occur from subroutine parallel_info() in environment.f90, which would indicate it believes nbgrp to be 1. This may be triggered by the statement "mp_start_bands(1 ,...." at line 94 of mp_global.f90 as that I have checked that "nband_" is the correct value after "CALL get_command_line()" in mp_global.f90 </li><li>The number of planewaves appears to be distributed over all of the processors even at large numbers of "nband_". I have checked that this is more than an output error by printing the lda(npw) at each run of h_psi and it exactly conforms to what one would expect by dividing the total number of planewaves by the number of processors(plus a factor of 1/2 for gamma tricks).</li><li>Behavior 2 prevents me from scaling to as large a number of processors as I could with QE 6.0. As using QE 6.0 hybrids on C60, I could run on 8000+ processors on the BGQ machine Cetus@Argonne National Lab but with QE 6.1 the output says that it has run out of planewaves even with a large number of band groups(I have demonstrated this behavior below on the zinc dimer on 640 Intel processors to aid reproducibility)</li></ol><div>Is #2 the intended behavior for this new parallelization method?</div><div><br></div><div>Thank you for your time and attention to this matter,</div><div>Ryan L. 
My run scripts are of the form:

module load mkl/11.2
module load intelmpi/5.0+intel-15.0

QE_BIN_DIR=PUTPATHHERE/qe-6.1/bin

export MPI_TASKS=$SLURM_NTASKS

exe=${QE_BIN_DIR}/pw.x

export OMP_NUM_THREADS=1

nband=10
mpirun -n $MPI_TASKS ${exe} -nb $nband < ${fileVal}.in > ${fileVal}_nband${nband}_nproc${MPI_TASKS}.out
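To make the comparison easier to reproduce, a sweep along these lines (same modules and variables as the script above; the nband values are only examples) can be used to generate outputs at several band-group counts:

# Sketch: rerun the same input with several band-group counts, keeping one
# output file per value of nband. Assumes the environment set up in the
# script above (MPI_TASKS, exe, fileVal).
for nband in 1 2 5 10; do
    mpirun -n $MPI_TASKS ${exe} -nb $nband < ${fileVal}.in > ${fileVal}_nband${nband}_nproc${MPI_TASKS}.out
done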
_______________________________________________
Q-e-developers mailing list
Q-e-developers@qe-forge.org
http://qe-forge.org/mailman/listinfo/q-e-developers
-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222