Dear Quantum ESPRESSO developers and users,

I'm running a phonon calculation parallelized over representations/q points. My cluster has 24 cores per node, and I want to use as many nodes as possible to speed up the calculation.

I set the number of images equal to the number of nodes:

    mpirun -np NUMBER_OF_NODESx24 ph.x -nimage NUMBER_OF_NODES

With 4 nodes (4 images) or 8 nodes (8 images), the calculation finishes successfully. However, whenever more than 8 nodes are used, say 16 or 32, the run fails with the following error:

    Not diagonalizing because representation xx is not done

By the way, I would also like to reduce the I/O overhead by dropping the wf_collect option, but the following doesn't work (the numbers of processors and pools in the SCF calculation are the same as in the phonon calculation):

    mpirun -np NUMBER_OF_NODESx24 pw.x

ph.x complains:

    Error in routine phq_readin (1):
    pw.x run with a different number of processors. Use wf_collect=.true.

The beginning of the pw.x output:

    Parallel version (MPI), running on 96 processors
    R & G space division:  proc/nbgrp/npool/nimage = 96
    Waiting for input...
    Reading input from standard input

and the beginning of the ph.x output:

    Parallel version (MPI), running on 96 processors
    path-images division:  nimage = 4
    R & G space division:  proc/nbgrp/npool/nimage = 24

If I read these correctly, pw.x distributes R & G space over all 96 processors, while each of the 4 ph.x images has only 96/4 = 24 processors for it, and this mismatch is presumably what triggers the error.

Am I missing something? I know it's inefficient to let pw.x use so many processors, but it seems to be a requirement that ph.x and pw.x run on the same number of processors.

For reference, sketches of the job commands and input files I use (with placeholder values) follow.
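The commands below show the concrete form for the 4-node case (96 MPI ranks, matching the output quoted above); the input and output file names are placeholders, not my exact script:

    # SCF ground state: 4 nodes x 24 cores = 96 MPI ranks
    mpirun -np 96 pw.x -in scf.in > scf.out

    # Phonons: the same 96 ranks, split into 4 images of 24 ranks each
    mpirun -np 96 ph.x -nimage 4 -in ph.in > ph.out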
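A minimal sketch of my SCF input, with a simple bulk-silicon-like system standing in for my real one (prefix, structure, cutoff, and pseudopotential are all placeholders):

    &control
       calculation = 'scf'
       prefix      = 'mysystem'       ! placeholder prefix
       outdir      = './tmp/'
       pseudo_dir  = './pseudo/'
       wf_collect  = .false.          ! what I want: no wavefunction collection
    /
    &system
       ibrav = 2, celldm(1) = 10.26   ! placeholder structure
       nat = 2, ntyp = 1
       ecutwfc = 40.0
    /
    &electrons
       conv_thr = 1.0d-10
    /
    ATOMIC_SPECIES
     Si  28.086  Si.pz-vbc.UPF
    ATOMIC_POSITIONS crystal
     Si 0.00 0.00 0.00
     Si 0.25 0.25 0.25
    K_POINTS automatic
     8 8 8 0 0 0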
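And the corresponding phonon input, again with placeholder q-point grid and threshold values:

    &inputph
       prefix  = 'mysystem'           ! must match the SCF run
       outdir  = './tmp/'
       fildyn  = 'mysystem.dyn'
       ldisp   = .true.
       nq1 = 4, nq2 = 4, nq3 = 4      ! placeholder q grid
       tr2_ph  = 1.0d-14
       recover = .false.
    /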
Thank you!

-- 
Best regards,
Coiby