<html>Dear Brad,<br />I can only confirm what Paolo and Michal suggested.<br />Even with InfiniBand, the efficiency of the FFT parallelization decreases drastically with each additional node, WHATEVER THE CODE (not only QE) or the library.<br />For SLURM jobs, if you request 2 nodes of 16 cores, the first 16 MPI processes are indexed 1 to 16 and the last 16 are indexed 17 to 32, which is exactly the same distribution implemented in QE for k-point, band, or image parallelization.<br />Thanks to this, I never run into trouble with the way the MPI processes are spread across the cores when the number of pools (or images) equals the number of nodes.<br />For this reason, except for large supercells at Gamma only, I always set npool=nodes.<br /><br />Regards,<br />Antoine Jay<br />LAAS CNRS<br />Toulouse France<br /><br />On Friday, November 06, 2020 01:04 CET, Michal Krompiec via users <users@lists.quantum-espresso.org> wrote:<br /> <blockquote type="cite" cite="CAOWoSSNChOH+pk6h5E1K+NOGMj7Mz_xFquwhe-KtArwg_q_uoQ@mail.gmail.com"><div dir="auto">Dear Brad,</div><div dir="auto">"Fast communications" here means InfiniBand or other RDMA. Make sure your MPI uses RDMA; I’ve seen systems where it isn’t enabled by default.
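Antoine's rule of thumb above (npool equal to the number of nodes, relying on SLURM's block placement of consecutive ranks) can be sketched as a batch script; the resource counts, input and output names, and the mpirun flavour are placeholders to adapt, not the exact setup from this thread:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16   # block placement: ranks 0-15 fill node 1, ranks 16-31 fill node 2

# With npool = nodes, each pool of 16 ranks sits entirely on one node:
# the FFT scatter/gather stays intra-node, and only the comparatively
# cheap k-point reductions cross the interconnect.
mpirun -np 32 pw.x -npool 2 -in scf.in > scf.out
```

(-nk and -npool are synonyms on the pw.x command line.)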
That said, if you use k-point parallelization, you can get away with gigabit Ethernet, as Paolo mentioned.</div><div dir="auto">Best wishes,</div><div dir="auto">Michal Krompiec</div><div dir="auto">Merck KGaA </div><div> <div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Nov 5, 2020 at 11:40 PM Baer, Bradly via users <<a href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Paolo,</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">I believe the nodes I am using have gigabit connections. There are additional nodes with 10 or 25 gigabit connections, but I don't think I would land on one of them without specifically requesting them. What communication speed would be appropriate for QE's needs?</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">I also considered setting the parallelization manually, but I don't currently know enough about SLURM to identify each node and ensure that all 16 cores assigned to a pool are on the same node.
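A minimal check for the placement described here (all 16 ranks of a pool on the same node) is to print the rank-to-node map inside the job before launching pw.x; with SLURM's default block distribution, consecutive ranks fill one node before moving to the next, which is exactly what pool parallelization wants. A sketch, assuming a 2x16 allocation and hypothetical file names:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

# Ranks 0-15 should all report the first node and ranks 16-31 the second;
# if the hostnames alternate, the distribution is cyclic and each pool
# would straddle both nodes.
srun bash -c 'echo "rank $SLURM_PROCID -> $(hostname)"' | sort -n -k2,2

# Two pools of 16 ranks, one per node.
srun pw.x -nk 2 -in scf.in > scf.out
```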
I will keep it in mind though as a possible future solution.</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Thanks,</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Brad</div><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div id="m_7302475323095056452Signature"><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">--------------------------------------------------------</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Bradly Baer</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline!important">Graduate Research Assistant, Walker Lab</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline!important">Interdisciplinary Materials Science</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Vanderbilt University</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div></div></div></div><div id="m_7302475323095056452appendonsend"> </div><hr style="display:inline-block;width:98%" /><div id="m_7302475323095056452divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Paolo Giannozzi <<a target="_blank" href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a>><br /><b>Sent:</b> 
Thursday, November 5, 2020 3:54 PM<br /><b>To:</b> Baer, Bradly <bradly.b.baer@Vanderbilt.Edu>; Quantum ESPRESSO users Forum <<a target="_blank" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>></font></div></div><div dir="ltr"><div id="m_7302475323095056452divRplyFwdMsg" dir="ltr"><br /><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>Subject:</b> Re: [QE-users] Running efficiently on multiple nodes</font><div> </div></div><div><div dir="ltr"><div>Are there fast communications between the two nodes? If not, the parallel distributed 3D FFT will be very slow (note the time taken by fft_scatt_yz). You might find it convenient to exploit k-point parallelization, which requires much less communication: for instance, "mpirun -n 32 pw.x -nk 2" (2 pools of 16 processors, each pool performing a parallel FFT), but you have to figure out a way to get the first pool of 16 processes onto node 1 and the second onto node 2 (or vice versa; what matters is that FFT parallelization happens inside a node and k-point parallelization across nodes)</div><div> </div><div>Paolo</div></div> <div><div dir="ltr">On Thu, Nov 5, 2020 at 7:29 PM Baer, Bradly via users <<a target="_blank" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>> wrote:</div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Paolo,</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Thank you for your suggestion. I will add recompiling to move to 6.6 to my to-do list. For now, I corrected the pseudopotential files as you indicated and the calculation ran successfully.
It has become slightly faster, but still much slower than running on a single node (3:30s vs 0:30s). Is there more that I should be doing to improve performance or is my test problem too small to see the benefits of parallelization?</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Thanks,</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Brad </div><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div id="m_7302475323095056452x_gmail-m_-4475069457157334453Signature"><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">--------------------------------------------------------</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Bradly Baer</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline">Graduate Research Assistant, Walker Lab</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline">Interdisciplinary Materials Science</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Vanderbilt University</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div></div></div></div><div id="m_7302475323095056452x_gmail-m_-4475069457157334453appendonsend"> </div><hr style="display:inline-block;width:98%" /><div 
id="m_7302475323095056452x_gmail-m_-4475069457157334453divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> users <<a target="_blank" href="mailto:users-bounces@lists.quantum-espresso.org">users-bounces@lists.quantum-espresso.org</a>> on behalf of Paolo Giannozzi <<a target="_blank" href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a>><br /><b>Sent:</b> Thursday, November 5, 2020 10:01 AM<br /><b>To:</b> Quantum ESPRESSO users Forum <<a target="_blank" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>><br /><b>Subject:</b> Re: [QE-users] Running efficiently on multiple nodes</font><div> </div></div><div><div dir="ltr"><div dir="ltr">On Thu, Nov 5, 2020 at 3:05 PM Baer, Bradly <<a target="_blank" href="mailto:bradly.b.baer@vanderbilt.edu">bradly.b.baer@vanderbilt.edu</a>> wrote:</div><div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div><i>Pseudo file Ga.pbe-dn-kjpaw_psl.1.0.0.UPF has been fixed on the fly.</i></div><div><i>To avoid this message in the future, permanently fix </i></div><div><i> your pseudo files following these instructions: </i></div><div><i><a target="_blank" href="https://gitlab.com/QEF/q-e/blob/master/upftools/how_to_fix_upf.md">https://gitlab.com/QEF/q-e/blob/master/upftools/how_to_fix_upf.md</a></i></div></div></div></blockquote><div> </div><div>This is a possible source of trouble if the output
directory is not visible to all processors. Please try one of the following:</div><div>- do what is suggested there (or simply edit Ga.pbe-dn-kjpaw_psl.1.0.0.UPF and replace all occurrences of "&" with "&amp;")</div><div>- get version 6.6, which reads the pseudopotential file on one processor and broadcasts its contents to all other processes</div><div>- get the development version, which in addition is not sensitive to the presence of a nonstandard "&" in the files.</div><div> </div><div>Paolo</div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><br />-Brad</div><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div id="m_7302475323095056452x_gmail-m_-4475069457157334453x_gmail-m_8263478242330334081Signature"><div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">--------------------------------------------------------</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Bradly Baer</div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline">Graduate Research Assistant, Walker Lab</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><span style="font-family:Calibri,Arial,Helvetica,sans-serif;background-color:rgb(255,255,255);display:inline">Interdisciplinary Materials Science</span></div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Vanderbilt University</div><div
style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> </div></div></div></div><div id="m_7302475323095056452x_gmail-m_-4475069457157334453x_gmail-m_8263478242330334081appendonsend"> </div><hr style="display:inline-block;width:98%" /><div id="m_7302475323095056452x_gmail-m_-4475069457157334453x_gmail-m_8263478242330334081divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> users <<a target="_blank" href="mailto:users-bounces@lists.quantum-espresso.org">users-bounces@lists.quantum-espresso.org</a>> on behalf of Paolo Giannozzi <<a target="_blank" href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a>><br /><b>Sent:</b> Thursday, November 5, 2020 2:33 AM<br /><b>To:</b> Quantum ESPRESSO users Forum <<a target="_blank" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a>><br /><b>Subject:</b> Re: [QE-users] Running efficiently on multiple nodes</font><div> </div></div><div><div dir="ltr"><div dir="ltr">On Wed, Nov 4, 2020 at 11:28 PM Baer, Bradly <<a target="_blank" href="mailto:bradly.b.baer@vanderbilt.edu">bradly.b.baer@vanderbilt.edu</a>> wrote:</div><div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Now that I have two nodes, the script for a single node results in a crash shortly after reading in the pseudopotentials.</div></blockquote><div> </div><div>which version of QE are you using, and which crash do you obtain, with which executable?</div></div>Paolo<div>--<div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,<br />Univ. 
<a href="https://www.google.com/maps/search/Udine,+via+delle+Scienze+208,+33100+Udine,+Italy?entry=gmail&source=g">Udine, via delle Scienze 208, 33100 Udine, Italy</a><br />Phone +39-0432-558216, fax +39-0432-558222<br /> </div></div></div></div></div></div></div></div></div>_______________________________________________<br />Quantum ESPRESSO is supported by MaX (<a rel="noreferrer" target="_blank" href="http://www.max-centre.eu">www.max-centre.eu</a>)<br />users mailing list <a target="_blank" href="mailto:users@lists.quantum-espresso.org"> users@lists.quantum-espresso.org</a><br /><a rel="noreferrer" target="_blank" href="https://lists.quantum-espresso.org/mailman/listinfo/users">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote></div><br clear="all" /><br />--<div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,<br />Univ.
<a href="https://www.google.com/maps/search/Udine,+via+delle+Scienze+208,+33100+Udine,+Italy?entry=gmail&source=g">Udine, via delle Scienze 208, 33100 Udine, Italy</a><br />Phone +39-0432-558216, fax +39-0432-558222<br /> </div></div></div></div></div></div></div></div>_______________________________________________<br />Quantum ESPRESSO is supported by MaX (<a rel="noreferrer" target="_blank" href="http://www.max-centre.eu">www.max-centre.eu</a>)<br />users mailing list <a target="_blank" href="mailto:users@lists.quantum-espresso.org"> users@lists.quantum-espresso.org</a><br /><a rel="noreferrer" target="_blank" href="https://lists.quantum-espresso.org/mailman/listinfo/users">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote></div><br clear="all" /><br />--<div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,<br />Univ.
<a href="https://www.google.com/maps/search/Udine,+via+delle+Scienze+208,+33100+Udine,+Italy?entry=gmail&source=g">Udine, via delle Scienze 208, 33100 Udine, Italy</a><br />Phone +39-0432-558216, fax +39-0432-558222<br /> </div></div></div></div></div></div></div>_______________________________________________<br />Quantum ESPRESSO is supported by MaX (<a rel="noreferrer" target="_blank" href="http://www.max-centre.eu">www.max-centre.eu</a>)<br />users mailing list <a target="_blank" href="mailto:users@lists.quantum-espresso.org">users@lists.quantum-espresso.org</a><br /><a rel="noreferrer" target="_blank" href="https://lists.quantum-espresso.org/mailman/listinfo/users">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote></div></div></blockquote><br /><br /><br /> </html>