<div dir="ltr"><div dir="ltr">On Tue, Mar 23, 2021 at 10:28 PM Lenz Fiedler <<a href="mailto:fiedler.lenz@gmail.com">fiedler.lenz@gmail.com</a>> wrote:</div><div dir="ltr"><br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">So if I understand you correctly, you did something like<br><div> mpirun -np 18 pp.x -in <a href="http://Fe.pp.ldos.in" target="_blank">Fe.pp.ldos.in</a></div><div>on the nscf data wirh 14x14x14 and got the 560 cube files in 30 minutes? </div></div></blockquote><div><br></div><div>I computed only ONE cube file, I don't remember for which k-point grid. You may want to make a few tests with a smaller k-point grid and for a single energy. Then you can easily guess what will be the needed time for your complete case. Anyway: if you need so many energies with a dense k-point grid, it will take a lot of time no matter what.</div><div><br></div><div>Paolo<br></div><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>This is what I would expect/hope for as well. I will try this and if I cannot reproduce it there might be something wrong with my installation. <br></div><div><br></div><div>Kind regards</div><div>Lenz <br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Am So., 21. März 2021 um 16:31 Uhr schrieb Paolo Giannozzi <<a href="mailto:p.giannozzi@gmail.com" target="_blank">p.giannozzi@gmail.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>A LDOS calculation should take more or less the same time as the calculation of the charge density, plus reading, wroting, etc.. I just tried and it took half an hour on 18 processors (CPU cores, not GPUs)</div><div><br></div><div>Paolo<br> </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 17, 2021 at 5:26 PM Lenz Fiedler <<a href="mailto:fiedler.lenz@gmail.com" target="_blank">fiedler.lenz@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Dear Professor Giannozzi,</div><div><br></div><div>Thank you so much for your answer. You are right, I did not really think about the parallelization for the initial SCF calculation, I was more puzzled by the pp.x calculation. If I understand you correctly, using something like "pp.x -nk 16" might also help speed up the LDOS calculation as well? But there also might be an upper limit of how fast this performs and it could be that I simply have to wait a while to obtain the LDOS? As in, except for my suboptimal parallelization strategy, the long runtime of pp.x I observed was not due to wrong usage but simply to computational load of the problem?<br></div><div><br></div><div>Kind regards</div><div>Lenz Fiedler<br></div><div><br></div><div>PhD Student (HZDR / CASUS)</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Am Di., 16. 
On Tue, Mar 16, 2021 at 4:36 PM Paolo Giannozzi <p.giannozzi@gmail.com> wrote:

> Your supercell is small, but you have a high cutoff and a large number of k-points, which makes the overall computational load quite heavy. Moreover, your choice of parallelization is less than optimal: you should use both k-point and plane-wave parallelization, something like "mpirun -np 160 pw.x -nk 16" or so. For reference: I ran your Fe.pw.scf.in input in 20' on a small 36-core machine with two (powerful) GPUs. The nscf run took 1h20' for the calculation and as much time, if not more, to write the files. Also note that not all postprocessing calculations are optimized for large runs.
>
> Paolo

On Mon, Mar 15, 2021 at 7:23 PM Lenz Fiedler <fiedler.lenz@gmail.com> wrote:

> (Resent because something went wrong with the header line.)
>
> Hi users,
>
> I am experiencing a problem with pp.x. I want to calculate the LDOS for a 16-atom Fe supercell on a 560-point energy grid and the 3D grid given by the DFT calculation (100x100x100 grid points). I have successfully performed the DFT calculation by first doing an SCF calculation (8x8x8 k-points) and then a non-SCF calculation (14x14x14 k-points). Now I am trying to do:
>
>     mpirun -np 160 pp.x -in Fe.pp.ldos.in
>
> but the calculation takes far longer than I anticipated. The entire DFT calculation took less than a day, while for the LDOS, after about 12 hours, only 20 tmp files had been written. Am I doing something wrong, or is this expected? At this rate, calculating the LDOS would take days, which is why I assume I am doing something wrong.
>
> Please find my output and input files here:
> https://drive.google.com/drive/folders/1R-m5jlw1bcNxe3nBXjc4dms-IoMMp4Ir?usp=sharing
> (The pp.out file is from a run where I investigated whether a lower number of CPUs helps, but that was slower, as expected.)
>
> Kind regards
> Lenz
>
> PhD Student (HZDR / CASUS)
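The test Paolo suggests at the top of this thread (a smaller k-point grid, a single energy) might look something like the pp.x input below. This is a sketch under stated assumptions, not a verified input: prefix, outdir, the file names, and the energy value are placeholders, and the plot_num = 3 (LDOS) parameters emin, emax and degauss_ldos follow the INPUT_PP documentation of recent QE versions, so they should be checked against the installed release:

    &inputpp
       prefix   = 'Fe'           ! placeholder: must match the pw.x prefix
       outdir   = './tmp/'       ! placeholder: must match the pw.x outdir
       filplot  = 'Fe_ldos_test'
       plot_num = 3              ! local density of states (LDOS)
       emin     = -5.0           ! eV; setting emin = emax gives a single energy
       emax     = -5.0
       degauss_ldos = 0.1        ! broadening of the energy levels, in eV
    /
    &plot
       iflag         = 3         ! 3D plot
       output_format = 6         ! Gaussian cube file
       fileout       = 'Fe_ldos_test.cube'
    /

Such a run writes a single cube file; timing it and scaling up to the 560 energies of the full grid gives the rough estimate of the total cost that Paolo proposes.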
_______________________________________________
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222