[QE-users] pp.x calculation very slow

Paolo Giannozzi p.giannozzi at gmail.com
Wed Mar 24 14:51:04 CET 2021


On Tue, Mar 23, 2021 at 10:28 PM Lenz Fiedler <fiedler.lenz at gmail.com>
wrote:

> So if I understand you correctly, you did something like
>    mpirun -np 18 pp.x -in Fe.pp.ldos.in
> on the nscf data with the 14x14x14 k-point grid and got the 560 cube files
> in 30 minutes?
>

I computed only ONE cube file; I don't remember for which k-point grid. You
may want to run a few tests with a smaller k-point grid and for a single
energy. Then you can easily estimate the time needed for your complete case.
Anyway: if you need so many energies with a dense k-point grid, it will take
a lot of time no matter what.
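
For a quick test you could use an input along these lines (prefix, outdir,
file names, and energies below are placeholders to be adapted to your case;
if I remember correctly, plot_num=3 with emin/emax/degauss_ldos gives the
energy-resolved LDOS in recent pp.x versions; please check Doc/INPUT_PP for
your release):

   &INPUTPP
      prefix       = 'Fe'          ! placeholder: same prefix as the nscf run
      outdir       = './tmp'       ! placeholder: same outdir as the nscf run
      filplot      = 'Fe_ldos_test'
      plot_num     = 3             ! local density of states
      emin         = 5.0           ! placeholder energy window (eV):
      emax         = 5.0           ! emin = emax, i.e. a single energy
      degauss_ldos = 0.1           ! broadening (eV), placeholder
   /
   &PLOT
      iflag         = 3            ! 3D plot
      output_format = 6            ! Gaussian cube file
      fileout       = 'Fe_ldos_test.cube'
   /

   mpirun -np 18 pp.x -in Fe.pp.ldos.test.in > Fe.pp.ldos.test.out

Time one such run, then scale with the number of energies (and with the
k-point grid) to estimate the full calculation.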

Paolo

> This is what I would expect/hope for as well. I will try this and if I
> cannot reproduce it there might be something wrong with my installation.
>
> Kind regards
> Lenz
>
>
> On Sun, Mar 21, 2021 at 16:31, Paolo Giannozzi <
> p.giannozzi at gmail.com> wrote:
>
>> An LDOS calculation should take more or less the same time as the
>> calculation of the charge density, plus reading, writing, etc. I just
>> tried, and it took half an hour on 18 processors (CPU cores, not GPUs).
>>
>> Paolo
>>
>> On Wed, Mar 17, 2021 at 5:26 PM Lenz Fiedler <fiedler.lenz at gmail.com>
>> wrote:
>>
>>> Dear Professor Giannozzi,
>>>
>>> Thank you so much for your answer. You are right, I did not really think
>>> about the parallelization for the initial SCF calculation; I was more
>>> puzzled by the pp.x calculation. If I understand you correctly, using
>>> something like "pp.x -nk 16" might also help speed up the LDOS
>>> calculation? But there might also be an upper limit on how fast this can
>>> run, and it could be that I simply have to wait a while to obtain the
>>> LDOS? In other words, apart from my suboptimal parallelization strategy,
>>> the long runtime of pp.x I observed was not due to wrong usage but simply
>>> to the computational load of the problem?
>>>
>>> Kind regards
>>> Lenz Fiedler
>>>
>>> PhD Student (HZDR / CASUS)
>>>
>>>
>>> On Tue, Mar 16, 2021 at 16:36, Paolo Giannozzi <
>>> p.giannozzi at gmail.com> wrote:
>>>
>>>> Your supercell is small, but you have a high cutoff and a large number
>>>> of k-points, which makes the overall computational load quite heavy.
>>>> Moreover, your choice of parallelization is less than optimal: you should
>>>> use both k-point and plane-wave parallelization, something like "mpirun
>>>> -np 160 pw.x -nk 16" or so. For reference: I ran your Fe.pw.scf.in input
>>>> in 20' on a small 36-core machine with two (powerful) GPUs. The nscf
>>>> calculation took 1h20' and is taking as much time, if not more, to write
>>>> the files. Also note that not all postprocessing calculations are
>>>> optimized for large runs.
>>>>
>>>> Paolo
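
To be concrete about the pool parallelization suggested above, the two runs
would look like this (the nscf input file name is a placeholder):

   mpirun -np 160 pw.x -nk 16 -in Fe.pw.scf.in  > Fe.pw.scf.out
   mpirun -np 160 pw.x -nk 16 -in Fe.pw.nscf.in > Fe.pw.nscf.out

"-nk 16" splits the 160 MPI processes into 16 pools of 10 processes each:
k-points are distributed over the pools, while the plane waves and FFTs are
distributed over the 10 processes within each pool.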
>>>>
>>>> On Mon, Mar 15, 2021 at 7:23 PM Lenz Fiedler <fiedler.lenz at gmail.com>
>>>> wrote:
>>>>
>>>>> (resent because something went wrong with the header line)
>>>>>
>>>>> Hi users,
>>>>>
>>>>> I am experiencing a problem with pp.x. I want to calculate the LDOS
>>>>> for a 16-atom Fe supercell using a 560-point energy grid and the 3D grid
>>>>> given by the DFT calculation (100x100x100 grid points). I have
>>>>> successfully performed the DFT calculation by first doing an SCF
>>>>> calculation (8x8x8 k-points) and then a non-SCF calculation (14x14x14
>>>>> k-points). Now I am trying to do:
>>>>>
>>>>> mpirun -np 160 pp.x -in Fe.pp.ldos.in
>>>>>
>>>>> but the calculation takes way longer than I anticipated. The entire
>>>>> DFT calculation took less than a day, while for the LDOS, after about 12
>>>>> hours, only 20 tmp files had been written. Am I doing something wrong? Or
>>>>> is this expected? At this rate, calculating the LDOS would take days, which
>>>>> is why I am assuming I am doing something wrong.
>>>>> Please find my output and input files here:
>>>>>
>>>>>
>>>>> https://drive.google.com/drive/folders/1R-m5jlw1bcNxe3nBXjc4dms-IoMMp4Ir?usp=sharing
>>>>> (the pp.out file is from a run in which I investigated whether a
>>>>> smaller number of CPUs would help, but that was slower, as expected.)
>>>>>
>>>>> Kind regards
>>>>> Lenz
>>>>>
>>>>> PhD Student (HZDR / CASUS)



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222