[QE-users] Same run not accelerated when starting from converged rho and wf

Pietro Davide Delugas pdelugas at sissa.it
Thu Aug 20 14:22:51 CEST 2020


Sorry, I missed your answer.

Maybe we are not loading the PAW part of the density; it is still
written in a separate file.

We should open an issue on GitLab, otherwise we will forget about it again.
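
A quick test, as a sketch only (not a confirmed fix): bypass the
atomic-plus-drho startup entirely with startingpot='file', which reads
the full converged charge density from the .save directory:

&ELECTRONS
   startingwfc = 'file'   ! reuse the converged wavefunctions of run 1
   startingpot = 'file'   ! full converged rho instead of atomic + input_drho
/

If the PAW run then resumes at ~1.d-11 accuracy, the missing PAW term
in the drho path is the likely culprit.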

Pietro

On 8/18/20 10:45 AM, Antoine Jay wrote:
> Dear Pietro,
> This always happens, not only in library mode.
> It does not happen with US pseudopotentials.
> In the attached files: 1st step and 2nd step with US and PAW (and a
> PAW run with high diagonalization accuracy for Stefano).
> The 2nd step is a relax, so you can see that the sequence
>         "Initial potential from superposition of free atoms"
>     + "a scf correction to at. rho is read from rho.in" (rho.in is
>       obtained from output_drho of the 1st job, so it is the
>       scf-minus-atomic correction)
>     + "Starting wfcs from file"
> is not equivalent (in the PAW case) to the potential extrapolation
> "NEW-OLD atomic charge density approx. for the potential"
>
> Regards,
>
> Antoine Jay
> LAAS-CNRS
> Toulouse, France
>
>
> On Monday, August 17, 2020 at 09:36 CEST, Pietro Delugas
> <pdelugas at sissa.it> wrote:
>
> Hi
>
> It's strange. Have you checked whether this happens only in library
> mode or always? What if you use USPP instead of PAW?
>
>
> *From:* Antoine Jay <ajay at laas.fr>
> *Sent:* Monday, August 17, 2020 9:16 AM
> *To:* Quantum ESPRESSO users Forum <users at lists.quantum-espresso.org>
> *Subject:* Re: [QE-users] Same run not accelerated when starting from
> converged rho and wf
>
> Dear Stefano,
> I reduced diago_thr_init to 1.0D-13, but it changed nothing in the
> first estimated scf accuracy:
>
>    Initial potential from superposition of free atoms
>      a scf correction to at. rho is read from
> ./calcforces_QE/step0000/initialization/initial_bassin/relax//results/dbed.relax.save/rho.in
>      negative rho (up, down):  1.432E-02 0.000E+00
>      Starting wfcs from file
>      Checking if some PAW data can be deallocated...
>        PAW data deallocated on    4 nodes for type:  1
>        PAW data deallocated on   32 nodes for type:  2
>        PAW data deallocated on   35 nodes for type:  3
>      total cpu time spent up to now is       29.8 secs
>      Self-consistent Calculation
>      iteration #  1     ecut=    58.00 Ry     beta= 0.10
>      Davidson diagonalization with overlap
>      c_bands:  5 eigenvalues not converged
>      ethr =  1.00E-13,  avg # of iterations = 40.0
>      negative rho (up, down):  1.432E-02 0.000E+00
>      total cpu time spent up to now is      813.3 secs
>      total energy              =  -11390.75602693 Ry
>      Harris-Foulkes estimate   =  -11390.75013390 Ry
>      estimated scf accuracy    <       0.00021610 Ry
>      iteration #  2     ecut=    58.00 Ry     beta= 0.10
>      Davidson diagonalization with overlap
>      ethr =  2.16E-08,  avg # of iterations =  1.0
> ... and the rest of the file is the same as the qe.0002.out I sent.
>
> Please note that I start from the wfcs of exactly the same
> configuration and parameters by using startingwfc='file',
> but I do not use startingpot='file'.
> Instead, I have activated input_drho='rho.in'.
> This 'rho.in' file is a copy of the 'rho.out' file that was created by
> activating the parameter output_drho='rho.out'.
> This trick is done because sometimes the atomic positions have a small
> variation (but not in this case).
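>
> As a reference, here is a minimal sketch of the relevant part of the
> second-step input as I use it (I place input_drho in &ELECTRONS; the
> parameter is flagged obsolescent, so check INPUT_PW for its correct
> namelist in your QE version):
>
> &ELECTRONS
>    startingwfc    = 'file'     ! reuse wfc*.dat written by the first run
>    diago_thr_init = 1.0d-13    ! the tight first diagonalization tested above
>    input_drho     = 'rho.in'   ! copy of rho.out produced via output_drho in run 1
> /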
>
> Please note that I have already tried this trick on smaller systems
> plenty of times and never saw this problem. This is why I wonder
> whether it is a problem of the system size or of the PAW
> pseudopotentials, for which, for example, the becsum could be computed
> badly when some PAW data are deallocated:
> "Checking if some PAW data can be deallocated...
>        PAW data deallocated on    4 nodes for type:  1
>        PAW data deallocated on   32 nodes for type:  2
>        PAW data deallocated on   35 nodes for type:  3"
>
>
> Regards,
>
> Antoine Jay
> LAAS-CNRS
> Toulouse, France
>
>
> On Sunday, August 16, 2020 at 23:20 CEST, Stefano de Gironcoli
> <degironc at sissa.it> wrote:
>
> 6.d-9 is still too large; it should be something like 1.d-13 to aim
> at a smaller scf estimate.
>
> Are you really starting from the scf charge and wfcs of the same
> configuration?
>
> stefano
>
> On 16/08/20 22:05, Antoine Jay wrote:
>
>     Dear Stefano,
>     Adding diago_thr_init=1.0D-8 does not change the first conv_thr
>     (except the average # of iterations).
>     As you said, the first value 1.0D-2 is detected to be too large
>     and is updated to 6.0D-9, so I don't see why manually changing the
>     first value would change anything if it is already changed
>     automatically...
>
>     Antoine Jay
>     LAAS-CNRS
>     Toulouse, France
>
>     On Saturday, August 15, 2020 at 17:10 CEST, Stefano de Gironcoli
>     <degironc at sissa.it> wrote:
>
>     Hi Antoine,
>
>       I don't know exactly why you get this result, but one thing you
>     can try is to set diago_thr_init ~ conv_thr/Nelec/10 so that the
>     first diagonalization is pushed tighter (if the wfcs are already
>     very good, it should not take too many iterations) and the
>     computed dr2 estimate should be more faithful.
>
>       Right now diago_thr_init is 1.d-2, then updated to 6.e-9, which
>     is consistent with conv_thr ~ 6.d-5...
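>
>     In numbers, a minimal sketch of this suggestion (Nelec = 1000 is
>     a placeholder, not your actual electron count; substitute yours):
>
>     &ELECTRONS
>        conv_thr       = 6.0d-5
>        diago_thr_init = 6.0d-9   ! ~ conv_thr / Nelec / 10 with Nelec = 1000
>     /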
>
>       I don't know; you can try.
>
>       stefano
>
>     On 14/08/20 17:09, Antoine Jay wrote:
>
>         Dear all,
>
>         I'm doing two consecutive scf calculations with exactly the
>         same structure and parameters by calling QE 6.5 as a library
>         (see the attached output files).
>         For the second call, I use the options
>         startingwfc='file' and input_drho='rho.in',
>         where these inputs are the converged wfc1.dat and
>         charge-density.dat of the first step.
>         Here I face two problems:
>
>         - I expected the initial scf accuracy to be 10^-11, as
>         obtained at the end of the first step, but it is only 10^-4.
>         How can such a loss be explained? I generally lose only 2
>         orders of magnitude by doing this.
>
>         - Even with fewer scf iterations, the CPU time is larger.
>         Is it possible that some extra memory is allocated by QE when
>         input rho and wfc are requested, and not deallocated?
>
>         Note that until now, I have seen these troubles only when
>         using PAW pseudopotentials on big systems.
>
>         Regards,
>
>         Antoine Jay
>         LAAS-CNRS
>         Toulouse France
>
> _______________________________________________
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
> users mailing list users at lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users

