[Pw_forum] Pw_forum Digest, Vol 120, Issue 27
surender at iitk.ac.in
Tue Aug 1 12:14:10 CEST 2017
> Send Pw_forum mailing list submissions to
> pw_forum at pwscf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://pwscf.org/mailman/listinfo/pw_forum
> or, via email, send a message with subject or body 'help' to
> pw_forum-request at pwscf.org
>
> You can reach the person managing the list at
> pw_forum-owner at pwscf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pw_forum digest..."
>
>
> Today's Topics:
>
> 1. phonon accuracy and ecutrho for ultrasoft pseudopotential (balabi)
> 2. Re: Qe-6.1 giving different energy values on different PCs
> (Lorenzo Paulatto)
> 3. Re: Qe-6.1 giving different energy values on different PCs
> (Rajesh)
> 4. Re: Qe-6.1 giving different energy values on different PCs
> (Lorenzo Paulatto)
> 5. Re: Qe-6.1 giving different energy values on different PCs
> (Rajesh)
> 6. Re: Qe-6.1 giving different energy values on different PCs
> (Paolo Giannozzi)
> 7. Re: phonon accuracy and ecutrho for ultrasoft pseudopotential
> (Nicola Marzari)
> 8. shifting of zero point (Md. Masud Rana)
> 9. Phonon-calculation (surender at iitk.ac.in)
> 10. Re: Phonon-calculation (Paolo Giannozzi)
> 11. electron-phonon (Nadeem Natt)
> 12. Re: Qe-6.1 giving different energy values on different PCs
> (Paolo Giannozzi)
> 13. Re: Qe-6.1 giving different energy values on different PCs
> (Rajesh)
> 14. Re: Qe-6.1 giving different energy values on different PCs
> (Paolo Giannozzi)
> 15. B3LYP - (k+q) points (Alexandra Davila)
> 16. Re: Qe-6.1 giving different energy values on different PCs
> (Rajesh)
> 17. Re: B3LYP - (k+q) points (Lorenzo Paulatto)
> 18. spin-dependent electron-phonon coefficients
> (Małgorzata Wawrzyniak-Adamczewska)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 30 Jul 2017 23:03:33 +0800
> From: balabi <balabi at qq.com>
> Subject: [Pw_forum] phonon accuracy and ecutrho for ultrasoft
> pseudopotential
> To: pw_forum <pw_forum at pwscf.org>
> Message-ID: <8C98AAC5-F4AF-4320-A43B-6AF590DEC3C7 at qq.com>
> Content-Type: text/plain; charset="us-ascii"
>
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/5cd1fc99/attachment-0001.html
>
> ------------------------------
>
> Message: 2
> Date: Sun, 30 Jul 2017 18:03:17 +0200
> From: Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAG+GtJfgsiTWTKXWU5eFH-f4vSBVkAo0vuRrKke=3Beyppo10g at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Rajesh,
> if you want to have a meaningful answer you need to provide some useful
> information. At the very least, the full output in all the different
> cases.
>
> Kind regards
>
> --
> Lorenzo Paulatto
> Written on a virtual keyboard with real fingers
>
> On Jul 30, 2017 05:11, "Rajesh" <creativeidleminds at gmail.com> wrote:
>
> Dear Users
> I ran the same input script (vc-relax) on different PCs with different
> numbers of processors (24 and 70), but the energy values I get are
> different: on 24 CPUs the energy is higher than on 70 CPUs, and the 70-CPU
> run completes more cycles than the 24-CPU run. At the end of the output on
> the 24-CPU PC I get "JOB DONE." Is the job really complete? Why is this
> happening? Did the job finish prematurely?
>
>
> Thank you.
>
>
> Rajesh
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/cf7d9a4a/attachment-0001.html
>
> ------------------------------
>
> Message: 3
> Date: Sun, 30 Jul 2017 21:41:58 +0530
> From: Rajesh <creativeidleminds at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAF52KGdy67cXQE-xZiSNVL1Vv70oidxVNiyxP=uxM-6cwZOaUQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Dr Paulatto
> After careful evaluation, I found a message saying "Maximum CPU time
> exceeded". Is that the reason the simulation stopped prematurely? I have
> attached the last part of my output:
>
> iteration # 1 ecut= 30.00 Ry beta=0.70
> CG style diagonalization
> c_bands: 2 eigenvalues not converged
> c_bands: 4 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
> c_bands: 4 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
> c_bands: 4 eigenvalues not converged
> c_bands: 5 eigenvalues not converged
> c_bands: 5 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
>
> Maximum CPU time exceeded
>
> max_seconds = 86400.00
> elapsed seconds = 86714.24
> Calculation stopped in k-point loop, point # 10
> Calculation stopped in scf loop at iteration # 0
>
> Writing output data file BNH3_STW.save
>
> init_run : 158.44s CPU 227.15s WALL ( 1 calls)
> electrons : 60209.16s CPU 84809.71s WALL ( 5 calls)
> update_pot : 52.88s CPU 57.17s WALL ( 4 calls)
> forces : 595.14s CPU 688.06s WALL ( 4 calls)
> stress : 826.83s CPU 919.93s WALL ( 4 calls)
>
> Called by init_run:
> wfcinit : 145.78s CPU 212.19s WALL ( 1 calls)
> wfcinit:wfcr : 144.95s CPU 211.35s WALL ( 20 calls)
> potinit : 1.35s CPU 2.59s WALL ( 1 calls)
>
> Called by electrons:
> c_bands : 58214.37s CPU 81913.52s WALL ( 36 calls)
> sum_band : 1897.42s CPU 2792.40s WALL ( 35 calls)
> v_of_rho : 25.12s CPU 41.02s WALL ( 40 calls)
> v_h : 2.46s CPU 4.36s WALL ( 40 calls)
> v_xc : 27.15s CPU 43.89s WALL ( 48 calls)
> newd : 77.36s CPU 88.65s WALL ( 40 calls)
> mix_rho : 4.63s CPU 7.61s WALL ( 35 calls)
>
> Called by c_bands:
> init_us_2 : 30.88s CPU 31.00s WALL ( 1590 calls)
> ccgdiagg : 51949.04s CPU 73007.34s WALL ( 916 calls)
> wfcrot : 6372.47s CPU 9043.56s WALL ( 846 calls)
>
> Called by sum_band:
> sum_band:bec : 1.40s CPU 1.40s WALL ( 700 calls)
> addusdens : 89.68s CPU 99.21s WALL ( 35 calls)
>
> Called by *cgdiagg:
> h_psi : 27157.32s CPU 38443.52s WALL ( 722577 calls)
> s_psi : 11265.84s CPU 11329.93s WALL ( 1444308 calls)
> cdiaghg : 39.41s CPU 46.63s WALL ( 846 calls)
>
> Called by h_psi:
> h_psi:pot : 27118.42s CPU 38404.08s WALL ( 722577 calls)
> h_psi:calbec : 9964.93s CPU 13225.79s WALL ( 722577 calls)
> vloc_psi : 11130.60s CPU 19108.08s WALL ( 722577 calls)
> add_vuspsi : 6018.79s CPU 6065.72s WALL ( 722577 calls)
> h_1psi : 28827.80s CPU 38539.59s WALL ( 721731 calls)
>
> General routines
> calbec : 19693.29s CPU 26378.43s WALL ( 1445408 calls)
> fft : 48.26s CPU 94.01s WALL ( 725 calls)
> ffts : 1.25s CPU 2.49s WALL ( 75 calls)
> fftw : 9871.72s CPU 18370.16s WALL ( 1790302 calls)
> interpolate : 5.62s CPU 10.36s WALL ( 75 calls)
> davcio : 0.00s CPU 0.22s WALL ( 20 calls)
>
> Parallel routines
> fft_scatter : 7695.82s CPU 16215.42s WALL ( 1791102 calls)
>
> PWSCF : 0d 17h10m CPU 1d 0h 5m WALL
>
>
> This run was terminated on: 21:35:26 28Jul2017
>
> =------------------------------------------------------------------------------=
> JOB DONE.
> =------------------------------------------------------------------------------=
> -------------------------------------------------------
> Primary job terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> -------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun detected that one or more processes exited with non-zero status,
> thus causing
> the job to be terminated. The first process to do so was:
>
> Process name: [[10099,1],10]
> Exit code: 2
> --------------------------------------------------------------------------
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/91c28f6c/attachment-0001.html
>
> ------------------------------
>
> Message: 4
> Date: Sun, 30 Jul 2017 18:58:31 +0200
> From: Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAG+GtJc3QUX8ns728-8J5+rJQEzu_Vmb5Wyvngdf8Ks7UL+fkQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Jul 30, 2017 18:13, "Rajesh" <creativeidleminds at gmail.com> wrote:
>
> Dear Dr Paulatto
> After careful evaluation, I found a message saying "Maximum CPU time
> exceeded". Is that the reason the simulation stopped prematurely? I have
> attached the last part of my output.
>
>
>
> Yes,
> Kind regards
>
>
>
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/cdc062e8/attachment-0001.html
>
> ------------------------------
>
> Message: 5
> Date: Sun, 30 Jul 2017 22:32:55 +0530
> From: Rajesh <creativeidleminds at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAF52KGdHYDS5aK4r=h_2RzdD+cMW0U9Ev50u11YP9WF-jAqiVQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Why is this happening? Is it a memory shortage?
>
> On Jul 30, 2017 22:28, "Lorenzo Paulatto" <lorenzo.paulatto at impmc.upmc.fr>
> wrote:
>
>>
>>
>> On Jul 30, 2017 18:13, "Rajesh" <creativeidleminds at gmail.com> wrote:
>>
>> Dear Dr Paulatto
>> After careful evaluation, I found a message saying "Maximum CPU time
>> exceeded". Is that the reason the simulation stopped prematurely? I have
>> attached the last part of my output.
>>
>>
>>
>> Yes,
>> Kind regards
>>
>>
>>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/409f1390/attachment-0001.html
>
> ------------------------------
>
> Message: 6
> Date: Sun, 30 Jul 2017 19:05:46 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAPMgbCsZ_5FCDj1RPdxaQekpwZG=GSN4TjWJSzmia0FTL40CHg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Sun, Jul 30, 2017 at 7:02 PM, Rajesh <creativeidleminds at gmail.com>
> wrote:
>
> Why is this happening? Is it a memory shortage?
>>
>
> didn't you read (or didn't you understand) this?
>
>
>> if you want to have a meaningful answer you need to provide some useful
>>> information. At the very least, the full output in all the different
>>> cases.
>>>
>>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170730/9cae46ff/attachment-0001.html
>
> ------------------------------
>
> Message: 7
> Date: Sun, 30 Jul 2017 19:15:03 +0200
> From: Nicola Marzari <nicola.marzari at epfl.ch>
> Subject: Re: [Pw_forum] phonon accuracy and ecutrho for ultrasoft
> pseudopotential
> To: PWSCF Forum <pw_forum at pwscf.org>, balabi <balabi at qq.com>
> Message-ID: <75a391dd-c00e-e594-1b90-b16a2a60aaf7 at epfl.ch>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
> The energy is variational with respect to the wfcs, so a first-order error
> on the wfcs becomes just a second-order error on the energy.
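In symbols (a sketch of the standard variational argument, with E[ψ] the total-energy functional and ψ₀ its minimizer; not part of the original message):

```latex
E[\psi_0 + \delta\psi]
  = E[\psi_0]
  + \underbrace{\left.\frac{\delta E}{\delta\psi}\right|_{\psi_0}\,\delta\psi}_{=\,0\ \text{at the minimum}}
  + O\!\left(\lVert\delta\psi\rVert^{2}\right),
```

so an error of first order in the wavefunctions shifts the energy only at second order, while derived quantities such as phonon frequencies do not enjoy this protection.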
>
> For everything else you need tighter convergence on self-consistency.
>
> In addition, higher plane-wave cutoffs help ensure that the energy of the
> system does not change when it is rigidly translated, thus recovering the
> zero-frequency acoustic modes at Gamma.
>
> As for the cutoffs: they are all studied and detailed on the SSSP page,
> mentioned many times.
>
> nicola
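A practical complement to this advice: a common way to pick ecutrho is to scan it and keep the smallest value beyond which the total energy stops changing, then re-check the quantity one actually cares about. A minimal sketch (the helper name and the sample energies are illustrative, not from a real run):

```python
def pick_ecutrho(scan, tol_ry=1e-4):
    """Return the smallest ecutrho (Ry) whose total energy differs from
    the next denser value by less than tol_ry; None if not converged.

    scan: iterable of (ecutrho, total_energy_ry) pairs, in any order.
    """
    scan = sorted(scan)
    for (rho, e), (_, e_next) in zip(scan, scan[1:]):
        if abs(e_next - e) < tol_ry:
            return rho
    return None

# hypothetical total energies from an ecutrho scan (illustrative numbers)
scan = [(150, -22.8041), (250, -22.8096), (350, -22.8099), (450, -22.8099)]
print(pick_ecutrho(scan))  # -> 350
```

Since phonon frequencies converge more slowly than the variational total energy, the same scan should be repeated directly on the frequencies (or stresses) before trusting a cutoff chosen from the energy alone.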
>
> On 30/07/2017 17:03, balabi wrote:
>> Dear developers,
>> I read this tutorial link
>> http://www.fisica.uniud.it/~giannozz/QE-Tutorial/tutorial_disp.html
>> It mentions that if we use an ultrasoft pseudopotential, we have to
>> use a much higher ecutrho and a tighter tr2_ph.
>> I have a few questions:
>> 1. What is the reason for the higher ecutrho and the tighter tr2_ph?
>> Any references?
>> 2. Do we need a high ecutrho for PAW as well?
>> 3. I tried to calculate the diamond phonon dispersion on a (4 4 4) grid
>> with C.pz-rrkjus.UPF, but I found that the phonon dispersions for
>> ecutrho=150 and ecutrho=450 are exactly the same. Why then does the
>> tutorial suggest an ecutrho as high as 450? Are there other criteria
>> for a suitable ecutrho that I have missed?
>> 4. For an arbitrary ultrasoft pseudopotential, how can one quickly
>> determine a suitable ecutrho?
>>
>> best regards
>>
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum at pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
> --
> ----------------------------------------------------------------------
> Prof Nicola Marzari, Chair of Theory and Simulation of Materials, EPFL
> Director, National Centre for Competence in Research NCCR MARVEL, EPFL
> http://theossrv1.epfl.ch/Main/Contact http://nccr-marvel.ch/en/project
>
>
> ------------------------------
>
> Message: 8
> Date: Mon, 31 Jul 2017 09:59:37 +0600
> From: "Md. Masud Rana" <masud at eee.kuet.ac.bd>
> Subject: [Pw_forum] shifting of zero point
> To: PWSCF Forum <Pw_forum at pwscf.org>
> Message-ID:
> <CAA5YwXBnGffrXkGkjX6=BAR9f-E+eH_WLEL4mnzivHXKyP3-Mw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear users,
> I performed some DOS calculations for a single-layer graphene supercell,
> but the zero point is slightly shifted to the right compared with the
> unit-cell result. What should I do? Please advise.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170731/214f5af3/attachment-0001.html
>
> ------------------------------
>
> Message: 9
> Date: Mon, 31 Jul 2017 11:46:41 +0530
> From: surender at iitk.ac.in
> Subject: [Pw_forum] Phonon-calculation
> To: pw_forum at pwscf.org
> Message-ID:
> <77afa3870d569030978325f5b80c53a5.squirrel at webmail1.iitk.ac.in>
> Content-Type: text/plain;charset=UTF-8
>
> Dear users
>
> I am running a phonon calculation as a test for the MgB2 system. I keep
> the same number of processors for the scf and the phonon calculation, but
> the calculation crashed with this error:
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> task # 34
> from phq_readin : error # 5010
> reading inputph namelist
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> My input file is
>
> Electron-phonon coefficients
> &inputph
> tr2_ph=1.0d-14,
> prefix='MgB2',
> fildvscf='elph-MgB2-dv',
> amass(1)=1.008,
> amass(2)=6.940,
> fildyn='elph-MgB2.dyn',
> elph=.true.,
> trans=.true.,
> ldisp=.true.,
> nq1=4, nq2=4, nq3=4
> /
>
> Could anybody please help me rectify this error? Thank you in advance.
>
>
> Surender
> IIT Kanpur
>
>
> ------------------------------
>
> Message: 10
> Date: Mon, 31 Jul 2017 08:25:03 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] Phonon-calculation
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAPMgbCsq-3v6vQA61pBq6-Jq_Cum4HPv_wSvVAkBdpVJ9Yk8Zw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Jul 31, 2017 at 8:16 AM, <surender at iitk.ac.in> wrote:
>
> elph=.true.
>>
>
> obsolete syntax, replaced by variable 'electron_phonon'
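Following this advice, the namelist above would become something like the sketch below. This is not a tested input: 'interpolated' is one of the allowed electron_phonon values and, as far as I recall the ph.x documentation, requires la2F=.true. in the preceding scf run. Note also that the original amass(1)=1.008 and amass(2)=6.940 look like H and Li rather than Mg and B, so the standard masses are substituted here:

```fortran
Electron-phonon coefficients
 &inputph
    tr2_ph   = 1.0d-14,
    prefix   = 'MgB2',
    fildvscf = 'elph-MgB2-dv',
    amass(1) = 24.305,   ! Mg (original file had 1.008)
    amass(2) = 10.811,   ! B  (original file had 6.940)
    fildyn   = 'elph-MgB2.dyn',
    electron_phonon = 'interpolated',  ! replaces the obsolete elph=.true.
    trans    = .true.,
    ldisp    = .true.,
    nq1 = 4, nq2 = 4, nq3 = 4
 /
```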
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170731/e727927a/attachment-0001.html
>
Thank you, Prof. Paolo Giannozzi.
> ------------------------------
>
> Message: 11
> Date: Mon, 31 Jul 2017 15:31:42 +0900
> From: Nadeem Natt <nadeemnatt1 at gmail.com>
> Subject: [Pw_forum] electron-phonon
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CALbHWLcO08R0W3tQCxbc_GpS92H9JOpjBrxFAhwMEFfucLx79w at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi
> I am doing an electron-phonon calculation for a picene monoclinic crystal
> (72 atoms). The calculation does not move forward after computing the
> dynamical matrices for the first q-point (0.0 0.0 0.0); it gets stuck at
> the calculation of the electron-phonon interaction.
>
> After the dynamical matrices are computed, the el-ph part should not take
> very long, but in my case it has not finished even after five times the
> time needed for the dynamical matrices of one q-point. My output does not
> move past the point shown below.
>
>      freq (174 - 174) = 2208.9 [cm-1] --> B_g  R
>      freq (175 - 175) = 2210.2 [cm-1] --> A_g  R
>      freq (176 - 176) = 2221.8 [cm-1] --> B_g  R
>      freq (177 - 177) = 2226.9 [cm-1] --> A_g  R
>      freq (178 - 178) = 2246.6 [cm-1] --> B_u  I
>      freq (179 - 179) = 2253.9 [cm-1] --> A_u  I
>      freq (180 - 180) = 2259.6 [cm-1] --> A_g  R
>      freq (181 - 181) = 2264.8 [cm-1] --> A_u  I
>      freq (182 - 182) = 2271.1 [cm-1] --> B_u  I
>      freq (183 - 183) = 2299.6 [cm-1] --> B_g  R
>      freq (184 - 184) = 2317.7 [cm-1] --> A_u  I
>      freq (185 - 185) = 2339.6 [cm-1] --> A_g  R
>      freq (186 - 186) = 2346.2 [cm-1] --> B_g  R
>      freq (187 - 187) = 2359.5 [cm-1] --> B_u  I
>      freq (188 - 188) = 2378.8 [cm-1] --> A_u  I
>      freq (189 - 189) = 2380.6 [cm-1] --> A_g  R
>      freq (190 - 190) = 2496.3 [cm-1] --> B_u  I
>      freq (191 - 191) = 2505.6 [cm-1] --> A_g  R
>      freq (192 - 192) = 2696.7 [cm-1] --> A_g  R
>
>      electron-phonon interaction  ...
> Can you please suggest a possible reason for this? I am using a 4x4x4
> k-point grid and a 2x2x2 q-point grid.
>
> Regards
>
>
> --
>
> *Muhammad Nadeem*
>
> *Graduate Student*
>
> *Department of Physics Sungkyunkwan University, Suwon*
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170731/57dcd526/attachment-0001.html
>
> ------------------------------
>
> Message: 12
> Date: Mon, 31 Jul 2017 09:27:38 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAPMgbCsPTjrzmrErWyUhkHVDpJ65zaisJhAJHPvYFZyYDNJoqA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> You sent two different runs: the starting configuration for the job running
> on 70 CPUs was closer to convergence than the one on 24 CPUs. Of course the
> former performs more iterations than the latter in the same maximum allowed
> time.
>
> You shouldn't use 'cg' diagonalization unless you have a specific reason
> to.
>
> Paolo
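In pw.x input terms, this means using the default Davidson diagonalization rather than forcing 'cg'. A minimal sketch of the relevant namelist (the threshold value is illustrative, not taken from the runs above):

```fortran
 &electrons
    diagonalization = 'david'    ! Davidson, the default; usually faster than 'cg'
    mixing_beta     = 0.7
    conv_thr        = 1.0d-8     ! illustrative value
 /
```

Simply deleting diagonalization='cg' from the input has the same effect, since 'david' is the default.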
>
> On Sun, Jul 30, 2017 at 7:05 PM, Paolo Giannozzi <p.giannozzi at gmail.com>
> wrote:
>
>> On Sun, Jul 30, 2017 at 7:02 PM, Rajesh <creativeidleminds at gmail.com>
>> wrote:
>>
>> Why is this happening? Is it a memory shortage?
>>>
>>
>> didn't you read (or didn't you understand) this?
>>
>>
>>> if you want to have a meaningful answer you need to provide some useful
>>>> information. At the very least, the full output in all the different
>>>> cases.
>>>>
>>>
>> --
>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216 <+39%200432%20558216>, fax +39-0432-558222
>> <+39%200432%20558222>
>>
>>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://pwscf.org/pipermail/pw_forum/attachments/20170731/fd80e2f2/attachment-0001.html
>
> ------------------------------
>
> Message: 13
> Date: Mon, 31 Jul 2017 13:01:45 +0530
> From: Rajesh <creativeidleminds at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAF52KGcEeet=xfUHtAJfVURhvjPVm7EPjq4ttN04DhLognFnUg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Prof Giannozzi
>
> How can I change the maximum allowed time so that my simulations do not
> face this issue?
>
> Thanks
> Rajesh
>
> On Mon, Jul 31, 2017 at 12:57 PM, Paolo Giannozzi <p.giannozzi at gmail.com>
> wrote:
>
>> You sent two different runs: the starting configuration for the job
>> running on 70 CPUs was closer to convergence than the one on 24 CPUs. Of
>> course the former performs more iterations than the latter in the same
>> maximum allowed time.
>>
>> You shouldn't use 'cg' diagonalization unless you have a specific reason
>> to.
>>
>> Paolo
>>
>> On Sun, Jul 30, 2017 at 7:05 PM, Paolo Giannozzi <p.giannozzi at gmail.com>
>> wrote:
>>
>>> On Sun, Jul 30, 2017 at 7:02 PM, Rajesh <creativeidleminds at gmail.com>
>>> wrote:
>>>
>>> Why is this happening? Is it some memory shortage?
>>>>
>>>
>>> didn't you read (or didn't you understand) this?
>>>
>>>
>>>> if you want to have a meaningful answer you need to provide some
>>>> useful
>>>>> information. At the very least, the full output in all the different
>>>>> cases.
>>>>>
>>>>
>>> --
>>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>> Phone +39-0432-558216, fax +39-0432-558222
>>>
>>>
>>
>>
>> --
>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum at pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
> ------------------------------
>
> Message: 14
> Date: Mon, 31 Jul 2017 09:42:04 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAPMgbCsWz8berk4C3LqRwJZjT4dW7y0tDfo+3aA74=YZS0cKgg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> You set it, and don't know how to unset it? See variable "max_seconds".
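For reference, `max_seconds` belongs to the &CONTROL namelist of the pw.x input; a run that reaches it stops gracefully and can be restarted. A minimal sketch (the time value is purely illustrative):

```fortran
&CONTROL
  calculation = 'scf'
  ! Stop gracefully after ~24 hours of wall time; omit the variable
  ! (or set it very large) to remove the limit.
  max_seconds = 86400
  ! For a subsequent run, restart_mode = 'restart' resumes an
  ! interrupted calculation.
  restart_mode = 'from_scratch'
/
```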
>
> On Mon, Jul 31, 2017 at 9:31 AM, Rajesh <creativeidleminds at gmail.com>
> wrote:
>
>> Dear Prof Giannozzi
>>
>> How can I change the maximum allowed time so that my simulations do not
>> face this issue?
>>
>> Thanks
>> Rajesh
>>
>> On Mon, Jul 31, 2017 at 12:57 PM, Paolo Giannozzi
>> <p.giannozzi at gmail.com>
>> wrote:
>>
>>> You sent two different runs: the starting configuration for the job
>>> running on 70 CPUs was closer to convergence than the one on 24 CPUs.
>>> Of
>>> course the former performs more iterations than the latter in the same
>>> maximum allowed time.
>>>
>>> You shouldn't use 'cg' diagonalization unless you have a specific
>>> reason
>>> to.
>>>
>>> Paolo
>>>
>>> On Sun, Jul 30, 2017 at 7:05 PM, Paolo Giannozzi
>>> <p.giannozzi at gmail.com>
>>> wrote:
>>>
>>>> On Sun, Jul 30, 2017 at 7:02 PM, Rajesh <creativeidleminds at gmail.com>
>>>> wrote:
>>>>
>>>> Why is this happening? Is it some memory shortage?
>>>>>
>>>>
>>>> didn't you read (or didn't you understand) this?
>>>>
>>>>
>>>>> if you want to have a meaningful answer you need to provide some
>>>>> useful
>>>>>> information. At the very least, the full output in all the different
>>>>>> cases.
>>>>>>
>>>>>
>>>> --
>>>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>>> Phone +39-0432-558216, fax +39-0432-558222
>>>>
>>>>
>>>
>>>
>>> --
>>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>> Phone +39-0432-558216, fax +39-0432-558222
>>>
>>>
>>> _______________________________________________
>>> Pw_forum mailing list
>>> Pw_forum at pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum at pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
> ------------------------------
>
> Message: 15
> Date: Mon, 31 Jul 2017 09:48:32 +0200
> From: Alexandra Davila <davila at theo-physik.uni-kiel.de>
> Subject: [Pw_forum] B3LYP - (k+q) points
> To: pw_forum at pwscf.org
> Message-ID:
> <fa2b351e-9452-aa68-ef0c-ae871b53f852 at theo-physik.uni-kiel.de>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Dear QE users,
>
> I am trying to quantify the difference between PBE and B3LYP for the
> Cl/Au adsorption energy.
> My system has 13 atoms, a c(2x2) surface unit cell with 6 layers, and a
> 12x12x1 k-point mesh (43 irreducible k-points; I know that this is a lot).
> The pseudopotential is ultrasoft, generated with the BLYP functional.
>
> In my input file, I set the following flags:
>
> input_dft='B3LYP'
> nqx1=2, nqx2=2, nqx3=1
>
> I first performed an scf calculation without the flags above, then one
> with them (which takes a long time and has still not finished).
>
> My question is about the (k+q) points; my output says:
> EXX: setup a grid of 128 q-points centered on each k-point
> (k+q)-points:
>
> I would like to understand where this number comes from. As far as I
> understand, the (k+q) points are in the set of k points, so why do I
> have more? What can I do to reduce the CPU time?
>
> Thanks for your time and patience,
>
>
>
> ------------------------------
>
> Message: 16
> Date: Mon, 31 Jul 2017 13:19:53 +0530
> From: Rajesh <creativeidleminds at gmail.com>
> Subject: Re: [Pw_forum] Qe-6.1 giving different energy values on
> different PCs
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAF52KGcsRFq+uqBZQWB4W9SP0t-_1WbqDtN74q2ZRaryDodWUQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Prof Giannozzi
>
> Oh, I think my GUI software must have set it without my knowledge.
> Thanks
> Rajesh
>
> On Mon, Jul 31, 2017 at 1:12 PM, Paolo Giannozzi <p.giannozzi at gmail.com>
> wrote:
>
>> You set it, and don't know how to unset it? See variable "max_seconds".
>>
>> On Mon, Jul 31, 2017 at 9:31 AM, Rajesh <creativeidleminds at gmail.com>
>> wrote:
>>
>>> Dear Prof Giannozzi
>>>
>>> How can I change the maximum allowed time so that my simulations do not
>>> face this issue?
>>>
>>> Thanks
>>> Rajesh
>>>
>>> On Mon, Jul 31, 2017 at 12:57 PM, Paolo Giannozzi
>>> <p.giannozzi at gmail.com>
>>> wrote:
>>>
>>>> You sent two different runs: the starting configuration for the job
>>>> running on 70 CPUs was closer to convergence than the one on 24 CPUs.
>>>> Of
>>>> course the former performs more iterations than the latter in the same
>>>> maximum allowed time.
>>>>
>>>> You shouldn't use 'cg' diagonalization unless you have a specific
>>>> reason
>>>> to.
>>>>
>>>> Paolo
>>>>
>>>> On Sun, Jul 30, 2017 at 7:05 PM, Paolo Giannozzi
>>>> <p.giannozzi at gmail.com>
>>>> wrote:
>>>>
>>>>> On Sun, Jul 30, 2017 at 7:02 PM, Rajesh <creativeidleminds at gmail.com>
>>>>> wrote:
>>>>>
>>>>> Why is this happening? Is it some memory shortage?
>>>>>>
>>>>>
>>>>> didn't you read (or didn't you understand) this?
>>>>>
>>>>>
>>>>>> if you want to have a meaningful answer you need to provide some
>>>>>>> useful information. At the very least, the full output in all the
>>>>>>> different
>>>>>>> cases.
>>>>>>>
>>>>>>
>>>>> --
>>>>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>>>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>>>> Phone +39-0432-558216, fax +39-0432-558222
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>>> Phone +39-0432-558216, fax +39-0432-558222
>>>>
>>>>
>>>> _______________________________________________
>>>> Pw_forum mailing list
>>>> Pw_forum at pwscf.org
>>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>>
>>>
>>>
>>> _______________________________________________
>>> Pw_forum mailing list
>>> Pw_forum at pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>>
>> --
>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum at pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
> ------------------------------
>
> Message: 17
> Date: Mon, 31 Jul 2017 10:54:34 +0200
> From: Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr>
> Subject: Re: [Pw_forum] B3LYP - (k+q) points
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAG+GtJdFjn=jZ0VHSLD3-edkWveKm31HoPmbvfVUqU_+W5FwLg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> >> I would like to understand from where does the number come? As far as I
> >> understood, (k+q) points are in the set of k points, why I have more?
>
> They are in the set of k points, *before* symmetry is used to reduce them.
>
> >> What can I do to reduce the cpu time?
>
> Try
> use_ace = .true.
> Although it only becomes the default in the current development version,
> it was already working in 6.1.
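To illustrate the point above, here is a small sketch (plain Python, not QE code): for a Gamma-centered Monkhorst-Pack k-mesh and a commensurate q-grid, every k+q point folds back onto the *full* unsymmetrized k-mesh, which is larger than the symmetry-reduced list of SCF k-points (144 versus 43 in the example from the question). The mesh sizes match the input above; the helper names are of course only for illustration.

```python
from fractions import Fraction

def gamma_centered_mesh(n1, n2, n3):
    """Full (unsymmetrized) Gamma-centered mesh in crystal coordinates."""
    return {(Fraction(i, n1), Fraction(j, n2), Fraction(k, n3))
            for i in range(n1) for j in range(n2) for k in range(n3)}

def fold(pt):
    """Fold a crystal-coordinate point back into [0, 1)."""
    return tuple(c % 1 for c in pt)

# 12x12x1 k-mesh and the 2x2x1 q-grid (nqx1=2, nqx2=2, nqx3=1) as above
kmesh = gamma_centered_mesh(12, 12, 1)
qgrid = gamma_centered_mesh(2, 2, 1)

kplusq = {fold(tuple(kc + qc for kc, qc in zip(k, q)))
          for k in kmesh for q in qgrid}

# Every k+q lies on the full k-mesh ...
assert kplusq <= kmesh
# ... which has 144 points, i.e. more than the 43 irreducible k-points
print(len(kmesh))  # 144
```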
>
> ------------------------------
>
> Message: 18
> Date: Mon, 31 Jul 2017 11:05:17 +0200
> From: Małgorzata Wawrzyniak-Adamczewska <mwaw at amu.edu.pl>
> Subject: [Pw_forum] spin-dependent electron-phonon coefficients
> To: pw_forum at pwscf.org
> Message-ID: <f82c65959e456acd35d1e0cb723a977b at amu.edu.pl>
> Content-Type: text/plain; charset="us-ascii"
>
> Dear QE users and developers,
>
> I would like to calculate spin-dependent electron-phonon lambda
> coefficients (for a magnetic molecule).
>
> So far I have succeeded in obtaining the non-spin-dependent lambdas (using
> QE-5.2.1), but this distribution fails to calculate the spin-dependent
> ones.
>
> I used US-PAW pseudos.
>
> Is there any QE distribution with spin-dependent el-ph (or lambdas for
> systems with spin-orbit) already implemented?
>
> Maybe I should use norm-conserving pseudos for the spin-dependent lambdas?
>
> Please give a piece of advice.
>
> My best regards,
>
> Malgorzata Wawrzyniak-Adamczewska
>
> ZSECS Adam Mickiewicz University, Poznan, Poland
>
> ------------------------------
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
> End of Pw_forum Digest, Vol 120, Issue 27
> *****************************************
>
>