[Pw_forum] HSE Screening Parameter
Ilya Ryabinkin
igryabinkin at gmail.com
Fri Nov 6 15:02:01 CET 2015
Have you tried activating the option 'verbosity = high', as your output file
recommends?
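
For instance, taking the &control namelist from your input below and adding
that one option (everything else unchanged):

  &control
    prefix       = 'silicon',
    pseudo_dir   = 'espresso/pseudo/',
    outdir       = './tmp'
    tprnfor      = .true.
    tstress      = .true.
    restart_mode = 'from_scratch'
    verbosity    = 'high'
  /
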
I.
On Nov 6, 2015 8:41 AM, "Ref Fymz" <reffymz at gmail.com> wrote:
> Hey,
>
> Thanks for your reply. The strange thing is, when I use an 8x8x8 k-point
> grid, I do actually get all of the forces / energy contributions printed.
>
> I can give a sample output:
>
> 8x8x8 k-points
>
> “
> ! total energy = -15.76266360 Ry
> Harris-Foulkes estimate = -15.76266360 Ry
> est. exchange err (dexx) = 0.00000000 Ry
> - averaged Fock potential = 0.00002736 Ry
> + Fock energy = -0.00001368 Ry
>
> EXX self-consistency reached
>
> Forces acting on atoms (Ry/au):
>
> atom 1 type 1 force = 0.00000000 0.00000000 0.00000000
> atom 2 type 1 force = 0.00000000 0.00000000 0.00000000
>
> Total force = 0.000000 Total SCF correction = 0.000000
>
>
> entering subroutine stress ...
>
> total stress (Ry/bohr**3) (kbar) P= -0.02
> -0.00000012 0.00000000 0.00000000 -0.02 0.00 0.00
> 0.00000000 -0.00000012 0.00000000 0.00 -0.02 0.00
> 0.00000000 0.00000000 -0.00000012 0.00 0.00 -0.02
>
>
> Writing output data file silicon.save
>
> init_run : 2.09s CPU 2.94s WALL ( 1 calls)
> electrons : 4290.95s CPU 4299.01s WALL ( 2 calls)
> forces : 0.03s CPU 0.04s WALL ( 1 calls)
> stress : 509.33s CPU 510.68s WALL ( 1 calls)
>
> Called by init_run:
> wfcinit : 1.81s CPU 2.16s WALL ( 1 calls)
> potinit : 0.03s CPU 0.03s WALL ( 1 calls)
>
> Called by electrons:
> c_bands : 4288.98s CPU 4295.89s WALL ( 9 calls)
> sum_band : 1.34s CPU 1.34s WALL ( 9 calls)
> v_of_rho : 0.14s CPU 0.14s WALL ( 9 calls)
> mix_rho : 0.01s CPU 0.01s WALL ( 9 calls)
>
> Called by c_bands:
> init_us_2 : 0.09s CPU 0.11s WALL ( 696 calls)
> ccgdiagg : 4287.42s CPU 4294.25s WALL ( 261 calls)
> wfcrot : 3.18s CPU 3.28s WALL ( 174 calls)
>
> Called by sum_band:
>
> Called by *cgdiagg:
> h_psi : 4288.38s CPU 4295.25s WALL ( 5063 calls)
> cdiaghg : 0.03s CPU 0.06s WALL ( 174 calls)
>
> Called by h_psi:
> add_vuspsi : 0.17s CPU 0.16s WALL ( 5063 calls)
>
> General routines
> calbec : 1.34s CPU 1.42s WALL ( 10184 calls)
> fft : 0.08s CPU 0.09s WALL ( 106 calls)
> fftw : 8.89s CPU 8.93s WALL ( 16042 calls)
> fftc : 4199.36s CPU 4216.09s WALL ( 5644288 calls)
> fftcw : 1.72s CPU 1.84s WALL ( 3220 calls)
> davcio : 0.00s CPU 0.00s WALL ( 29 calls)
>
> Parallel routines
> fft_scatter : 2508.18s CPU 2519.17s WALL ( 5663656 calls)
>
> EXX routines
> exx_grid : 0.21s CPU 0.21s WALL ( 1 calls)
> exxinit : 15.51s CPU 15.72s WALL ( 2 calls)
> vexx : 4279.04s CPU 4285.76s WALL ( 1088 calls)
> exxenergy : 688.89s CPU 690.23s WALL ( 3 calls)
>
> PWSCF : 1h31m CPU 1h32m WALL
>
>
> This run was terminated on: 8:23:10 6Nov2015
>
> =------------------------------------------------------------------------------=
> JOB DONE.
> =------------------------------------------------------------------------------=
> “
>
>
>
>
> 24x24x24 k-points:
>
>
> “
>
> total cpu time spent up to now is 1197.5 secs
>
> total energy = -63.05180650 Ry
> Harris-Foulkes estimate = -63.05180650 Ry
> estimated scf accuracy < 0.00000001 Ry
>
> iteration # 7 ecut= 160.00 Ry beta=0.70
> CG style diagonalization
> ethr = 3.40E-11, avg # of iterations = 3.1
>
> total cpu time spent up to now is 1343.4 secs
>
> End of self-consistent calculation
>
> Number of k-points >= 100: set verbosity='high' to print the bands.
>
> -------------------------------------------------------
> Primary job terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> -------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun detected that one or more processes exited with non-zero status,
> thus causing the job to be terminated. The first process to do so was:
>
> Process name: [[52590,1],6]
> Exit code: 127
> --------------------------------------------------------------------------
>
> “
>
> So the k-point density seems to be what causes the trouble. Is it due to the
> number of processors I'm using? I'm using 48.
>
>
> Thanks,
>
> Phil
>
> On 6 November 2015 at 12:17, Ref Fymz <reffymz at gmail.com> wrote:
>
>> Dear pw_forum,
>>
>>
>> I am trying to use the HSE hybrid functional within Quantum ESPRESSO. I
>> have an 8-atom cubic silicon cell, and I have converged my k-point grid,
>> q-point grid, and wavefunction ecut. However, regardless of how much I
>> change my screening parameter (from 0.0 bohr^-1 all the way to 100
>> bohr^-1), the difference between my lowest unoccupied and highest occupied
>> states remains about 0.61 eV, which points towards a problem. Can anybody
>> point me in the right direction?
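>>
>> For context, my understanding of how the screening parameter enters (please
>> correct me if I have this wrong) is that HSE splits the Coulomb kernel with
>> the error function,
>>
>>   1/r = erfc(w r)/r + erf(w r)/r ,
>>
>> and applies exact exchange only to the short-range part,
>>
>>   E_x^HSE(w) = 1/4 E_x^{HF,SR}(w) + 3/4 E_x^{PBE,SR}(w) + E_x^{PBE,LR}(w) ,
>>
>> so w -> 0 should recover PBE0 and w -> infinity should recover plain PBE. A
>> gap that does not move at all between those two limits is what worries me.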
>>
>> Also, when I do an scf run, I'm not getting the pressures / forces printed
>> in the file despite asking for them. Is this due to the density of my
>> k-point and q-point grids?
>>
>> The only other settings I can think it could come from are ecutvcut /
>> x_gamma_extrapolation / exxdiv_treatment (see the sketch just below). Has
>> anybody else used QE for silicon successfully?
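>>
>> For reference, this is what I mean; as far as I understand, these are the
>> defaults, so they are what my runs are implicitly using:
>>
>>   ! EXX-related options in the &system namelist (defaults, as I understand them)
>>   exxdiv_treatment      = 'gygi-baldereschi'
>>   x_gamma_extrapolation = .true.
>>   ecutvcut              = 0.0    ! Ry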
>>
>> My input looks like this:
>>
>> &control
>> prefix='silicon',
>> pseudo_dir = 'espresso/pseudo/',
>> outdir='./tmp'
>> tprnfor = .true.
>> tstress = .true.
>> restart_mode = 'from_scratch'
>>
>> /
>> &system
>> ibrav = 1, celldm(1) =10.327, nat= 8, ntyp= 1,
>> ecutwfc = 120, input_dft = 'hse'
>> nqx1 = 8, nqx2 = 8, nqx3 = 8
>> nbnd = 32
>> screening_parameter = 100
>> occupations = 'fixed'
>> /
>> &electrons
>> diagonalization='cg'
>> conv_thr = 1.0e-9
>> /
>> ATOMIC_SPECIES
>> Si 28.086 Si.pbe-mt_fhi.UPF
>>
>> ATOMIC_POSITIONS alat
>> Si 0.0 0.0 0.0
>> Si 0.5 0.5 0.0
>> Si 0.5 0.0 0.5
>> Si 0.0 0.5 0.5
>> Si 0.25 0.25 0.25
>> Si 0.75 0.75 0.25
>> Si 0.75 0.25 0.75
>> Si 0.25 0.75 0.75
>>
>> K_POINTS automatic
>> 24 24 24 0 0 0
>>
>>
>>
>> I would also like to add that when I add an F-D smearing temperature, my
>> output still only gives me the total energy (it seems to abort before
>> printing the breakdown of TS / XC / Hartree / one-electron / etc.). Is this
>> also because of my dense k-point grid? My output has this at the end (what
>> I mean by "adding smearing" is sketched just after the quoted lines):
>>
>> "
>> Primary job terminated normally, but 1 process returned
>> a non-zero exit code. Per user-direction, the job has been aborted.
>> -------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun detected that one or more processes exited with non-zero status,
>> thus causing the job to be terminated. The first process to do so was:
>>
>> Process name: [[44615,1],12]
>> Exit code: 127
>> "
>>
>> Thanks again, hope you can shed some light on this for me,
>>
>>
>>
>> Thanks,
>>
>>
>> Phil
>>
>>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>