[QE-users] hybrid pseudopotentials

Giuseppe Mattioli giuseppe.mattioli at ism.cnr.it
Wed Jul 17 17:20:47 CEST 2024


On top of this, your calculation, with proper cutoffs, is actually
a big one and will require significant resources. Moreover, if you
want to save time and resources, I always suggest performing hybrid
EXX calculations with norm-conserving pseudopotentials. They are much
more stable in my experience.

Also make wise use of all the available tricks (read the pw.x manual)...

  &system
     ecutwfc=80.0,
     ecutfock=120.0, <---------
     input_dft='vdW-DF2-ahbr'
  /
  &electrons
     diagonalization='david',
     mixing_mode='plain'
     mixing_beta=0.1
     conv_thr=1.0d-7
     electron_maxstep=100
     adaptive_thr=.true. <---------
  /
ATOMIC_SPECIES
Zn    65.380     Zn_ONCV_PBE-1.2.upf
O     15.999     O_ONCV_PBE-1.2.UPF
N     14.007     N_ONCV_PBE-1.2.UPF
C     12.011     C_ONCV_PBE-1.2.UPF
H      1.008     H_ONCV_PBE-1.2.UPF
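
One such trick (a sketch, not from this thread: it assumes the same
prefix and outdir are reused between the two runs) is to converge the
system with a cheap GGA functional first and then start the hybrid run
from those converged wavefunctions:

  ! step 1: plain GGA scf (e.g. input_dft='pbe', or the functional
  !         native to the pseudopotentials), same prefix/outdir
  ! step 2: hybrid scf restarted from the GGA result
  &electrons
     startingwfc='file'   ! read wavefunctions from the GGA run
     startingpot='file'   ! read the charge density/potential too
     ...
  /

This usually saves several expensive EXX iterations at the start of
the hybrid scf.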

HTH
Giuseppe

Quoting Matic Poberznik <matic.poberznik at ijs.si>:

> Dear David,
>
> after a quick glance there are several problems with your
> input/parallelization setup. Regarding the parallelization, you are
> using 2 nodes but only one core per node (effectively running on 2
> cores); I would suggest requesting cpus-per-task equal to the
> number of CPUs on one node.
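>
> For example, a sketch assuming 2 nodes with 64 cores each and an
> OpenMP-enabled pw.x build (adjust the counts to your cluster):
>
>   #SBATCH --nodes=2
>   #SBATCH --ntasks=2            # one MPI process per node
>   #SBATCH --cpus-per-task=64    # all cores of a node per process
>   export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
>
> A pure-MPI alternative (--ntasks-per-node=64, --cpus-per-task=1) is
> also common; what matters is that all the cores you pay for are
> actually requested and used.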
>
> On the input side, your ecutrho/ecutwfc cutoffs are way too small for
> any meaningful result, and the scf convergence threshold is way too
> large (the default is 10^-6 and yours is 1.0?). In any case, I
> suggest you carefully check that you understand the meaning of each
> input parameter you specified:
>
> https://www.quantum-espresso.org/Doc/INPUT_PW.html#idm814
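>
> As a rough illustration only (these are not values from this thread;
> always check the cutoffs suggested in the header of each UPF file),
> PAW pseudopotentials like yours typically need something on the
> order of
>
>   ecutwfc = 50,
>   ecutrho = 400,   ! ~8-12x ecutwfc for PAW/ultrasoft;
>                    ! 4x is the norm-conserving default
>
> rather than the 10/40 Ry in your input.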
>
> Best,
>
> Matic Poberznik
>
> -- 
> Jozef Stefan Institute, Ljubljana, Slovenia
>
> On 7/17/24 16:32, dlduran at uco.es wrote:
>> Dear QE users:
>>
>>  I am working with a perovskite and I would like to obtain the
>> Projected DOS (projwfc) with hybrid functionals. As far as I
>> know, first an scf calculation must be done and afterwards the
>> projwfc calculation itself (an scf with a large number of
>> k-points, though).
>>
>>  I am still in the scf step because I'm having a lot of problems.
>> Even a simple calculation takes a very long time in the best of
>> cases, or it crashes outright for parallelization reasons. I have
>> tried the hse and gau-pbe functionals.
>>
>>  This is part of my .in file:
>>
>>
>> &CONTROL
>>   calculation = 'scf',
>>   restart_mode = 'from_scratch',
>>   outdir = './tmp',
>>   pseudo_dir = '/home/dlduran/CALCULOS/PSEUDOPOTENCIALES/QE',
>>   prefix= 's2',
>>   etot_conv_thr=1e0,
>>   forc_conv_thr=1e0
>> /
>>
>> &SYSTEM
>>   ibrav = 14,
>>   celldm(1)=24.8964,
>>   celldm(2)=0.8654,
>>   celldm(3)=1.4874,
>>   celldm(4)=0.06589425,
>>   celldm(5)=0.279374894,
>>   celldm(6)=-0.160074146,
>>   nat = 140,
>>   ntyp = 6,
>>   ecutwfc = 10,
>>   ecutrho = 40,
>>   occupations = 'tetrahedra',
>>   input_dft= 'hse',
>>   nqx1 = 2, nqx2 = 2, nqx3 = 2,
>>   x_gamma_extrapolation = .true.,
>>   exxdiv_treatment = 'gygi-baldereschi'
>> /
>>
>> &ELECTRONS
>>   conv_thr = 1e0
>>
>> /
>>
>> ATOMIC_SPECIES
>>    C   12.01060  C.pbe-n-kjpaw_psl.1.0.0.UPF
>>    H    1.00750  H.pbe-kjpaw_psl.1.0.0.UPF
>>    S   32.06750  S.pbe-n-kjpaw_psl.1.0.0.UPF
>>    I  126.90400  I.pbe-n-kjpaw_psl.1.0.0.UPF
>>    N   14.00650  N.pbe-n-kjpaw_psl.1.0.0.UPF
>>   Pb  207.20000  Pb.pbe-dn-kjpaw_psl.1.0.0.UPF
>>
>> ATOMIC_POSITIONS (crystal)
>> Pb   0.742890000000000   0.764150000000000   0.970330000000000
>> Pb   0.813460000000000   0.227210000000000   0.976720000000000
>> Pb   0.257110000000000   0.235850000000000   0.029670000000000
>> Pb   0.186540000000000   0.772790000000000   0.023280000000000
>>
>> ...
>>
>>  H   0.744170000000000   0.472670000000000   0.818640000000000
>>  H   0.768130000000000   0.621830000000000   0.789940000000000
>>  H   0.647140000000000   0.560850000000000   0.847470000000000
>>  H   0.025900000000000   0.955450000000000   0.800600000000000
>>  H   0.134390000000000   0.949430000000000   0.831100000000000
>>  H   0.056530000000000   0.052470000000000   0.859210000000000
>>
>> K_POINTS (automatic)
>>
>>    2 2 2 0 0 0
>>
>>
>>  And this is my parallelization setup:
>>
>>
>> #SBATCH --nodes=2                 # Number of nodes
>> #SBATCH --ntasks=2                # Total number of MPI processes
>> #SBATCH --cpus-per-task=1         # Number of CPU (cores) per MPI process
>>
>>
>>  This calculation has been running for 2 days!
>>
>>
>>  I would appreciate any help or suggestions. Should I try
>> another, easier functional? Is there a suitable set of k-points
>> and nqx parameters? Is there any other flag that I am missing or
>> using wrongly?
>>
>>
>>  Thank you very much for your help, David.
>>
>>
>>
>> --------------------------
>> David López Durán
>> Department of Physics
>> University of Córdoba, Spain
>> Phone: +34 957 21 20 32
>> https://es.linkedin.com/in/davidlopezduran
>>
>> _______________________________________________
>> The Quantum ESPRESSO community stands by the Ukrainian
>> people and expresses its concerns about the devastating
>> effects that the Russian military offensive has on their
>> country and on the free and peaceful scientific, cultural,
>> and economic cooperation amongst peoples
>> _______________________________________________
>> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
>> users mailing list users at lists.quantum-espresso.org
>> https://lists.quantum-espresso.org/mailman/listinfo/users
>



GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel + 39 06 90672342 - Fax +39 06 90672316
E-mail: <giuseppe.mattioli at ism.cnr.it>


