[Pw_forum] Using -nimage with phonon at q=0

Merlin Meheut merlin.meheut at get.obs-mip.fr
Wed Aug 5 17:18:46 CEST 2015


Dear PWSCF users,

I recently discovered, with great interest, the possibility of 
parallelizing phonon calculations using the -nimage option of ph.x 
(example given in espresso-4.3.2/examples/GRID_examples).

However, I ran into a problem when performing calculations at the gamma 
point. For other q-points (hence with epsil=.false. and zue=.false.) 
everything went as planned, but with q=0 (and epsil=.true. and 
zue=.true.) it simply did not work: I took 80 processors divided into 4 
images, but instead of distributing the representations among the four 
images, the four groups of processors all performed the same 
calculation, computing the same representations. I killed the 
calculation at some point (it had computed the electric fields, the 
effective charges, and 218 representations out of 564). I would now 
like to finish the computation without redoing it, and I have several 
questions towards that goal:

- Is there a particular procedure for using -nimage with epsil=.true. 
and zue=.true., or is this case simply not supported? In other words, 
did I miss something?
- Along the same lines, if I want to build my dynamical matrix, 
including effective charges and dielectric tensor, through a ph.x run 
with "recover=.true.", can I do it, and if so, which files do I need in 
_ph0? In particular, which files contain the information on the 
dielectric tensor and the effective charges? In other words, are there 
special guidelines, in addition to the ones given in 
espresso-4.3.2/Doc/INPUT_PH.txt, for separating the phonon calculation 
into several jobs when the calculation has epsil=.true. and 
zue=.true.?
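For concreteness, here is the kind of split I have in mind, based on my 
reading of INPUT_PH.txt (the start_irr/last_irr values below are only 
illustrative, and I am not sure how the electric-field and effective- 
charge parts fit into such a split):

------------------------------------------------------------------------
  &inputph
    ! ... same settings as in my full input below, plus:
    recover   = .true.,  ! restart from what is already in _ph0
    start_irr = 219,     ! illustrative: first representation still to do
    last_irr  = 564,     ! illustrative: last representation of this job
/&end
      0.0000000   0.0000000   0.0000000
------------------------------------------------------------------------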

Thank you in advance for any help,

The version of QE is 5.1.

I first did an scf calculation on 20 processors:

-scf input file ----------------------------------------------------------
  &control
        calculation = 'scf',
       restart_mode = 'from_scratch' ,
             prefix = 'LiClMag2-1',
            disk_io = 'default' ,
     pseudo_dir     = '$WORKDIR',
     outdir         = '${WORKDIR}',
     tprnfor        = .true.,
     tstress        = .true.,
/&end
  &system
     ibrav = 0, celldm(1)=23.3535,
     nat = 188, ntyp = 4, ecutwfc = ${a}.0, ecutrho=${b}.0
/&end
  &electrons
    electron_maxstep = 100,
           conv_thr = 1.d-8,
        mixing_mode = 'plain',
        startingwfc = 'atomic',
        mixing_beta = 0.5,
/&end
ATOMIC_SPECIES
   Li    7.0160   Li.blyp-sl-rrkjus_psl.1.0.0.UPF
   O    15.9949   O.blyp.UPF
   H     1.0079   H.blyp2.UPF
   Cl   34.9689   Cl.blyp-nl-rrkjus_psl.1.0.0.UPF

ATOMIC_POSITIONS (angstrom)
(...)
K_POINTS {crystal}
  1
  0.0 0.0 0.0 1

CELL_PARAMETERS { cubic }
   1.000000000   0.000000000    0.00000000
   0.000000000   1.000000000    0.00000000
   0.000000000   0.000000000    1.00000000
------------------------------------------------------------------------

The scf was run with:

srun  pw.x -npool 1 < scf.${PREFIX}.inp > scf.${PREFIX}.out

The ph.x input is:
--------------------------------------------------------------------------------------
  &inputph
    amass(1)= 7.0160,
    amass(2)=15.9949,
    amass(3)= 1.0079,
    amass(4)=34.9689,
    ! ldisp=.true., nq1=2, nq2=2, nq3=2,
    alpha_mix(1) = 0.7,
    tr2_ph =  1.0D-18,
    prefix='LiClMag2-1',
    fildyn='mat.$PREFIX',
    epsil =.true.,
    trans =.true.,
    zue = .true.,
    lraman=.false.,
    outdir = '$WORKDIR',
    ! max_seconds= 180000,
/&end
      0.0000000   0.0000000   0.0000000
---------------------------------------------------------------------------------------

It was run on 80 processors (4 images of 20) using:

srun ph.x -npool 1 -nimage 4 <  ph.${PREFIX}.inp > ph.${PREFIX}.out
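If splitting by hand turns out to be the way to go, I imagine a 
job-script fragment along these lines (the chunk boundaries and input 
file names are hypothetical; the idea is that a final run with 
recover=.true. and no start_irr/last_irr restriction collects everything 
into the dynamical matrix):

```shell
# Run each chunk of representations as a separate job; start_irr/last_irr
# are set per chunk in the corresponding input file (boundaries below are
# illustrative only).
srun ph.x -npool 1 < ph.chunk1.${PREFIX}.inp > ph.chunk1.${PREFIX}.out
srun ph.x -npool 1 < ph.chunk2.${PREFIX}.inp > ph.chunk2.${PREFIX}.out

# Final pass with recover=.true. (and no start_irr/last_irr restriction)
# to assemble the dynamical matrix, effective charges and dielectric
# tensor from the pieces left in _ph0.
srun ph.x -npool 1 < ph.collect.${PREFIX}.inp > ph.collect.${PREFIX}.out
```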


-- 
Merlin Méheut, Géosciences et Environnement Toulouse,
OMP, 14 avenue Edouard Belin, 31400 Toulouse, France
Université Paul Sabatier - Toulouse 3

  phone +33 (0)5 61 33 26 17, fax +33 (0)5 61 33 25 60

