[Pw_forum] Using -nimage with phonon at q=0

Andrea Dal Corso dalcorso at sissa.it
Wed Aug 5 18:50:29 CEST 2015


On Wed, 2015-08-05 at 17:18 +0200, Merlin Meheut wrote:
> Dear PWSCF users,
> 
> I recently discovered with great interest the possibilities to 
> parallelize phonon calculations using the -nimage option of ph.x. 
> (example given in espresso-4.3.2/examples/GRID_examples).
> 
The GRID examples refer to grid splitting, meaning that you split the
calculation by hand using start_irr, last_irr, etc., and in principle
the pieces can run on different machines. You then have to collect the
files yourself to obtain the final results.
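
For instance, one piece of a grid-split run looks something like this
(a sketch only, with your prefix and a placeholder range of
representations; see the GRID examples for complete scripts):

   &inputph
      prefix = 'LiClMag2-1',
      outdir = '$WORKDIR',
      fildyn = 'mat.LiClMag2-1',
      tr2_ph = 1.0D-18,
      ! this job computes only representations 1 to 10:
      start_irr = 1,
      last_irr = 10,
   /
      0.0 0.0 0.0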

Image splitting with the -nimage option is shown in Image_example. In
this case the calculation is split automatically, but at the end you
need to do another run without images to collect the results.
Please use QE 5.2 if you want to use these features.
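
Schematically, with srun as in your scripts (ph.inp and ph_recover.inp
are hypothetical file names; ph_recover.inp is just a copy of the
phonon input with recover=.true. added):

   # automatic splitting of the representations over 4 images
   srun ph.x -nimage 4 < ph.inp > ph.out
   # final collection run, without images
   srun ph.x < ph_recover.inp > ph_recover.out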


> However, I had a problem when performing calculations at gamma-point: 
> for other q-points (therefore with epsil=.false. and zue=.false.) 
> everything went as planned, but with q=0 (and epsil=.true. and 
> zue=.true.), this just did not work. I took 80 processors divided into 4 
> images, but instead of dividing the different representations among the 4 
> images, the four groups of processors performed the same calculation, 
> computing the same representations. I killed the calculation at some 
> point (I have computed the electric fields, effective charges and 218 
> representations out of 564). I would like now to finish the computation 
> without redoing it, and I have several questions to achieve this goal:
> 

Are you sure that all the images made the same phonon calculation? Or
did every image repeat the electric field calculation while actually
computing different phonon modes? Either way, since you stopped the
calculation, you can only collect the .xml files into a single _ph0
directory and restart without images: restarting with images is still
poorly supported.
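
If the images did compute different modes, something along these lines
should do the collection (a sketch only: the _ph1, _ph2, _ph3 directory
names and the .phsave layout are my assumptions, so check what your run
actually wrote in outdir before copying):

   # gather the partial .xml results of images 1-3 into the _ph0 tree
   for i in 1 2 3; do
      cp $WORKDIR/_ph$i/LiClMag2-1.phsave/*.xml \
         $WORKDIR/_ph0/LiClMag2-1.phsave/
   done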

> - is there a particular procedure for using -nimage with epsil=.true. 
> and zue=.true., or is that just not foreseen? In other words, did I miss 
> something?
> - following the same idea, if I want to build my dynamical matrix, with 
> effective charges and dielectric tensor, by a ph.x run with 
> "recover=.true.", can I do it and if I can, what files do I need in 
> _ph0? In particular, what are the files that contain the information on 
> dielectric tensor and effective charges? In other words, are there 
> special guidelines, in addition to the ones given in 
> espresso-4.3.2/Doc/INPUT_PH.txt, for splitting the phonon calculation 
> into several jobs when the calculation has epsil=.true. and 
> zue=.true.?

In the latest version of QE, the dielectric constant and the effective
charges are saved in the tensors.xml file in the _ph0 directory.
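
The final run is then your input below without -nimage and with
recover=.true. added, e.g. (a sketch based on your input; the amass
lines are omitted and ph_recover.inp is a hypothetical file name):

   &inputph
      prefix = 'LiClMag2-1',
      outdir = '$WORKDIR',
      fildyn = 'mat.$PREFIX',
      tr2_ph = 1.0D-18,
      epsil = .true.,
      zue = .true.,
      recover = .true.,
   /
      0.0000000   0.0000000   0.0000000

   srun ph.x -npool 1 < ph_recover.inp > ph_recover.out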

HTH,

Andrea


> Additional information:
> 
> Thank you in advance for any help,
> 
> The version of QE is 5.1.
> 
> I first did an scf calculation on 20 processors:
> 
> -scf input file ----------------------------------------------------------
>   &control
>         calculation = 'scf',
>        restart_mode = 'from_scratch' ,
>              prefix = 'LiClMag2-1',
>             disk_io = 'default' ,
>      pseudo_dir     = '$WORKDIR',
>      outdir         = '${WORKDIR}',
>      tprnfor        = .true.,
>      tstress        = .true.,
> /&end
>   &system
>      ibrav = 0, celldm(1)=23.3535,
>      nat = 188, ntyp = 4, ecutwfc = ${a}.0, ecutrho=${b}.0
> /&end
>   &electrons
>     electron_maxstep = 100,
>            conv_thr = 1.d-8,
>         mixing_mode = 'plain',
>         startingwfc = 'atomic',
>         mixing_beta = 0.5,
> /&end
> ATOMIC_SPECIES
>    Li    7.0160   Li.blyp-sl-rrkjus_psl.1.0.0.UPF
>    O    15.9949   O.blyp.UPF
>    H     1.0079   H.blyp2.UPF
>    Cl   34.9689   Cl.blyp-nl-rrkjus_psl.1.0.0.UPF
> 
> ATOMIC_POSITIONS (angstrom)
> (...)
> K_POINTS {crystal}
>   1
>   0.0 0.0 0.0 1
> 
> CELL_PARAMETERS { cubic }
>    1.000000000   0.000000000   0.000000000
>    0.000000000   1.000000000   0.000000000
>    0.000000000   0.000000000   1.000000000
> ------------------------------------------------------------------------
> 
> The scf was run on 20 processors:
> 
> srun  pw.x -npool 1 < scf.${PREFIX}.inp > scf.${PREFIX}.out
> 
> the ph input is :
> --------------------------------------------------------------------------------------
>   &inputph
>     amass(1)= 7.0160,
>     amass(2)=15.9949,
>     amass(3)= 1.0079,
>     amass(4)=34.9689,
>     ! ldisp=.true., nq1=2, nq2=2, nq3=2,
>     alpha_mix(1) = 0.7,
>     tr2_ph =  1.0D-18,
>     prefix='LiClMag2-1',
>     fildyn='mat.$PREFIX',
>     epsil =.true.,
>     trans =.true.,
>     zue = .true.,
>     lraman=.false.,
>     outdir = '$WORKDIR',
>     ! max_seconds= 180000,
> /&end
>       0.0000000   0.0000000   0.0000000
> ---------------------------------------------------------------------------------------
> 
> It was run on 20 processors using:
> 
> srun ph.x -npool 1 -nimage 4 <  ph.${PREFIX}.inp > ph.${PREFIX}.out
> 
> 
> -- 
> Merlin Méheut, Géosciences et Environnement Toulouse,
> OMP, 14 avenue Edouard Belin, 31400 Toulouse, France
> Université Paul Sabatier - Toulouse 3
> 
>   phone +33 (0)5 61 33 26 17, fax +33 (0)5 61 33 25 60
> 
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

-- 
Andrea Dal Corso                    Tel. 0039-040-3787428
SISSA, Via Bonomea 265              Fax. 0039-040-3787249
I-34136 Trieste (Italy)             e-mail: dalcorso at sissa.it




