[Pw_forum] better phonon parallelization

adwait mevada adwait.mevada at gmail.com
Thu Sep 3 21:56:20 CEST 2015


Dear QE users,
I am running phonon calculations on an Intel Xeon W3520 processor,
which has 8 cores at 2.66 GHz and 16 GB of RAM.

I am currently doing phonon runs for the MgZn2 system.
Initially my command for running ph.x on 4 processors was

mpirun -np 4 ph.x -ndiag 2 -in mgzn2.in > mgzn2.out

which gave the following details in the output file:
     Parallel version (MPI), running on     4 processors
     R & G space division:  proc/nbgrp/npool/nimage =       4
     ...
     Alpha used in Ewald sum =   2.8000
     PHONON       :     9h12m CPU        9h18m WALL

Observing that the band-structure calculation itself took 9 hours,
the whole calculation would take much longer,
so I decided to use all 8 cores:

mpirun -np 8 ph.x -nk 2 -in mgzn2-ph.in > mgzn2-ph.out

     Parallel version (MPI), running on     8 processors
     R & G space division:  proc/nbgrp/npool/nimage =       8
     ...
     Alpha used in Ewald sum =   2.8000
     PHONON       :     5h18m CPU        5h21m WALL

Does anyone have an idea of which combination of parallelization
options would give a much faster calculation?
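For example (just a sketch of the options I am aware of from the ph.x
documentation; whether they actually help on this machine is an
assumption on my part), I was wondering about combinations such as:

# k-point pools combined with parallel diagonalization
# (-nd should be a square number, here 4 for the 4 processors per pool)
mpirun -np 8 ph.x -nk 2 -nd 4 -in mgzn2-ph.in > mgzn2-ph.out

# image parallelization over q points / irreducible representations,
# followed by a final run without images (with recover=.true. in the
# input) to collect the partial results
mpirun -np 8 ph.x -ni 2 -in mgzn2-ph.in > mgzn2-ph.out
mpirun -np 8 ph.x -in mgzn2-ph.in > mgzn2-ph-collect.out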

I also have a question regarding the k points calculated in the phonon run.
In the SCF calculation for MgZn2 the number of k points was 60, but for
the same input the phonon calculation reports 312 k points. Why is that
the case?

Lastly, for the phonon calculation the number of symmetries found is 24,
with inversion, but for the q-point calculation the symmetries reduce to 9.
Isn't the Brillouin zone of the hcp structure also hcp?
   24 Sym. Ops., with inversion, found

     Computing dynamical matrix for
                    q = (   0.0000000  -0.5773503   0.0000000 )

      9 Sym.Ops. (with q -> -q+G )

Thanks & Regards,

Adwait Mevada,
Ph.D. Student,
Gujarat University,
India.

