[Pw_forum] Graphene phonon calculations too slow

Abualnaja, Faris f.abualnaja15 at imperial.ac.uk
Wed May 25 17:13:28 CEST 2016


Dear all,


I am running phonon calculations on graphene, and they are taking days to complete! The calculations run in parallel on a supercomputer. I am using QE version 4.3.1, and the q-point grid is a 12x12x1 Monkhorst-Pack mesh, which reduces to 19 irreducible q-points. My phonon input is below:


Phonon Dispersion of graphene
 &inputph
  recover=.true.
  tr2_ph=1.0d-16
  ldisp=.true.
  nq1=12, nq2=12, nq3=1
  prefix='graphene'
  amass(1)=12.01
  fildyn='graphene.dyn'
 /
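
In case it helps frame the question: from the INPUT_PH documentation I understand that a dispersion run can be split into independent jobs over q-points with the start_q/last_q variables (assuming my version supports them). A sketch of what I have in mind for the first q-point only, with everything else unchanged:

Phonon Dispersion of graphene, first q-point only
 &inputph
  recover=.true.
  tr2_ph=1.0d-16
  ldisp=.true.
  nq1=12, nq2=12, nq3=1
  start_q=1, last_q=1
  prefix='graphene'
  amass(1)=12.01
  fildyn='graphene.dyn'
 /

Each of the 19 irreducible q-points would then run as a separate job, with the .dyn files collected at the end. Is that the recommended route, or is something else wrong with my setup?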


and part of the output is shown here:


Parallel version (MPI), running on    48 processors

     K-points division:     npool     =    3
     R & G space division:  proc/pool =   16
                  :
                  :
Computing dynamical matrix for
                    q = (   0.0000000   0.0000000   0.0000000 )

     25 Sym.Ops. (with q -> -q+G )
     G cutoff =  245.0079  (  16214 G-vectors)     FFT grid: ( 32, 32,600)
     number of k points=   217  Methfessel-Paxton smearing, width (Ry)=  0.0100
                  :
                  :


There are   4 irreducible representations

     Representation     1      1 modes -B_2g To be done
     Representation     2      2 modes -E_2g To be done
     Representation     3      1 modes -A_2u To be done
     Representation     4      2 modes -E_1u To be done
                  :
                  :

G cutoff =  245.0079  ( 259459 G-vectors)     FFT grid: ( 32, 32,600)


     Largest allocated arrays     est. size (Mb)     dimensions
        Kohn-Sham Wavefunctions         0.34 Mb     (   2220,  10)
        NL pseudopotentials             0.07 Mb     (   2220,   2)
        Each V/rho on FFT grid          0.59 Mb     (  38912)
        Each G-vector array             0.12 Mb     (  16224)
        G-vector shells                 0.05 Mb     (   6856)
     Largest temporary arrays     est. size (Mb)     dimensions
        Auxiliary wavefunctions         1.35 Mb     (   2220,  40)
        Each subspace H/S matrix        0.02 Mb     (     40,  40)
        Each <psi_i|beta_j> matrix      0.00 Mb     (      2,  10)


     The potential is recalculated from file :
     ./_ph0/graphene.save/charge-density.dat

     Starting wfc are    8 atomic +    2 random wfc

     total cpu time spent up to now is      2.64 secs

     per-process dynamical memory:    15.0 Mb

     Band Structure Calculation
     Davidson diagonalization with overlap

     c_bands:  1 eigenvalues not converged
     c_bands:  1 eigenvalues not converged
     c_bands:  1 eigenvalues not converged
     c_bands:  1 eigenvalues not converged
     c_bands:  1 eigenvalues not converged

     ethr =  1.25E-10,  avg # of iterations = 63.7

     total cpu time spent up to now is   4392.92 secs

     End of band structure calculation
                  :
                  :

Any suggestions on how to speed up the process, or on anything I might be missing? Thank you in advance for your kind help and time.

Regards,

Faris
Imperial
