[Pw_forum] Graphene phonon calculations too slow

Lorenzo Paulatto lorenzo.paulatto at impmc.upmc.fr
Thu May 26 09:44:53 CEST 2016


For an el-ph calculation you need matching grids of (electronic) k and 
(vibrational) q points, but the q points can be safely obtained from a coarse 
grid (4x4x1 or 8x8x1 for a 2D system like graphene) using Fourier 
interpolation (q2r + matdyn). 
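For instance, assuming ph.x has written the coarse-grid dynamical matrices to 
graphene.dyn* files (the file names here are only illustrative), the 
interpolation step would look roughly like this. First q2r.x, which 
Fourier-transforms the dynamical matrices into real-space force constants:

 &input
   fildyn='graphene.dyn', zasr='simple', flfrc='graphene.fc'
 /

then matdyn.x, which interpolates the force constants back onto a fine 
uniform q grid (here 32x32x1):

 &input
   asr='simple', flfrc='graphene.fc', flfrq='graphene.freq',
   dos=.true., nk1=32, nk2=32, nk3=1
 /

The 'simple' acoustic sum rule is just an example choice; check the q2r and 
matdyn documentation for the options appropriate to your el-ph workflow.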

On the other hand, the phonons on this coarse q grid require a dense 
electronic k-point grid, of the order of 32x32x1, to be well converged. 

So: compute a coarse grid of phonon q-points with a fine grid of k-points, then 
interpolate the coarse grid to the fine grid and do el-ph with that.
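
A minimal sketch of the first step (all values are placeholders to be 
converged, not recommendations): run the scf with a dense k-point mesh, e.g.

 K_POINTS automatic
  32 32 1 0 0 0

then run ph.x on the coarse q grid:

 Phonons of graphene, coarse q grid
  &inputph
   prefix='graphene'
   tr2_ph=1.0d-16
   ldisp=.true.
   nq1=4, nq2=4, nq3=1
   amass(1)=12.01
   fildyn='graphene.dyn'
  /

As Paolo notes below, tuning the parallelism also matters: with 48 processors 
and this many k-points, more pools (e.g. mpirun -np 48 ph.x -npool 8 ...) may 
well help, as long as the number of pools divides the number of processors.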


On Wednesday, 25 May 2016 19:47:21 CEST Paolo Giannozzi wrote:
> Are you sure you need that many k-points and such a big cell in the z
> direction?
> Attached is a quick-and-dirty test. Parameters are likely far from yielding
> converged values, but it takes 50 minutes on a single processor of my
> 2007-vintage PC. Before starting a full-fledged calculation at convergence,
> you should experiment a bit. Also, tuning the parallelism may make a
> significant difference.
> 
> Paolo
> 
> On Wed, May 25, 2016 at 5:13 PM, Abualnaja, Faris <f.abualnaja15 at imperial.ac.uk> wrote:
> > Dear all,
> > 
> > 
> > I am running phonon calculations on graphene, and the calculations are
> > taking days to complete! The calculations run in parallel on a
> > supercomputer. I am using QE version 4.3.1; the k-point mesh is a 12x12x1
> > MP grid, which reduces to 19 points in the irreducible wedge. My phonon
> > input is below:
> > 
> > 
> > Phonon Dispersion of graphene
> >  &inputph
> >   recover=.true.
> >   tr2_ph=1.0d-16
> >   ldisp=.true.
> >   nq1=12, nq2=12, nq3=1
> >   prefix='graphene'
> >   amass(1)=12.01
> >   fildyn='graphene.dyn'
> >  /
> > 
> > and part of the output is shown here:
> > 
> > 
> > Parallel version (MPI), running on    48 processors
> >
> >      K-points division:     npool     =    3
> >      R & G space division:  proc/pool =   16
> >
> > Computing dynamical matrix for
> >                     q = (   0.0000000   0.0000000   0.0000000 )
> >
> >      25 Sym.Ops. (with q -> -q+G )
> >
> >      G cutoff =  245.0079  (  16214 G-vectors)     FFT grid: ( 32, 32,600)
> >      number of k points=   217  Methfessel-Paxton smearing, width (Ry)=  0.0100
> >
> > There are   4 irreducible representations
> >
> >      Representation     1      1 modes -B_2g  To be done
> >      Representation     2      2 modes -E_2g  To be done
> >      Representation     3      1 modes -A_2u  To be done
> >      Representation     4      2 modes -E_1u  To be done
> >
> > G cutoff =  245.0079  ( 259459 G-vectors)     FFT grid: ( 32, 32,600)
> >
> >      Largest allocated arrays     est. size (Mb)     dimensions
> >         Kohn-Sham Wavefunctions         0.34 Mb     (   2220,  10)
> >         NL pseudopotentials             0.07 Mb     (   2220,   2)
> >         Each V/rho on FFT grid          0.59 Mb     (  38912)
> >         Each G-vector array             0.12 Mb     (  16224)
> >         G-vector shells                 0.05 Mb     (   6856)
> >
> >      Largest temporary arrays     est. size (Mb)     dimensions
> >         Auxiliary wavefunctions         1.35 Mb     (   2220,  40)
> >         Each subspace H/S matrix        0.02 Mb     (     40,  40)
> >         Each <psi_i|beta_j> matrix      0.00 Mb     (      2,  10)
> >
> >      The potential is recalculated from file :
> >      ./_ph0/graphene.save/charge-density.dat
> >
> >      Starting wfc are    8 atomic +    2 random wfc
> >
> >      total cpu time spent up to now is      2.64 secs
> >      per-process dynamical memory:    15.0 Mb
> >
> >      Band Structure Calculation
> >      Davidson diagonalization with overlap
> >
> >      c_bands:  1 eigenvalues not converged
> >      c_bands:  1 eigenvalues not converged
> >      c_bands:  1 eigenvalues not converged
> >      c_bands:  1 eigenvalues not converged
> >      c_bands:  1 eigenvalues not converged
> >
> >      ethr =  1.25E-10,  avg # of iterations = 63.7
> >
> >      total cpu time spent up to now is   4392.92 secs
> >
> > End of band structure calculation
> > 
> > Any suggestions on how to speed up the process, or on anything I might be
> > missing? Thank you in advance for your kind help and time.
> > 
> > Regards,
> > 
> > Faris
> > Imperial
> > 


-- 
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4é16 Boîte courrier 115, 
4 place Jussieu, 75252 Paris Cedex 05



