[Pw_forum] Graphene phonon calculations too slow
Paolo Giannozzi
p.giannozzi at gmail.com
Wed May 25 19:47:21 CEST 2016
Are you sure you need that many k-points and such a big cell in the z
direction?
Attached is a quick-and-dirty test. The parameters are likely far from
yielding converged values, but it runs in 50 minutes on a single processor of
my 2007-vintage PC. Before starting a full-fledged calculation at convergence,
you should experiment a bit. Also, tuning the parallelism may make a
significant difference.
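For illustration only (this is not the attached graph0.j): a reduced-cost test
could look like the sketch below. The lattice constant, vacuum thickness,
cutoff, k- and q-grids, thresholds, and pseudopotential file name are
placeholder assumptions to be checked for convergence, not recommended values.

pw.x SCF input:

&control
   calculation = 'scf'
   prefix      = 'graphene'
   outdir      = './tmp'
   pseudo_dir  = './pseudo'
/
&system
   ibrav       = 4
   celldm(1)   = 4.65        ! a = 2.46 A expressed in bohr
   celldm(3)   = 4.0         ! only ~10 A of cell height for the first tests
   nat         = 2
   ntyp        = 1
   ecutwfc     = 40.0        ! to be converged
   occupations = 'smearing'
   smearing    = 'mp'
   degauss     = 0.01
/
&electrons
   conv_thr = 1.0d-10
/
ATOMIC_SPECIES
 C  12.01  C.pz-vbc.UPF
ATOMIC_POSITIONS crystal
 C  0.000000  0.000000  0.000000
 C  0.333333  0.666667  0.000000
K_POINTS automatic
 8 8 1 0 0 0

ph.x input, with a coarser q-grid and a looser threshold than in your run:

phonons of graphene, test settings
&inputph
   prefix   = 'graphene'
   outdir   = './tmp'
   fildyn   = 'graphene.dyn'
   amass(1) = 12.01
   tr2_ph   = 1.0d-14        ! tighten for production
   ldisp    = .true.
   nq1 = 4, nq2 = 4, nq3 = 1
/

Run pw.x on the first input, then ph.x with the same prefix and outdir. On the
parallel side, with 217 k points and only 3 pools it may be worth trying more
pools and fewer processors per pool, e.g.

mpirun -np 48 ph.x -npool 12 < graphene.ph.in > graphene.ph.out

but the best split depends on your machine.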
Paolo
On Wed, May 25, 2016 at 5:13 PM, Abualnaja, Faris <
f.abualnaja15 at imperial.ac.uk> wrote:
> Dear all,
>
> I am running phonon calculations on graphene, and the calculations are
> taking days to complete. The calculations run in parallel on a
> supercomputer. I am using QE version 4.3.1; the k-point mesh is a 12x12x1
> Monkhorst-Pack grid, which reduces to 19 points in the irreducible wedge.
> My phonon input is below:
>
>
> Phonon Dispersion of graphene
> &inputph
>    recover  = .true.
>    tr2_ph   = 1.0d-16
>    ldisp    = .true.
>    nq1 = 12, nq2 = 12, nq3 = 1
>    prefix   = 'graphene'
>    amass(1) = 12.01
>    fildyn   = 'graphene.dyn'
> /
>
> and part of the output is shown here:
>
> Parallel version (MPI), running on 48 processors
> K-points division:  npool = 3
> R & G space division:  proc/pool = 16
> :
> :
> Computing dynamical matrix for
>   q = ( 0.0000000  0.0000000  0.0000000 )
> 25 Sym.Ops. (with q -> -q+G )
> G cutoff = 245.0079  ( 16214 G-vectors)  FFT grid: ( 32, 32, 600)
> number of k points= 217  Methfessel-Paxton smearing, width (Ry)= 0.0100
> :
> :
>
> There are 4 irreducible representations
>
> Representation 1   1 modes -B_2g   To be done
> Representation 2   2 modes -E_2g   To be done
> Representation 3   1 modes -A_2u   To be done
> Representation 4   2 modes -E_1u   To be done
> :
> :
> G cutoff = 245.0079  ( 259459 G-vectors)  FFT grid: ( 32, 32, 600)
>
> Largest allocated arrays       est. size (Mb)   dimensions
>    Kohn-Sham Wavefunctions          0.34 Mb     (  2220,  10)
>    NL pseudopotentials              0.07 Mb     (  2220,   2)
>    Each V/rho on FFT grid           0.59 Mb     ( 38912)
>    Each G-vector array              0.12 Mb     ( 16224)
>    G-vector shells                  0.05 Mb     (  6856)
> Largest temporary arrays       est. size (Mb)   dimensions
>    Auxiliary wavefunctions          1.35 Mb     (  2220,  40)
>    Each subspace H/S matrix         0.02 Mb     (    40,  40)
>    Each <psi_i|beta_j> matrix       0.00 Mb     (     2,  10)
>
> The potential is recalculated from file :
>    ./_ph0/graphene.save/charge-density.dat
>
> Starting wfc are 8 atomic + 2 random wfc
>
> total cpu time spent up to now is 2.64 secs
> per-process dynamical memory: 15.0 Mb
>
> Band Structure Calculation
> Davidson diagonalization with overlap
>
> c_bands: 1 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
> c_bands: 1 eigenvalues not converged
>
> ethr = 1.25E-10, avg # of iterations = 63.7
>
> total cpu time spent up to now is 4392.92 secs
>
> End of band structure calculation
> :
> :
>
> Do you have any suggestions on how to speed up the process, or on whether
> I am missing something? Thank you in advance for your kind help and time.
>
> Regards,
>
> Faris
> Imperial
>
--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
-------------- next part --------------
Attachment: graph0.j (1660 bytes)
URL: <http://lists.quantum-espresso.org/pipermail/users/attachments/20160525/238719e5/attachment.obj>