[Pw_forum] GIPAW fails for large system
Paweł Rejmak
rejmakp at gmail.com
Fri Mar 4 11:04:40 CET 2011
Hi everyone,
I am a beginning user of QE, and I am trying to calculate NMR
shieldings for Si atoms in hydrated calcium silicates using GIPAW.
Unfortunately, all calculations for large systems, up to 200 atoms per
unit cell, have failed. I have completed GIPAW calculations for
smaller systems (up to 100 atoms, Davidson diagonalization), so I am
(almost) sure that my inputs, SCF calculations, pseudopotentials
(taken from
http://www.impmc.jussieu.fr/~software/gipaw/pseudopotentials.html),
etc. are fine, and that the problem stems simply from the size of the
studied system.
With CG diagonalization I get errors like this:
task # 0
from cdiaghg : error # (various numbers in the range 255-790)
diagonalization (ZHEGV*) failed
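(For what it is worth: if I understand the source correctly, cdiaghg
solves the generalized subspace eigenproblem

    H c = epsilon S c

with LAPACK's ZHEGV. According to the LAPACK documentation, an INFO
value up to the matrix dimension means the eigensolver failed to
converge, while a larger value means the overlap matrix S is not
positive definite; without knowing the subspace dimension I cannot
tell which case my numbers correspond to.)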
With Davidson diagonalization I got, for example:
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line     Source
gipaw.x            400000000062C5E0  Unknown            Unknown  Unknown
gipaw.x            40000000006568E0  Unknown            Unknown  Unknown
gipaw.x            4000000000656110  Unknown            Unknown  Unknown
gipaw.x            4000000000656B70  Unknown            Unknown  Unknown
gipaw.x            4000000000655310  Unknown            Unknown  Unknown
gipaw.x            4000000000652F50  Unknown            Unknown  Unknown
gipaw.x            40000000003C4660  fft_scalar_mp_cft  761      fft_scalar.f90
gipaw.x            40000000003BC4B0  fft_parallel_mp_t  142      fft_parallel.f90
gipaw.x            40000000000F86C0  cft3s_             52       cft3s.f90
gipaw.x            4000000000038740  h_psiq_            74       h_psiq.f90
gipaw.x            4000000000042190  ch_psi_all_        60       ch_psi_all.f90
gipaw.x            400000000003B360  cgsolve_all_       190      cgsolve_all.f90
gipaw.x            4000000000037360  greenfunction_     132      greenfunction.f90
gipaw.x            4000000000057EB0  suscept_crystal_I  319      suscept_crystal.f90
gipaw.x            400000000004FC40  suscept_crystal_   125      suscept_crystal.f90
gipaw.x            4000000000024B90  MAIN__             77       gipaw_main.f90
gipaw.x            4000000000004E90  Unknown            Unknown  Unknown
libc.so.6.1        200000000052FC20  Unknown            Unknown  Unknown
gipaw.x            4000000000004C80  Unknown            Unknown  Unknown
Is there any solution to this problem? Should I simply increase the
amount of memory per process? (So far I have tried up to 30 GB per
process.) Or might it help to change some constant (e.g. a fixed
array dimension) in the source code and recompile? Or is my system
simply too large, and should I abandon all hope? :(
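One thing I plan to try (just a sketch, with made-up task counts and
file names, and assuming gipaw.x accepts the same command-line
parallelization flags as pw.x) is to spread the job over more MPI
tasks, so that each task holds a smaller slice of the plane waves,
and to force serial subspace diagonalization with -ndiag 1:

# made-up task count and file names; -ndiag 1 disables the
# parallel (ScaLAPACK) subspace diagonalization
mpirun -np 64 pw.x -ndiag 1 -in cement.scf.in > cement.scf.out
mpirun -np 64 gipaw.x -ndiag 1 < cement.gipaw.in > cement.gipaw.out

(With a single k point there is nothing to gain from -npool.)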
Thank you in advance for your help
Pawel Rejmak, Donostia International Physics Center
P. S. Inputs below.
########
PW input:
&CONTROL
calculation = 'scf' ,
restart_mode = 'from_scratch' ,
outdir = 'tmp' ,
wf_collect = .TRUE. ,
pseudo_dir = '/scratch/prejmak/qecement1/pseudos' ,
disk_io = 'default' ,
verbosity = 'high' ,
/
&SYSTEM
ibrav = 14 ,
A = 6.746900,
B = 14.952840,
C = 26.451853,
cosAB = -0.565691,
cosAC = -0.078344,
cosBC = -0.018083,
nat = 194,
ntyp = 4,
ecutwfc = 80 ,
input_dft = 'PBE' ,
/
&ELECTRONS
electron_maxstep = 300,
conv_thr = 1.d-6 ,
startingpot = 'atomic' ,
startingwfc = 'atomic' ,
mixing_mode = 'local-TF' ,
mixing_beta = 0.1 ,
diagonalization = 'cg' ,
/
ATOMIC_SPECIES
Si 28.08550 Si.UPF
O 15.99940 O.UPF
Ca 40.07800 Ca.UPF
H 1.00794 H.UPF
ATOMIC_POSITIONS crystal
Ca 0.24000 0.83000 0.55000
...
K_POINTS automatic
1 1 1 0 0 0
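(For the Davidson runs I am also considering shrinking the Davidson
work space to save memory, along these lines; as far as I understand
the input documentation, diago_david_ndim = 2 is the smallest allowed
value and trades RAM for extra iterations:)
&ELECTRONS
    ! ...same settings as above, but with:
    diagonalization = 'david' ,
    diago_david_ndim = 2 ,
/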
###########
GIPAW input:
&inputgipaw
job = 'nmr'
prefix = 'pwscf'
tmp_dir = 'tmp/'
iverbosity = 1
q_gipaw = 0.01
spline_ps = .true.
isolve = 0
use_nmr_macroscopic_shape = .true.
/
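(Before asking for even more memory, I will also re-check the
estimate that pw.x itself prints; if I remember correctly, the SCF
output contains a "per-process dynamical memory" line, so something
like the following should show it, with cement.scf.out being the
made-up output file name from above:

grep 'per-process dynamical memory' cement.scf.out

I do not know how reliable that estimate is for gipaw.x, though.)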
--
Dr. Paweł Rejmak
Donostia International Physics Center
Paseo Manuel de Lardizabal 4
Donostia-San Sebastian, Spain