[QE-users] File-write problems in neb.x (6.4 and higher)
Ian Shuttleworth
shuttleworth.ian at gmail.com
Sun Apr 14 12:11:52 CEST 2019
Dear QE community,
I am encountering a problem with neb.x. The code executes fine for the
first iteration only when compiled with version 6.4 and above; I am
currently focusing on the 6.4.1 distribution.
neb.x calculates the first iteration with no problems but then fails to
restart for the second, i.e. the 'out' file looks like:
Program NEB v.6.4.1 starts on 13Apr2019 at 22:49:32
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
"P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details
at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 360 processors
MPI processes distributed on 15 nodes
R & G space division: proc/nbgrp/npool/nimage = 360
parsing_file_name: test.in
Reading input from pw_1.in
file C.pz-n-kjpaw_psl.1.0.0.UPF: wavefunction(s) 2S 2P
renormalized
file H.pz-kjpaw_psl.1.0.0.UPF: wavefunction(s) 1S
renormalized
file Pt.pz-n-kjpaw_psl.1.0.0.UPF: wavefunction(s) 6S 5D
renormalized
Reading input from pw_2.in
file C.pz-n-kjpaw_psl.1.0.0.UPF: wavefunction(s) 2S 2P
renormalized
file H.pz-kjpaw_psl.1.0.0.UPF: wavefunction(s) 1S
renormalized
file Pt.pz-n-kjpaw_psl.1.0.0.UPF: wavefunction(s) 6S 5D
renormalized
initial path length = 7.0802 bohr
initial inter-image distance = 1.1800 bohr
string_method = neb
restart_mode = from_scratch
opt_scheme = broyden
num_of_images = 7
nstep_path = 10
CI_scheme = no-CI
first_last_opt = F
use_freezing = F
ds = 2.0000 a.u.
k_max = 0.3000 a.u.
k_min = 0.2000 a.u.
suggested k_max = 0.1542 a.u.
suggested k_min = 0.1028 a.u.
path_thr = 0.1000 eV / A
------------------------------ iteration 1
------------------------------
tcpu = 6.1 self-consistency for image 1
tcpu = 1455.9 self-consistency for image 2
tcpu = 2760.1 self-consistency for image 3
tcpu = 3665.8 self-consistency for image 4
tcpu = 4881.4 self-consistency for image 5
tcpu = 6379.5 self-consistency for image 6
tcpu = 7531.2 self-consistency for image 7
activation energy (->) = 0.533763 eV
activation energy (<-) = 0.874959 eV
image energy (eV) error (eV/A) frozen
1 -990477.5110581 0.022280 T
2 -990477.3291278 0.688721 F
3 -990477.0457248 0.978578 F
4 -990476.9772952 1.201974 F
5 -990477.2260733 1.146443 F
6 -990477.6294736 0.797173 F
7 -990477.8522547 0.023965 T
path length = 7.080 bohr
inter-image distance = 1.180 bohr
------------------------------ iteration 2
------------------------------
tcpu = 8726.4 self-consistency for image 2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine pw_readschemafile (1):
xml data file not found
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping …
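For reference, the path settings echoed in the header above correspond to a &PATH namelist of roughly this form (a sketch reconstructed from the printed log values, wrapped in the BEGIN_PATH_INPUT markers of the neb.x input format; it is not my literal input file):

BEGIN_PATH_INPUT
&PATH
  string_method  = 'neb'
  restart_mode   = 'from_scratch'
  opt_scheme     = 'broyden'
  num_of_images  = 7
  nstep_path     = 10
  CI_scheme      = 'no-CI'
  first_last_opt = .false.
  use_freezing   = .false.
  ds             = 2.0D0
  k_max          = 0.3D0
  k_min          = 0.2D0
  path_thr       = 0.1D0
/
END_PATH_INPUT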
I've checked the basics:
(1) the input file to neb.x has: restart_mode = 'from_scratch'
(2) the input file to neb.x has: nosym = .true.
(in case the k-point distribution had changed between iterations, though
this is a low-symmetry system, so it should be OK for NEB)
(3) the first and last images were optimised using pw.x from the same
distribution, so I am reasonably sure it's not a problem with the geometry
or the pseudopotentials, for example.
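(As a sketch, the nosym setting in check (2) sits in the &SYSTEM namelist of the engine section of the neb.x input; the other &SYSTEM entries are elided here:)

&SYSTEM
  ...
  nosym = .true.
/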
Attempting the simulation with an earlier distribution, but with a similar
compile strategy, produces similar problems at the start of the second
iteration, so I'm inclined to think it's an I/O problem. I've tried
compiling with hdf5, but that doesn't seem to have fixed the problem. I'm
compiling on the Archer HPC service in the UK [
http://www.archer.ac.uk/about-archer/ ], which is a Cray XC30. My compile
script (without hdf5) is:
module swap PrgEnv-cray PrgEnv-intel
module load fftw
export CC=cc
export FC=ftn
export F77=ftn
export F90=ftn
./configure \
LDFLAGS="-static -I/opt/intel/parallel_studio_xe_2017_ce/mkl/include/
-I/opt/intel/parallel_studio_xe_2017_ce/mkl/include/intel64/lp64/" \
BLAS_LIBS="/opt/intel/parallel_studio_xe_2017_ce/mkl/lib/intel64/libmkl_sequential.a
/opt/intel/parallel_studio_xe_2017_ce/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a
-Wl,--end-group" \
LAPACK_LIBS="/opt/intel/parallel_studio_xe_2017_ce/mkl/lib/intel64/libmkl_intel_lp64.a
/opt/intel/parallel_studio_xe_2017_ce/mkl/lib/intel64/libmkl_core.a" \
SCALAPACK_LIBS="/opt/intel/parallel_studio_xe_2017_ce/mkl/lib/intel64/libmkl_scalapack_lp64.a
-Wl,--start-group" \
FFT_LIBS="" \
--prefix=/work/e05/e05/ian/qe
make all
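For comparison, an hdf5-enabled variant of the above would look something like the following (a sketch only: the cray-hdf5-parallel module name and the use of HDF5_DIR are assumptions about the Cray environment, not taken from my actual script, and may need adjusting for a given site):

```shell
# Sketch of an hdf5-enabled build, assuming the Cray programming
# environment provides a cray-hdf5-parallel module that exports
# HDF5_DIR; adjust the module name and path for your site.
module swap PrgEnv-cray PrgEnv-intel
module load fftw
module load cray-hdf5-parallel

export CC=cc FC=ftn F77=ftn F90=ftn

# QE's configure accepts --with-hdf5=<hdf5 install prefix>
./configure \
    --with-hdf5="$HDF5_DIR" \
    --prefix=/work/e05/e05/ian/qe
make all
```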
I've tried a number of variations on the above, using various combinations
of the '--with-hdf5' configure option and the associated module loads, but
nothing seems to be 'catching'. So my questions are:
(1) Am I going in totally the wrong direction in thinking hdf5 will solve
these problems? If I am, what else could I try?
(2) If I'm not in the wrong direction, does anyone have suggestions for
adjustments I could try in my input file?
With kindest thanks
Ian Shuttleworth
(Physics / Nottingham Trent University)