[QE-users] NEB under lelfield
Masahito YOSHINO
yoshino_m at aol.com
Wed Sep 3 10:44:33 CEST 2025
Dear Pietro,
Thank you for your prompt response.
After applying the patch, the situation improved somewhat, but the following problem now occurs.
We are calculating with “num_of_images = 5”, where the fixed-end images are 1 and 5, and the intermediate images are 2, 3, and 4.
Before applying the patch, the line I mentioned earlier ("OH BOY ...") appeared during the calculation of the first image (image 1).
After applying the patch, the calculations for image 1 and image 5 completed successfully.
However, the same line now appears during the subsequent calculation of image 3.
The calculation for image 2 seems to proceed without issues, but the process then appears to hang, presumably waiting for image 3 to finish.
For running the calculations, we use a command like:
`mpirun -np 48 -ni 3 -inp xxxxx.in > xxxxx.out`
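For reference, written out in full with the neb.x executable, the invocation has the form
`mpirun -np 48 neb.x -ni 3 -inp xxxxx.in > xxxxx.out`
where -np is an mpirun option and -ni / -inp are parsed by the QE executable itself.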
Are there any command-line options we should be cautious about?
Additionally, whether CI_scheme is set to “auto” or “no-CI” seems to affect when the “OH BOY” line appears.
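For completeness, the path-related part of our input looks roughly like this (only num_of_images and CI_scheme are taken from the description above; the remaining values are placeholders):
  &PATH
    string_method = 'neb'
    num_of_images = 5
    CI_scheme     = 'no-CI'   ! 'auto' was also tried
    opt_scheme    = 'broyden'
  /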
We would appreciate your advice.
Best regards,
Masahito YOSHINO
On Mon, Sep 1, 2025 at 19:04, Pietro Davide Delugas <pdelugas at sissa.it> wrote:
Hello,
It is a problem with the variable nproc, which represents the total number of processes, whereas here you need to consider the processes in the band group of each image. Could you apply this patch and retry?
tentative_patch.diff
To apply the patch:
cd PW/src/
patch < tentative_patch.diff
It has to be executed in the PW/src directory of your QE installation.
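(After patching, the code has to be recompiled for the change to take effect, e.g. with "make neb" from the top-level QE directory, before rerunning.)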
Let us know if it works.
Thank you, and best regards,
Pietro
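As an illustration of the nproc vs. band-group distinction described above, here is a toy program (not QE source code; it only reuses the 48-rank, 3-image setup of this run and assumes no further pool/band splitting, so each image's band group has 16 ranks):

  ! Toy illustration only -- not QE source.
  program bgrp_index_demo
    implicit none
    integer, parameter :: nproc      = 48             ! world size (mpirun -np 48)
    integer, parameter :: nimage     = 3              ! neb.x -ni 3
    integer, parameter :: nproc_bgrp = nproc/nimage   ! ranks seen by one image
    integer :: table(nproc)                           ! sized with the world count
    integer :: iproc

    table = -2086101920            ! stand-in for uninitialized memory
    do iproc = 1, nproc_bgrp       ! one image only ever fills its own 16 entries
       table(iproc) = iproc
    end do
    do iproc = 1, nproc            ! ... but a loop over the world size
       if (table(iproc) < 0) print *, 'OH BOY', table(iproc)   ! reads the junk
    end do
  end program bgrp_index_demo

The point of the sketch is only that dimensioning or looping with the total process count, where the per-image band-group count is meant, makes ranks read entries that nobody filled, which matches the large junk integers printed after "OH BOY".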
From: users <users-bounces at lists.quantum-espresso.org> on behalf of Masahito YOSHINO via users <users at lists.quantum-espresso.org>
Sent: Monday, September 1, 2025 05:49
To: users at lists.quantum-espresso.org <users at lists.quantum-espresso.org>
Subject: [QE-users] NEB under lelfield
Dear all,
I am using QE version 7.3.1 on an Ubuntu 22.04 environment. I am also using Intel oneAPI for the compiler, MPI, etc.
I am performing neb.x calculations under a homogeneous finite electric field based on the modern theory of polarization (lelfield = .true.).
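For context, the finite-field part of the pw.x engine input is of this form (only lelfield itself is taken from the description above; the field magnitude and the other values are placeholders or defaults):
  &CONTROL
    lelfield       = .true.
    nberrycyc      = 1
    efield_cart(3) = 0.001
  /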
When calculating images one at a time using commands like “mpirun -np 16 -inp xxxxx.in > xxxxx.out”, the calculation proceeds without issues. However, when calculating images simultaneously using commands like “mpirun -np 48 -ni 3 -inp xxxxx.in > xxxxx.out”, the following problem occurs.
Below is an excerpt from PW.out for image 1:
.....
End of self-consistent calculation
k = 0.0000 0.0000 0.0000 ( 13805 PWs) bands (ev):
-46.0910 -46.0650 -45.9979 -45.9869 -45.9276 -45.9010 -45.8950 -45.8401
-22.7314 -22.6129 -22.6032 -22.5686 -22.5587 -22.5497 -22.5272 -22.5158
.....
8.8268 8.9163 8.9827 9.3431 9.4554 9.4892 9.5100 9.5884
highest occupied level (ev): 9.5884
! total energy = -2303.12093350 Ry
estimated scf accuracy < 0.00000735 Ry
The total energy is the sum of the following terms:
one-electron contribution = -624.09930643 Ry
hartree contribution = 489.01939044 Ry
xc contribution = -463.47927866 Ry
ewald contribution = -1696.38943348 Ry
convergence has been achieved in 11 iterations
Writing all to output data dir xxxxx/xxx_1/xxx.save/ :
XML data file, charge density, pseudopotentials, collected wavefunctions
negative rho (up, down): 5.389E-01 0.000E+00
OH BOY -2086101920
OH BOY -1107549545
OH BOY 189234244
OH BOY -1106211815
OH BOY -1813111808
OH BOY -1114285954
.....
Reviewing the source code, it appears there may be an issue with the values of `aux_rcv_ind(ig,iproc)` or `aux_g_mpi_ind(ig,mpime+1)` in `forces_bp_efield.f90`, but I could not trace it any further.
I would appreciate your advice on this issue.
Masahito YOSHINO
JAPAN