[QE-users] [neb.x] How to lessen I/O?

Takahiro Chiba takahiro_chiba at eis.hokudai.ac.jp
Wed Mar 17 12:20:40 CET 2021


Dear QE users,

What can I do to reduce the I/O generated by neb.x? The 'outdir'
directory grows very quickly, and I would appreciate advice from
experienced users or developers.

Issue:
Although image parallelism (neb.x -ni $NumOfNodes) scales well with
the number of nodes, even over gigabit Ethernet, the runs generate
massive I/O to 'outdir'.
1. Regardless of whether the 'wfcdir' option is set, the wavefunctions
from each MPI process (pwscf.wfc1, pwscf.wfc2, pwscf.wfc3, ...,
pwscf.wfc$ProcPerNode) are written under 'outdir' for every image
except the first and the last. Is this a bug? (See the input sketch
after this list.)
2. When the 'disk_io' option is set to 'nowf', the NEB calculation
fails at the beginning of the second iteration because the wavefunction
files cannot be opened for writing. To run properly, 'low' or above is
necessary. I suppose the developers expect this, but a warning at
startup would be better.
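
For concreteness, here is a minimal sketch of the engine part of a NEB
input with the options in question; the paths and values below are
placeholders for illustration, not the ones from my actual input file:

  BEGIN_ENGINE_INPUT
    &CONTROL
      prefix  = 'pwscf'
      outdir  = '/lustre/scratch/neb_test'  ! placeholder; shared file system
      wfcdir  = '/tmp/neb_scratch'          ! placeholder; node-local scratch,
                                            ! apparently ignored (issue 1)
      disk_io = 'low'                       ! 'nowf' fails at the 2nd
                                            ! iteration (issue 2)
    /
    ! ... &SYSTEM, &ELECTRONS, atomic cards, etc. as usual ...
  END_ENGINE_INPUT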

These issues can be reproduced on:
a. QE 6.7, Intel 19.5.281 (both MPI and compilers), CentOS 7, with Lustre
a. QE 6.5, Intel 19.5.281 (both MPI and compilers), CentOS 7, with Lustre
b. QE 6.7, Intel 19.5.281 (both MPI and compilers), CentOS 6, without Lustre
c. QE 6.5, Intel 18.1.038 (both MPI and compilers), CentOS 6, without Lustre
( a = Subsystem A on this page:
https://www.hucc.hokudai.ac.jp/en/supercomputer/sc-overview/ )
According to the batch queue system on Subsystem A, the job wrote
32.7 GB/hour when running 4 images with 40 processes each. The command
was "mpiexec.hydra -n 160 neb.x -ni 4 -i $INP >>$OUT".
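
For reference, that corresponds to a job layout along these lines
(scheduler directives omitted; the file names are placeholders):

  #!/bin/bash
  # 4 nodes x 40 MPI ranks per node = 160 processes in total;
  # -ni 4 splits them into 4 image groups, i.e. one group per node
  # with the default contiguous rank placement.
  INP=neb.in    # placeholder for the actual input file
  OUT=neb.out   # placeholder for the actual output file
  mpiexec.hydra -n 160 neb.x -ni 4 -i $INP >>$OUT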

Again, my question is: what can I do to lessen I/O, other than the
'disk_io' and 'wfcdir' options?

---Sender---
Takahiro Chiba
Undergraduate at Hokkaido University
takahiro_chiba at eis.hokudai.ac.jp
-----
[Attachment: 5itr_OHrot.neb.in (1539 bytes), archived at
http://lists.quantum-espresso.org/pipermail/users/attachments/20210317/3f408889/attachment.obj]


More information about the users mailing list