[QE-users] GRID parallel electron phonon coupling
Georg Eickerling
georg.eickerling at uni-a.de
Wed Jul 28 18:20:47 CEST 2021
Dear QE users,
I have a question concerning GRID parallel calculations for electron-phonon
couplings. I know this has been discussed on the list before, but I could not
find a conclusive answer:
I have a case with 24 irreps and 46 q-points, so this is going to take a
while. I have enough cores to run the 24 irreps on 24 machines at the same
time using 12 cores each. The machines are linked via slow network hardware.
My main question is: how can I parallelize this most efficiently?
1. I tried to use images => this fails with a "not implemented" error for
electron-phonon coupling.
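(For reference, what I mean is an invocation roughly along these lines; the core
counts and file names are only placeholders matching my 24 machines x 12 cores:
mpirun -np 288 ph.x -nimage 24 -input ph.elph.in > ph.elph.out
With electron_phonon set in the input, ph.x then stops with the "not implemented"
message.)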
2. I tried a GRID run via a nested q- and irrep-loop, launching all irrep jobs
in the same directory at once: this fails with I/O errors, since all jobs access the same status.xml.
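(The per-job inputs in this loop differed only in the q-point and irrep selection,
i.e. something like the following &inputph lines, with everything else taken from
the plain electron-phonon input:
   start_q   = $q,   last_q   = $q
   start_irr = $irr, last_irr = $irr
Since all of these jobs share the same outdir, they end up fighting over the same
status file.)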
3. I tried to adapt the GRID-based phonon calculation example, copying the
out-dirs for each run inside my nested q- and irrep-loop (the launch itself is
sketched after the snippet):
# per (q, irrep) job: give each run its own copy of the scratch data
mkdir $TMP_DIR/$q.$irr
cp -r $TMP_DIR/$PREFIX.* $TMP_DIR/$q.$irr
mkdir -p $TMP_DIR/$q.$irr/_ph0/$PREFIX.phsave
cp -r $TMP_DIR/_ph0/$PREFIX.phsave/* $TMP_DIR/$q.$irr/_ph0/$PREFIX.phsave
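Each (q, irrep) job is then launched from its own scratch directory, roughly like
this (a sketch only; the input template and the MPI line are placeholders, and the
commented-out settings are whatever the plain electron-phonon input uses):
cat > ph.$q.$irr.in << EOF
&inputph
   prefix    = '$PREFIX'
   outdir    = '$TMP_DIR/$q.$irr'
   ! remaining settings (fildyn, q-grid, electron_phonon, fildvscf, ...)
   ! exactly as in the plain electron-phonon input
   start_q   = $q,   last_q   = $q
   start_irr = $irr, last_irr = $irr
/
EOF
mpirun -np 12 ph.x -input ph.$q.$irr.in > ph.$q.$irr.out
with one such job running per machine.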
This resulted in a tremendous amount of disk usage, but the jobs eventually
completed. Afterwards, I collected the results in one directory via
for q in `seq 1 46`; do
  for irr in `seq 1 24`; do
    # collect the per-irrep dynamical-matrix and el-ph files into the central phsave dir
    \cp -f $TMP_DIR/$q.$irr/_ph0/$PREFIX.phsave/dynmat.$q.$irr.xml $TMP_DIR/_ph0/$PREFIX.phsave 2> /dev/null
    \cp -f $TMP_DIR/$q.$irr/_ph0/$PREFIX.phsave/elph.$q.$irr.xml $TMP_DIR/_ph0/$PREFIX.phsave 2> /dev/null
  done
  # the dynmat.$q.0.xml file is taken once per q-point from the first irrep's directory
  \cp -f $TMP_DIR/$q.1/_ph0/$PREFIX.phsave/dynmat.$q.0.xml $TMP_DIR/_ph0/$PREFIX.phsave 2> /dev/null
done
and finalized it by re-running ph.x with "recover=.true." added to the input.
This is a 1:1 adaptation of the corresponding example script.
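(Concretely, that final step is one more ph.x run in the original $TMP_DIR, with an
input like the per-job ones but without the start_q/last_q and start_irr/last_irr
selectors and with
   recover = .true.
so that, as in the example script, it picks up the collected .xml files and
completes the calculation from there.)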
As I will have to re-run similar jobs several times: is version 3 the best one can
do in terms of parallelization, and is it a valid procedure for electron-phonon calculations?
Thank you in advance for any hints and best regards,
Georg