[Pw_forum] job submission script

Jelle van Sijl jelle.van.sijl at falw.vu.nl
Tue Nov 11 10:17:04 CET 2008

Dear All,
I am having difficulty writing a good job submission script for running pwscf
in parallel on a cluster whose NFS performance is sometimes very slow.
Example job: 23h 4m CPU time for 23h 50m wall time.
User files are stored in /home/user (this is the slow part);
scratch space on each node is available through a variable ($TMPDIR) and is fast.
When I submit the job (see script below) from a directory on /home/user,
pwscf stores everything in that directory, and the repeated I/O operations
slow down the job. I have already set disk_io='low' and wf_collect=.true.
How can I work efficiently on the scratch space of each node, but still
find all output in /home/user after a successful run?
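
For reference, the relevant part of my &CONTROL namelist looks roughly like
this (calculation and prefix are placeholders; outdir defaults to the working
directory, which I suspect is where the slow I/O happens):

  &CONTROL
    calculation = 'scf'
    prefix      = 'job'
    outdir      = './'
    disk_io     = 'low'
    wf_collect  = .true.
  /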

thanks in advance,

Jelle van Sijl
PS: I replaced my login name on the cluster with 'user'.

drs. Jelle van Sijl
PhD student, Department of Petrology
Faculty of Earth and Life Sciences (FALW)
VU University Amsterdam, De Boelelaan 1085
1081 HV Amsterdam, The Netherlands
Phone +31 (0)20 5987403

# Job script for running parallel pwscf 4.0.1
# on lisa. Created 11-08-2008 by Jelle van Sijl
# This is a job requesting 4 cores.
# request 2 nodes, 2 cores per node and
# each node equipped with the infiniband network:
#PBS -l nodes=2:ppn=2:infiniband:cores2
#PBS -l walltime=01:00:00
#PBS -j oe

# Edit this part:
export INP=INPUT
export OUT=$INP.out   # output file; used by the mpiexec redirect below
# Leave the rest.
# ---- start of MVAPICH related code:
# test if ~/.mpd.conf exists, create it if not:
if [ ! -e ~/.mpd.conf ]; then
    # build a throwaway secret word from the user name and some process IDs
    echo MPD_SECRETWORD=$USER$RANDOM$PPID$$ > ~/.mpd.conf
    chmod 600 ~/.mpd.conf
fi
# determine the number of processes to start:
nprocs=`wc -l < $PBS_NODEFILE`
# determine the number of nodes:
nnodes=`sort -u $PBS_NODEFILE | wc -l`
# start the mpd daemons:
/usr/local/mvapich2-intel-0.9.8/bin/mpdboot -n $nnodes -f $PBS_NODEFILE
# ---- End of MVAPICH related code

export EXE=/home/user/espresso-4.0.1/bin/pw.x
export mpiexec=/usr/local/mvapich2-intel-0.9.8/bin/mpiexec
#export mpicopy=/usr/local/mpicopy/bin/mpicpbin.openib
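# (mpicopy, commented out above, appears to be a site tool for copying files
# to the local scratch of every node over the fast network; I assume it is
# meant for staging input files to all nodes before a run.)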

$mpiexec -n $nprocs $EXE < $INP > $OUT
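
What I would like is something along these lines instead of the mpiexec line
above (a sketch only; RUNDIR is an illustrative name, and it assumes
outdir = './' in the input file so that pwscf writes into the run directory):

# ---- sketch: run on node-local scratch, copy results home afterwards ----
RUNDIR=$TMPDIR/pwscf.$PBS_JOBID
mkdir -p $RUNDIR
cp $PBS_O_WORKDIR/$INP $RUNDIR/          # stage in the input file
cd $RUNDIR
$mpiexec -n $nprocs $EXE < $INP > $OUT
cp -r $RUNDIR/* $PBS_O_WORKDIR/          # stage out after a successful run

My understanding is that with wf_collect=.true. the final wavefunctions are
collected into outdir by the master process, which runs on the node executing
this script, so the final cp should pick them up; for a multi-node run the
run directory presumably has to exist on every node's scratch as well
(perhaps that is what mpicopy is for).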
