[Pw_forum] Hybrid MPI/OpenMP

Ivan Girotto igirotto at ictp.it
Sat Nov 9 13:14:41 CET 2013


Dear Ben,

I'm afraid you are packing all the processes within a node onto the same 
socket (-bind-to-socket).
My recommendation is to use the following instead: -cpus-per-proc 2 
-bind-to-core.
However, for the pw.x code there is not much expectation of better 
performance on the Intel Xeon architecture with MPI+OpenMP until 
communication becomes a serious bottleneck.
Indeed, distributing the parallel work among MPI processes generally 
offers better scaling.
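
For reference, a minimal sketch of the adjusted launch lines, assuming the 
Open MPI 1.6.x option names (-cpus-per-proc, -bind-to-core) and the same 
2-nodes-by-16-cores layout and file names from your script; exact flag 
spellings can differ in other Open MPI releases:

export OMP_NUM_THREADS=2
# 8 MPI ranks per node, 2 cores bound to each rank, one OpenMP thread per core
mpiexec -np 16 -npernode 8 -cpus-per-proc 2 -bind-to-core \
        -x OMP_NUM_THREADS=2 \
        -display-map -report-bindings \
        pw_openmp_5.0.2.x -in benchmark2.in > benchmark2c.out

The -report-bindings output will confirm that each rank gets two cores on 
one socket rather than all ranks sharing a single socket.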

Regards,

Ivan

On 08/11/2013 13:45, Ben Palmer wrote:
> Hi Everyone,
>
> (apologies if this has been sent twice)
>
> I have compiled QE 5.0.2 on a computer with AMD interlagos processors, 
> using the acml, compiling with openmp enabled, and submitting jobs 
> with PBS.  I've had a speed up using 2 openmp threads per mpi process.
>
> I've been trying to do the same on another computer, which has MOAB as 
> the scheduler, E5 series Xeon processors (E5-2660), and uses the Intel 
> MKL.  I'm pretty sure hyperthreading has been turned off, as 
> each node has two sockets and 16 cores in total.
>
> I've seen a slowdown in performance using OpenMP and MPI, but have 
> read in the documentation that this might be the case.  I'm waiting in 
> the computer's queue to run the following:
>
> #!/bin/bash
> #MOAB -l "nodes=2:ppn=16"
> #MOAB -l "walltime=0:01:00"
> #MOAB -j oe
> #MOAB -N pwscf_calc
> #MOAB -A readmsd02
> #MOAB -q bbtest
> cd "$PBS_O_WORKDIR"
> module load apps/openmpi/v1.6.3/intel-tm-ib/v2013.0.079
> export PATH=$HOME/bin:$PATH
> export OMP_NUM_THREADS=2
> mpiexec -np 16 -x OMP_NUM_THREADS=2 -npernode 8 -bind-to-socket 
> -display-map -report-bindings pw_openmp_5.0.2.x -in benchmark2.in > benchmark2c.out
>
> I just wondered if anyone had any tips on the settings or flags for 
> hybrid MPI/OpenMP with the E5 Xeon processors?
>
> All the best,
>
> Ben Palmer
> Student @ University of Birmingham, UK
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
