<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Dear Ben,<br>
    <br>
    I'm afraid you are packing all processes within a node onto the same
    socket (-bind-to-socket).<br>
    My recommendation is to use the following instead: -cpus-per-proc 2
    -bind-to-core.<br>
    However, for the pw.x code there is not much performance to be gained
    from MPI+OpenMP on the Intel Xeon architecture until communication
    becomes a serious bottleneck.<br>
    Indeed, distributing the parallel work among MPI processes generally
    scales better.<br>
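    For concreteness, the suggested launch could look like the sketch below (flag names follow Open MPI 1.6-era conventions; the executable and input names are taken from the quoted script, so adjust them to your installation):<br>

```shell
# Sketch of the suggested binding: each MPI rank is bound to 2 adjacent
# cores (-cpus-per-proc 2 -bind-to-core) and runs 2 OpenMP threads,
# instead of having all ranks on a node bound to one socket.
# 2 nodes x 8 ranks/node x 2 threads = 32 cores = every core on 2 nodes.
mpiexec -np 16 -npernode 8 -cpus-per-proc 2 -bind-to-core \
        -x OMP_NUM_THREADS=2 -report-bindings \
        pw_openmp_5.0.2.x -in benchmark2.in > benchmark2c.out
```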
    <br>
    Regards,<br>
    <br>
    Ivan<br>
    <br>
    On 08/11/2013 13:45, Ben Palmer wrote:
    <blockquote
cite="mid:CAFJKZSO5tNqMT_w_VTttt0tesqAMb18d=2PcpeZ=np8cDvU3nQ@mail.gmail.com"
      type="cite">
      <div dir="ltr"><span
          style="font-family:arial,sans-serif;font-size:13px">Hi
          Everyone,</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br>
        (apologies if this has been sent twice)<br>
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">I have
          compiled QE 5.0.2 on a computer with AMD Interlagos
          processors, using ACML, compiling with OpenMP enabled, and
          submitting jobs with PBS.  I've had a speed-up using 2 OpenMP
          threads per MPI process.</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">I've
          been trying to do the same on another computer, which has MOAB
          as the scheduler, E5-series Xeon processors (E5-2660), and uses
          the Intel MKL.  I'm pretty sure hyperthreading has
          been turned off, as each node has two sockets and 16 cores in
          total.</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">I've
          seen a slowdown in performance when combining OpenMP and MPI,
          though I've read in the documentation that this can happen.
          I'm waiting in the computer's queue to run the following:</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br style="font-family:arial,sans-serif;font-size:13px">
        <pre>#!/bin/bash
#MOAB -l "nodes=2:ppn=16"
#MOAB -l "walltime=0:01:00"
#MOAB -j oe
#MOAB -N pwscf_calc
#MOAB -A readmsd02
#MOAB -q bbtest
cd "$PBS_O_WORKDIR"
module load apps/openmpi/v1.6.3/intel-tm-ib/v2013.0.079
export PATH=$HOME/bin:$PATH
export OMP_NUM_THREADS=2
mpiexec -np 16 -x OMP_NUM_THREADS=2 -npernode 8 -bind-to-socket -display-map -report-bindings pw_openmp_5.0.2.x -in benchmark2.in &gt; benchmark2c.out</pre>
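        As a quick sanity check on the numbers above (a hypothetical helper, not part of the original script): with 16 cores per node, 8 ranks per node at 2 threads each exactly fills a node, so the slowdown is more likely a binding issue than oversubscription.<br>

```shell
# Verify that MPI ranks per node x OpenMP threads per rank exactly
# fills each node (values taken from the job script above).
CORES_PER_NODE=16
NPERNODE=8
OMP_NUM_THREADS=2
USED=$((NPERNODE * OMP_NUM_THREADS))
if [ "$USED" -eq "$CORES_PER_NODE" ]; then
  echo "node fully subscribed: $USED of $CORES_PER_NODE cores"
else
  echo "mismatch: $USED of $CORES_PER_NODE cores"
fi
```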
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">I just
          wondered if anyone had any tips on the settings or flags for
          hybrid MPI/OpenMP with the E5 Xeon processors?</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">All
          the best,</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <br style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">Ben
          Palmer</span><br
          style="font-family:arial,sans-serif;font-size:13px">
        <span style="font-family:arial,sans-serif;font-size:13px">Student
          @ University of Birmingham, UK</span><br>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Pw_forum mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Pw_forum@pwscf.org">Pw_forum@pwscf.org</a>
<a class="moz-txt-link-freetext" href="http://pwscf.org/mailman/listinfo/pw_forum">http://pwscf.org/mailman/listinfo/pw_forum</a></pre>
    </blockquote>
    <br>
  </body>
</html>