[Pw_forum] Do you think QE could benefit from the Hyper-Threading Tech of XEON 5500 series?

vega lew quantumdft at gmail.com
Sun Nov 29 19:45:13 CET 2009


Dear friend,

thank you for your detailed reply.

I also tested my servers, using the cp.x code with a 5-step run.

The conclusion is that Hyper-Threading on the Xeon E5520 results in a small
decrease in performance.

With Hyper-Threading switched on in the BIOS:
  two nodes, 32 processes in parallel:  2m19.41s CPU time,  6m37.24s wall time
  two nodes, 16 processes in parallel:  3m 7.28s CPU time,  7m 0.30s wall time
  one node,  16 processes in parallel:  5m30.79s CPU time,  6m18.88s wall time
  one node,   8 processes in parallel:  5m 2.42s CPU time,  8m50.53s wall time

But when Hyper-Threading is switched off in the BIOS:
  one node,   8 processes in parallel:  5m28.05s CPU time,  5m58.40s wall time
  two nodes, 16 processes in parallel:  2m32.28s CPU time,  6m16.31s wall time
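
For the two configurations that were run both with and without Hyper-Threading,
the slowdown can be read off directly from the wall times above. A few lines of
Python, with the numbers simply copied from the list above:

    # Wall times in seconds, copied from the runs listed above.
    ht_on  = {"1 node, 8 procs": 8 * 60 + 50.53, "2 nodes, 16 procs": 7 * 60 + 0.30}
    ht_off = {"1 node, 8 procs": 5 * 60 + 58.40, "2 nodes, 16 procs": 6 * 60 + 16.31}

    # A ratio above 1 means the Hyper-Threading run took longer (was slower).
    for cfg in ht_on:
        print(f"{cfg}: HT-on / HT-off wall time = {ht_on[cfg] / ht_off[cfg]:.2f}")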

thank you again for your time.

best,

vega

On Sun, Nov 29, 2009 at 4:59 PM, O. Baris Malcioglu <baris.malcioglu at gmail.com> wrote:

> Dear Vega,
>
> I am afraid you are asking a rather technical question that is not
> strictly related to Quantum ESPRESSO. Perhaps the many discussions
> and benchmarks in HPC forums regarding the benefits and drawbacks of
> hyper-threading may be of more use to you.
>
> If it really helps, here is my "enthusiast"-level, perhaps outdated,
> viewpoint on the matter.
>
> The logical cores enabled by hyper-threading do not perform as well as
> a "real" core, so you obtain two somewhat crippled cores instead of
> one fully performing core.
>
> Now, if your memory bandwidth is sufficient and your program handles
> parallelization effectively, this is a nice thing: it really does make
> things faster.
>
> But if your memory bandwidth is not so high, you are wasting it on
> those crippled cores. If the program you are using is memory-intensive,
> this is quite a bad thing no matter how well parallelized it is.
>
> Most server architectures like the Xeon are optimized for
> reliability, and you will notice that most server boards offer
> significantly less memory bandwidth than their desktop equivalents,
> especially when error-correction measures are in place.
>
> Add to this the fact that this already limited bandwidth is shared by
> many cores, and memory bandwidth becomes a very precious resource in a
> server used for number crunching.
>
> Now, the performance of QE in such an environment will depend on how
> you use and compile it.
>
> In my experience, with the "standard" parallelization over MPI
> processes and plain LAPACK/BLAS, QE is memory-intensive on each core
> but somewhat light on inter-core communication. In this case,
> hyper-threading will probably slow things down at the single-job level.
> (Remember that HPC machines are multi-user environments: if you have
> many users, hyper-threading might still be beneficial, despite slowing
> individual jobs, because more jobs can be accepted per unit time.)
>
> Although I have not tested it myself, threading instead of spawning
> processes, as in the OpenMP parallelization, should work much better
> at the per-job level in a hyper-threading environment, provided of
> course that you have proper threaded libraries installed and
> configured. MKL, for example, has threading support.
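>
> Just to illustrate the idea, a hybrid launch could look something like
> the sketch below (binary name, input file, process and thread counts
> are only placeholders, and depending on the MPI implementation you may
> need to export the variables explicitly, e.g. with Open MPI's -x
> option):
>
>     import os
>     import subprocess
>
>     # Fewer MPI processes, with MKL/OpenMP threads taking up the
>     # remaining (logical) cores instead of one MPI process per logical core.
>     env = dict(os.environ, OMP_NUM_THREADS="4", MKL_NUM_THREADS="4")
>     subprocess.run(["mpirun", "-np", "4", "pw.x", "-in", "scf.in"],
>                    env=env, check=True)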
>
> So, in short: if you are the only user of the machine, try OpenMP plus
> threaded libraries. If that doesn't help, try switching hyper-threading
> off and compare your benchmarks. If you have many users, it is
> probably better to leave it on.
>
> I hope this helps,
>
> Best,
> Baris
>
>
> 2009/11/26 vega lew <quantumdft at gmail.com>:
> > Dear all,
> >
> > I have 5 computers, each with two Xeon E5520 CPUs and 24 GB of memory. I
> > find that Hyper-Threading is switched on by default. I wonder whether QE
> > can benefit from the Hyper-Threading technology of these Xeon CPUs.
> > Should I switch it off and launch one process per physical core to
> > improve QE performance, or leave Hyper-Threading on and launch two
> > processes per core?
> >
> > best
> >
> > vega
> >



-- 
==================================================================================
Vega Lew ( weijia liu)
Graduate student
State Key Laboratory of Materials-oriented Chemical Engineering
College of Chemistry and Chemical Engineering
Nanjing University of Technology, 210009, Nanjing, Jiangsu, China
******************************************************************************************************************
Email: vegalew at gmail.com
Office: Room A705, Technical Innovation Building, Xinmofan Road 5#, Nanjing,
Jiangsu, China
******************************************************************************************************************