Dear all,

I'm working on a different R&G distribution that could scale more favorably with the number of processors. More will be presented at the next developers' meeting in January. It may appear in the distribution even before then, if I'm happy with it.
I'm in contact with CarloC and PaoloG about that.
Best
stefano
(sent from my phone)

On 20 Oct 2016, at 02:44, Ye Luo <xw111luoye@gmail.com> wrote:

Dear Carlo,

It is very enlightening to read your comments on architecture and QE.
Do you have more recent technical talks about the refactoring of QE?
My knowledge is still limited to your talks from 2012, when BG/Q was introduced, and the QE developers' meeting slides from 2014/2015.

Having more advanced libraries definitely helps performance, but changing the upper level can probably be even more beneficial.
I'm curious to know about more recent advances in QE in changing the internal data layout (mainly the wavefunction distribution) for better parallelization.
Has the task group become more general, rather than FFT-only? Merging the band group and the task group, if possible... (a sketch of the communicator layout in question follows this message).
Though hybrid functionals or GW can be implemented with better parallel efficiency, the plain DFT part can be a severe bottleneck in the computation.

Best,
Ye
===================
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
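
Background for the question above: the "band group" and "task group" are levels in QE's MPI communicator hierarchy. What follows is a minimal sketch, not QE's actual implementation, of how such groups can be carved out of MPI_COMM_WORLD with MPI_Comm_split; the group count and the round-robin assignment are illustrative choices.

#include <mpi.h>
#include <stdio.h>

/* Sketch: split MPI_COMM_WORLD into "band groups". Each group gets its
 * own sub-communicator, inside which a smaller distribution (e.g. the
 * FFT/task-group level) could be set up by a further split of the
 * same kind. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    const int nbgrp = 2;            /* illustrative number of band groups */
    int color = world_rank % nbgrp; /* illustrative round-robin assignment */

    /* Ranks with the same color land in the same sub-communicator. */
    MPI_Comm band_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &band_comm);

    int band_rank, band_size;
    MPI_Comm_rank(band_comm, &band_rank);
    MPI_Comm_size(band_comm, &band_size);

    printf("world rank %d/%d -> band group %d, local rank %d/%d\n",
           world_rank, world_size, color, band_rank, band_size);

    MPI_Comm_free(&band_comm);
    MPI_Finalize();
    return 0;
}

Merging the band-group and task-group levels, as suggested above, would amount to collapsing two such split steps into a single layout decision.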
<br><div class="gmail_quote">2016-10-19 2:53 GMT-05:00 Carlo Cavazzoni <span dir="ltr"><<a href="mailto:c.cavazzoni@cineca.it" target="_blank">c.cavazzoni@cineca.it</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

Dear Sergio,

we are obviously addressing all those issues on different architectures with different vendors, and here comes the point: architectures are not converging!
As you know, there are two main basic designs, Homogeneous and Heterogeneous (a.k.a. accelerated), with some vendors, like Intel, oscillating between the two (KNC Heterogeneous, KNL Homogeneous, and the recently announced Stratix X FPGA Heterogeneous again).
It is not easy to have one code cope with all of them in an effective way, especially because some of the best tools for the new architectures are not standard (CUDA). This is a real pity, and it makes me complain to Nvidia all the time about their not supporting a standard paradigm (yes, I know, there are OpenACC, the new OpenMP features, OpenCL... but CUDA remains by far the more effective), which is a sort of disruption for a community of developers like ours.

Nevertheless, to reduce this complexity we recently encapsulated the two main computational kernels (the parallel FFT and dense linear algebra) into self-contained libraries (FFTXlib and LAXlib), each including a small test app (please read the README files included in the two libraries) that allows one to experiment with, and best tune, all the parameters for parallelization, vectorization, tasking, etc.

To play with the two libraries you need to know very little about the physics of QE; they are ideal for people like you who need to look into optimization. In particular, any improvement in these two libraries is immediately transferred to the QE main codes (and to other codes as well).

If you want to know more about our next developments: we are working with non-blocking MPI collectives and task-based parallelism to try to overlap communication and computation within the FFT.
The most recent (not production) advances in the FFT library can be found at:
https://github.com/fabioaffinito/FFTXlib
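
To make the overlap idea concrete, here is a minimal sketch, not the actual FFTXlib code, of hiding the all-to-all transpose of a distributed FFT behind independent computation using a non-blocking collective; the buffer size and the "independent work" loop are placeholders.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nproc;
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int chunk = 1024; /* illustrative per-rank block size */
    double *sendbuf = malloc((size_t)nproc * chunk * sizeof(double));
    double *recvbuf = malloc((size_t)nproc * chunk * sizeof(double));
    for (int i = 0; i < nproc * chunk; ++i)
        sendbuf[i] = (double)i;

    /* Start the transpose, but do not wait for it yet. */
    MPI_Request req;
    MPI_Ialltoall(sendbuf, chunk, MPI_DOUBLE,
                  recvbuf, chunk, MPI_DOUBLE, MPI_COMM_WORLD, &req);

    /* Meanwhile, do work that does not depend on recvbuf (in a real
     * FFT, e.g. the 1D transforms of the next batch of planes).
     * Reading sendbuf is allowed; modifying it before completion is not. */
    double local = 0.0;
    for (int i = 0; i < nproc * chunk; ++i)
        local += 0.5 * sendbuf[i];

    /* Block only when the transposed data is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return local >= 0.0 ? 0 : 1;
}

Task-based runtimes (e.g. OpenMP tasks) push the same idea further by letting a scheduler interleave many such communication and computation chunks.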

Another interesting exercise could be to review LAXlib, following closely the advances in dense linear algebra algorithms promoted by Dongarra et al.:
http://insidehpc.com/2016/10/jack-dongarra-presents-adaptive-linear-solvers-and-eigensolvers/

From the point of view of programming paradigms, we are supporting open initiatives, especially in close collaboration with BSC and various standardization committees (such as OpenMP), as well as the recently announced effort by AMD to open-source software and drivers for heterogeneous architectures:
https://radeonopencompute.github.io/

best,
carlo

On 18/10/2016 16:14, Sérgio Caldas wrote:
<blockquote type="cite"><div><div class="h5">
<div><font size="3">Hi!</font></div>
<div><font size="3"><br>
</font></div>
<div><font size="3">I'm Sérgio Caldas, an MSc
student in Informatics Engineering at University of Minho,
Braga, Portugal. <span>The key area of specialisation
during my master courses were on parallel computing, with a
strong focus on efficient & performance engineering on
heterogeneous systems. For my master thesis the theme
applies these competences to computational physics, where
I’m supposed to help a senior physics researcher to tune his
work on the determination of electronic and optical
properties of materials, using Quantum Espresso tool in our
departamental cluster. This cluster has nodes with several
generations of dual multicore Xeons and some nodes with Xeon
Phi (both KNC and KNL) and GPUs (both Fermi and Kepler, and
soon Pascal). </span></font></div>

I have some queries about QE, namely how far its development has progressed in the following areas: vectorisation, data/task parallelism on both shared and distributed memory, and data locality.

For example:
 - is QE already exploiting vector operations (AVX/AVX2 or AVX-512)?
 - is the tool ready for multicore / many-core devices?
 - how is work scheduled between multicore devices and accelerator devices, so that both types of device are used simultaneously?
 - for distributed memory, does the tool already take advantage of low-latency interconnects, such as Myrinet or InfiniBand?
 - how can I get access to beta versions where these advanced capabilities are being explored?
 - do you have suggestions for areas that still need improvement, so that I can address them and improve both the quality of my work and overall QE performance?
<div><font size="3"><br>
</font></div>
<div><font size="3"><font>I would also
be grateful if you could suggest documentation (preferably
papers) to get some of these answers or any other
documentation to complement my </font><font>knowledge</font><span> on QE.</span></font></div>
<div><span><font size="3"><br>
</font></span></div>
<div><span><font size="3"><font>Thanking you in advance, yours s</font><span>incerely</span></font></span></div>
<div><font size="3">Sergio Caldas</font></div>

--
Ph.D. Carlo Cavazzoni
SuperComputing Applications and Innovation Department
CINECA - Via Magnanelli 6/3, 40033 Casalecchio di Reno (Bologna)
Tel: +39 051 6171411   Fax: +39 051 6132198
www.cineca.it

_______________________________________________
Q-e-developers mailing list
Q-e-developers@qe-forge.org
http://qe-forge.org/mailman/listinfo/q-e-developers