[Pw_forum] Intel MPI

joaquin peralta jperaltac at gmail.com
Thu Aug 19 16:17:53 CEST 2010


Hello,

On Wed, Aug 18, 2010 at 12:28 PM, Paolo Giannozzi <giannozz at democritos.it> wrote:

>
> On Aug 18, 2010, at 19:03, joaquin peralta wrote:
>
> > We run an Al surface with 81 atoms on 128 CPUs, using Intel MPI and
> > OpenMPI 1.4
>
> but not in the same way! the first job uses 8 pools of 16 processors,
> the second
>

Yes, that's true, the two jobs were not run the same way.


> 1 pool of 128 processors. The first choice seems to be reasonable,
> the second
>

I will recheck my timings again and compare them against a more realistic run.


> isn't, since an enormous amount of time is spent in communications
> (look at the
> time spent in fft_scatter).
>
>
Is it possible to know in advance which parameters will work best with -np? I
mean, could somebody recommend a thread about the use of the parallelization
flags (npool, nimage, ntg, ndiag)? I don't understand that part of the manual
very well yet.
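
For context, a minimal sketch of how these levels are chosen on the pw.x
command line (the input/output file names are hypothetical; the counts follow
the 8-pools-of-16 layout discussed above):

  # 128 MPI processes split into 8 k-point pools of 16 processes each;
  # -ndiag 16 assigns a 4x4 grid of processes to subspace diagonalization
  mpirun -np 128 pw.x -npool 8 -ndiag 16 -input al_surface.in > al_surface.out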

Thanks in advance

Joaquin Peralta


> P.
> ---
> Paolo Giannozzi, Dept of Physics, University of Udine
> via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222



-- 
----------------------------------------------------
Group of NanoMaterials
----------------------------------------------------
http://www.gnm.cl
----------------------------------------------------
Joaquín Andrés Peralta Camposano
----------------------------------------------------
http://zeth.ciencias.uchile.cl/~jperalta

In a world without frontiers,
who needs Gates and Win.

