[Pw_forum] QE and mpich2, Linux

ac.rain at inbox.com ac.rain at inbox.com
Wed Feb 2 03:54:35 CET 2011

I reviewed that document, thank you. I wonder if it is a little out of date: mpirun redirects to mpiexec, which does not support the "-nimage" and "-npool" options mentioned for plane-wave tasks on page 10.
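For what it's worth, in Quantum ESPRESSO the "-npool" and "-nimage" flags are read by the pw.x executable itself, not by the MPI launcher, so they go after the program name. A minimal sketch, assuming 16 MPI processes and a hypothetical input file named scf.in:

```shell
# Sketch only: -npool (pools over k-points) and -nimage are parsed by
# pw.x, so they must appear after the executable name, not as
# mpiexec options. "scf.in"/"scf.out" are placeholder file names.
mpiexec -np 16 pw.x -npool 4 -input scf.in > scf.out
```

With this form mpiexec only sees "-np 16", and pw.x divides the 16 processes into 4 pools of 4.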

I am now running it in blocks of 4 CPUs to match the number-combination requirement.
On a single system, 4 cores complete in 1h47m, while 16 cores take 2h19m.

Is this not going to get faster on a single system until we add more memory? Should I instead reduce the number of CPUs specified in mpiMachinefile.txt, so that fewer jobs are spread over a greater number of systems? Or is the bottleneck the local hard drives not being fast enough?

I am not familiar with the mathematical equations and terminology used in the document.



> -----Original Message-----
> From: giannozz at democritos.it
> Sent: Mon, 31 Jan 2011 11:16:06 +0100
> To: pw_forum at pwscf.org
> Subject: Re: [Pw_forum] QE and mpich2, Linux
> in order to take advantage of parallelization, some understanding of
> how parallelization works in QE is needed. The user guide and this
> document:
> http://www.fisica.uniud.it/~giannozz/QE-Tutorial/tutorial_para.pdf
> contain some info. Throwing in more processors will not by itself do
> the job.
> P.
> ---
> Paolo Giannozzi, Dept of Chemistry&Physics&Environment,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum

