[Pw_forum] pw.x parallelisation
Stefano de Gironcoli
degironc at sissa.it
Fri Aug 12 01:32:57 CEST 2016
Dear Habib,
There is nothing special in pw.x that happens between 7 and 8 processors; it must be an issue with your machine.
On the other hand, using ecutrho < 4 x ecutwfc is wrong, and the code should complain about that.
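For reference, a minimal &SYSTEM fragment illustrating that constraint (the cutoff values below are placeholders, not a recommendation for this particular system):

  &SYSTEM
    ! ... ibrav, nat, ntyp and the other system parameters go here ...
    ecutwfc = 40.0    ! plane-wave cutoff for wavefunctions, in Ry (example value)
    ecutrho = 160.0   ! charge-density cutoff, in Ry; keep it >= 4*ecutwfc
                      ! (4*ecutwfc is the default for norm-conserving PPs;
                      !  ultrasoft/PAW pseudopotentials usually need more)
  /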
stefano
(sent from my phone)
> On 12 Aug 2016, at 07:18, Ullah, Habib <hu203 at exeter.ac.uk> wrote:
>
> Dear all
>
> We have successfully installed Quantum ESPRESSO on our cluster, and calculations with 1 to 7 processors run fine using the submission script below (job script attached). But when we increase the number of processors beyond 7, the calculations stop without any error, even though our cluster can run a simulation on 120 processors (in the case of Matlab).
>
> #!/bin/bash
> #$ -N qe
> #$ -q all.q
> #$ -cwd
> # Send mail at submission and completion of script
> #$ -m be
> #$ -M hu203 at exeter.ac.uk
> # Parallel Job
> #$ -pe openmpi_ib 7
> . /etc/profile.d/modules.sh
> module load shared espresso
> cd /home/ISAD/hu203/Test
> mpirun pw.x -inp 96BiVO4-001.in > 96BiVO4-001.out
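>
> The user guide also mentions command-line options for distributing the work over k-point pools and over a linear-algebra (diagonalization) group; a sketch of such a command line (the process and pool counts below are only illustrative, not a tested setting) would be:
>
> # 16 MPI processes split into 4 k-point pools; -nd 1 keeps the
> # subspace diagonalization serial within each pool
> mpirun -np 16 pw.x -nk 4 -nd 1 -inp 96BiVO4-001.in > 96BiVO4-001.out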
>
> Searching online, I found this document:
>
> http://www.quantum-espresso.org/wp-content/uploads/Doc/pw_user_guide.pdf
>
> pw.x can run in principle on any number of processors.
>
> I am wondering whether the following is pertinent to our current problem, or whether there is a problem with our Espresso installation on the cluster:
>
> Parallelization on PWs:
> * yields good to very good scaling, especially if the number of processors in a pool is a divisor of nr3 and nr3s (the dimensions along the z-axis of the dense and smooth FFT grids, which coincide for NCPPs); see the check sketched after this list;
> * requires heavy communications (suitable for Gigabit Ethernet up to 4-8 CPUs at most; specialized communication hardware is needed for 8 or more processors);
> * yields almost linear reduction of memory per processor with the number of processors in the pool.
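>
> In case it is relevant for checking the first point above, the FFT grid dimensions are printed near the beginning of the pw.x output, so something along these lines (using the output file from the script above) should show nr3 and nr3s as the last number on each line:
>
> # print the dense and smooth FFT grid dimensions reported by pw.x
> grep 'FFT dimensions' 96BiVO4-001.out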
>
> We have Mellanox InfiniBand cards between the servers; each server has 192 GB of RAM and 4 x 12-core 2.4 GHz AMD processors.
>
> Kind regards,
> Habib
>
> <BiVO4-001-Big-scf.in.txt>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum