[Pw_forum] pw.x parallelisation

Ullah, Habib hu203 at exeter.ac.uk
Fri Aug 12 01:49:51 CEST 2016


Thanks, Stefano, for your guidance. It means our QE is not properly installed. I am a new user of QE, and someone had told me about the high value of ecutrho. One more thing: is my input OK for the slab optimization?
Kind regards
Habib

-----Original Message-----
From: pw_forum-bounces at pwscf.org [mailto:pw_forum-bounces at pwscf.org] On Behalf Of Stefano de Gironcoli
Sent: 12 August 2016 12:33 AM
To: PWSCF Forum <pw_forum at pwscf.org>
Subject: Re: [Pw_forum] pw.x parallelisation

Dear Habib,
  There is nothing special in pw.x that happens between 7 and 8 processors.
  Must be an issue with your machine. 
  On the other hand using an ecutrho < 4 x ecutwfc is wrong and the code should complain about that.
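  For instance (the numbers below are purely illustrative, not taken from your input), a &SYSTEM block consistent with this constraint would read:

    &SYSTEM
      ...
      ecutwfc = 50.0    ! kinetic-energy cutoff for wavefunctions (Ry)
      ecutrho = 200.0   ! charge-density cutoff (Ry), at least 4 * ecutwfc
      ...
    /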

stefano
(sent from my phone)

> On 12 Aug 2016, at 07:18, Ullah, Habib <hu203 at exeter.ac.uk> wrote:
> 
> Dear all
> 
> We have successfully installed Quantum ESPRESSO on our cluster, and calculations with 1 to 7 processors run fine, using submission scripts such as the one below (job script attached). But when we increase the number of processors beyond 7, the calculations stop without any error, although our cluster can run a simulation on 120 processors (in the case of Matlab).
> 
> #!/bin/bash
> #$ -N qe
> #$ -q all.q
> #$ -cwd
> # Send mail at submission and completion of script
> #$ -m be
> #$ -M hu203 at exeter.ac.uk
> # Parallel Job
> #$ -pe openmpi_ib 7
> . /etc/profile.d/modules.sh
> module load shared espresso
> cd /home/ISAD/hu203/Test
> mpirun pw.x -inp 96BiVO4-001.in > 96BiVO4-001.out
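> 
> For comparison, here is a sketch of how I assume the same script would
> request more MPI ranks (the slot count and the use of $NSLOTS are just
> my guess at what our openmpi_ib parallel environment expects):
> 
> #$ -pe openmpi_ib 16
> mpirun -np $NSLOTS pw.x -inp 96BiVO4-001.in > 96BiVO4-001.out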
> 
> Searching online, I found this document:
> 
> http://www.quantum-espresso.org/wp-content/uploads/Doc/pw_user_guide.pdf
> 
> pw.x can run in principle on any number of processors.
> 
> Wondering if the following is pertinent to our current problems (or if there is a problem with our Espresso installation on the cluster):
> 
> Parallelization on PWs:
> * yields good to very good scaling, especially if the number of
>   processors in a pool is a divisor of N3 and Nr3 (the dimensions
>   along the z-axis of the FFT grids, nr3 and nr3s, which coincide
>   for NCPPs);
> * requires heavy communications (suitable for Gigabit ethernet up to
>   4, 8 CPUs at most; specialized communication hardware is needed for
>   8 or more processors);
> * yields almost linear reduction of memory per processor with the
>   number of processors in the pool.
> 
> We have Mellanox InfiniBand cards between servers ... each server has
> 192 GB RAM and 4x 12-core 2.4 GHz AMD processors.
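> 
> If plane-wave parallelization is the limit, I assume a launch line like
> the following (the -nk value is only an illustration, and whether it
> helps depends on how many k-points the calculation has) would split the
> 48 ranks into 4 k-point pools of 12, so the heavy plane-wave/FFT
> communication happens among only 12 processes per pool:
> 
> mpirun -np 48 pw.x -nk 4 -inp 96BiVO4-001.in > 96BiVO4-001.out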
> 
> Kind regards,
> Habib
> 
> <BiVO4-001-Big-scf.in.txt>


_______________________________________________
Pw_forum mailing list
Pw_forum at pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum



