[Pw_forum] SCF_PARALLEL
stefano de gironcoli
degironc at sissa.it
Fri Nov 6 22:42:58 CET 2015
You can try to figure out what's happening by looking at:
- the way the processors are split among the various parallelization
schemes (npool, nband, ntask, ndiag); this is written at the beginning of
the output (see the command-line sketch after this list). OpenMP
parallelization can also be enabled, but it does not always help.
- the dimensions of your system (number of bands, number of
plane waves, FFT grid dimensions).
- the time spent in the different routines, including the parallel
communication time; this is given at the end of the output and depends
on the speed and latency of the interconnect between the processors.
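As a minimal sketch (assuming a reasonably recent pw.x; the flag values
here are only an illustration, and the best splitting depends on your
system), the parallelization levels can be set explicitly on the command
line, for example:

  mpirun -np 8 pw.x -npool 2 -ndiag 4 -inp Siliceous-SOD.in > Siliceous_SOD8.out

The banner at the top of the output then reports how the 8 MPI processes
were divided into k-point pools and the diagonalization group, which is
the splitting referred to above.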
A concern in the calculation might be the available RAM. If the
code starts swapping, it is going to get very slow.
Another concern is I/O to disk, which is generally slow and even
slower in parallel. Always use local scratch areas; never write to a
remote disk.
If possible, don't write at all (a minimal input sketch follows).
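As a sketch of the input side (the keywords are from the pw.x input
description for QE 5.x; the scratch path is just a placeholder and the
'!' comments are only annotations), I/O can be reduced in the &CONTROL
namelist:

  &CONTROL
    ...
    outdir     = '/local_scratch/my_run'  ! placeholder: a directory on a local disk
    disk_io    = 'low'                    ! or 'none' to write as little as possible
    wf_collect = .false.                  ! do not collect wavefunctions into a single file
    ...
  /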
stefano
On 06/11/2015 22:18, Mofrad, Amir Mehdi (MU-Student) wrote:
>
> Dear all QE users and developers,
>
>
> I have run an SCF calculation on 1 processor, which took 11h37m.
> When I ran it on 4 processors it took 5h29m. I'm now running the same
> calculation on 8 processors and it has already been running for 5h17m.
> Isn't it supposed to take less than 5 hours when I'm running it on 8
> processors instead of 4?
>
> I used the following command for parallelization: "mpirun -np 8 pw.x
> -inp Siliceous-SOD.in Siliceous_SOD8out &> Siliceous_SOD8.screen
> </dev/null &"
>
> I used to use "mpirun -np 4 pw.x <inputfile> output" to parallelize
> before; however, it took forever (as if it were idle).
>
> At this stage I really need to run my calculations in parallel and I
> don't know what the problem is. One thing that I'm sure of is that OpenMP
> and MPI are completely and properly installed on my system.
>
>
> Any help would be thoroughly appreciated.
>
>
> Amir M. Mofrad
>
> University of Missouri
>
>