[Pw_forum] Job crashes on multiple nodes

Wolfgang Gehricht gehricht at googlemail.com
Mon May 31 09:53:51 CEST 2010


Dear group!

I am experiencing the following problem with pw.x. When I run a job on a
single node with 8 processors, the calculation does not exceed the available
RAM, i.e. it works. When I run the same job on two nodes (16 processors), the
calculation crashes (mpirun: Forwarding signal 12 to job) [relevant parts of
the log are given below]. However, I can compute this job on two nodes with a
smaller k-point sampling, so I suspect it has something to do with the memory
demands/distribution. I am using the "Davidson iterative diagonalization",
with only the thresholds set explicitly (convergence and the first iterative
diagonalization).
Can you please point me in the right direction?
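For completeness, the relevant part of my &ELECTRONS namelist looks roughly
like this (the numerical values here are only placeholders to show which
parameters I set, not my actual thresholds):

&electrons
    diagonalization = 'david'
    conv_thr        = 1.0d-8   ! placeholder: SCF convergence threshold
    diago_thr_init  = 1.0d-4   ! placeholder: threshold for the 1st iterative diagonalization
/
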
With thanks
Yours Wolfgang
---
Parallel version (MPI)

Number of processors in use: 16
R & G space division: proc/pool = 16
...
Subspace diagonalization in iterative solution of the eigenvalue problem:
a parallel distributed memory algorithm will be used,
eigenstates matrixes will be distributed block like on
ortho sub-group = 4* 4 procs
...
Planes per process (thick) : nr3 = 90 npp = 6 ncplane = 8100

Proc/  planes  cols      G     planes  cols      G     columns     G
Pool        (dense grid)            (smooth grid)        (wavefct grid)
  1      6     339    18817      6     339    18817       92      2668
  2      6     339    18817      6     339    18817       92      2668
  3      6     339    18817      6     339    18817       92      2668
  4      6     338    18812      6     338    18812       92      2668
  5      6     338    18812      6     338    18812       93      2669
  6      6     338    18812      6     338    18812       93      2669
  7      6     338    18812      6     338    18812       93      2669
  8      6     338    18812      6     338    18812       93      2669
  9      6     338    18812      6     338    18812       93      2669
 10      6     338    18812      6     338    18812       93      2669
 11      5     339    18817      5     339    18817       93      2669
 12      5     339    18815      5     339    18815       93      2669
 13      5     339    18815      5     339    18815       93      2669
 14      5     339    18815      5     339    18815       92      2666
 15      5     339    18815      5     339    18815       92      2666
 16      5     339    18815      5     339    18815       92      2666
tot     90    5417   301027     90    5417   301027     1481     42691
...
G cutoff = 1729.8995 ( 301027 G-vectors) FFT grid: ( 90, 90, 90)

Largest allocated arrays        est. size (Mb)     dimensions
   Kohn-Sham Wavefunctions           4.63 Mb      (   2373,  128)
   NL pseudopotentials               9.12 Mb      (   2373,  252)
   Each V/rho on FFT grid            1.48 Mb      (  48600,    2)
   Each G-vector array               0.14 Mb      (  18817)
   G-vector shells                   0.01 Mb      (   1370)
Largest temporary arrays        est. size (Mb)     dimensions
   Auxiliary wavefunctions          18.54 Mb      (   2373,  512)
   Each subspace H/S matrix          4.00 Mb      (    512,  512)
   Each <psi_i|beta_j> matrix        0.49 Mb      (    252,  128)
   Arrays for rho mixing             5.93 Mb      (  48600,    8)

Initial potential from superposition of free atoms
16 total processes killed (some possibly by mpirun during cleanup)
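
P.S. In case it helps to read the memory report: assuming the arrays are
complex double precision (16 bytes per element), the "est. size" column is
simply rows * columns * 16 bytes, e.g.

   Auxiliary wavefunctions:  2373 * 512 * 16 bytes ~ 18.5 Mb
   Subspace H/S matrix:       512 * 512 * 16 bytes =  4.0 Mb

so the arrays listed in this report add up to only about 45 Mb per process.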