I am trying to run a 1152-atom, 2560-electron plane-wave MD system on a BG/P, and I believe I am running up against memory issues (not a surprise...), but I am not exactly sure how to debug and solve the issue. I am trying to run on 1024 procs (I've tried smp, dual and vn mode), though I guess I may have to go higher - I am not certain yet.

I have kept npool = 1 and nimage = 1, since they didn't seem applicable to my run. I have tried varying ntg among 1, 2 and 32, and kept ndiag at the default.

While varying the number of task groups (ntg), I would get output like the following:

  Parallel version (MPI)

  Number of processors in use:     1024
  R & G space division:  proc/pool = 1024
  wavefunctions fft division:  fft/group = 2

  For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or PAW

  Current dimensions of program pwscf are:
  Max number of different atomic species (ntypx) = 10
  Max number of k-points (npk) = 40000
  Max angular momentum in pseudopotentials (lmaxx) = 3

  gamma-point specific algorithms are used

  Iterative solution of the eigenvalue problem

  a parallel distributed memory algorithm will be used,
  eigenstates matrixes will be distributed block like on
  ortho sub-group = 32* 32 procs

  Message from routine data_structure:
  some processors have no planes
  Message from routine data_structure:
  some processors have no smooth planes

  Planes per process (thick) : nr3 = 480  npp = 1  ncplane = *****

  Proc/  planes  cols     G    planes  cols     G    columns    G
  Pool   (dense grid)          (smooth grid)         (wavefct grid)
    1       1     162   50122     1     162   50122     42     6294
    2       0     162   50122     0     162   50122     42     6294
    3       1     162   50122     1     162   50122     42     6294
    4       0     162   50122     0     162   50122     42     6294
    5       1     162   50122     1     162   50122     42     6294
  (continues similarly for each of the 1024 procs)

So the number of FFT planes that need to be distributed is 480. The table below that line made it seem like there were processors that still weren't taking part in the calculation, and presumably weren't helping to hold the distributed data.

My understanding is that the processors of each task group take part in the FFT calculation for the planes associated with that task group. So my first question is: is the fact that some procs don't have planes in my output actually an issue?
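For reference, here is the back-of-the-envelope check behind that question. It is just my own arithmetic on the numbers printed above, not anything taken from the QE source, so the actual assignment order inside data_structure may well differ - but the counts should come out the same:

    # nr3 = 480 dense-grid planes handed out over 1024 R&G-space procs,
    # at most one plane each, so only 480 procs can get a plane at all.
    nr3, nprocs = 480, 1024
    planes = [nr3 // nprocs + (1 if p < nr3 % nprocs else 0) for p in range(nprocs)]
    print(planes.count(1), "procs with 1 plane,", planes.count(0), "procs with 0 planes")
    # -> 480 procs with 1 plane, 544 procs with 0 planes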
The output continues and the run finally dies here:

  Largest allocated arrays       est. size (Mb)   dimensions
    Kohn-Sham Wavefunctions          73.76 Mb     (  3147, 1536)
    NL pseudopotentials             227.42 Mb     (  3147, 4736)
    Each V/rho on FFT grid            3.52 Mb     ( 230400)
    Each G-vector array               0.19 Mb     (  25061)
    G-vector shells                   0.08 Mb     (  10422)
  Largest temporary arrays       est. size (Mb)   dimensions
    Auxiliary wavefunctions          73.76 Mb     (  3147, 3072)
    Each subspace H/S matrix         72.00 Mb     (  3072, 3072)
    Each <psi_i|beta_j> matrix       55.50 Mb     (  4736, 1536)
    Arrays for rho mixing            28.12 Mb     ( 230400,    8)

  Initial potential from superposition of free atoms
  Check: negative starting charge= -7.401460

  starting charge 2556.45492, renormalised to 2560.00000

  negative rho (up, down):  0.741E+01  0.000E+00
  Starting wfc are 2944 atomic wfcs

  total cpu time spent up to now is 704.01 secs

  per-process dynamical memory: 13.6 Mb

  Self-consistent Calculation

  iteration #  1     ecut= 38.22 Ry     beta=0.70
  Davidson diagonalization with overlap
  ethr = 1.00E-02,  avg # of iterations = 2.0
  process group 2362 has completed

with an error like this in the stderr file:

  Abort(1) on node 210 (rank 210 in comm 1140850688): Fatal error in MPI_Scatterv: Other MPI error, error stack:
  MPI_Scatterv(360): MPI_Scatterv(sbuf=0x36c02010, scnts=0x7fffa940, displs=0x7fffb940, MPI_DOUBLE_PRECISION, rbuf=0x4b83010, rcount=230400, MPI_DOUBLE_PRECISION, root=0, comm=0x84000002) failed
  MPI_Scatterv(100): Out of memory

So I figure I am running out of memory on a node at some point... but I am not entirely sure where (it seems to be in the first electronic step) or how to get around it.
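In case it is useful, here is the rough per-process tally I did from the "Largest allocated/temporary arrays" estimates above. It just adds the printed numbers and assumes the persistent arrays and the largest temporaries are all live at once, so it may overestimate the true peak:

    # Per-process memory tally from the pw.x estimates above (MB).
    allocated = {
        "Kohn-Sham wavefunctions":  73.76,
        "NL pseudopotentials":     227.42,
        "V/rho on FFT grid":         3.52,
        "G-vector arrays":           0.19,
        "G-vector shells":           0.08,
    }
    temporary = {
        "Auxiliary wavefunctions":  73.76,
        "Subspace H/S matrix":      72.00,
        "<psi_i|beta_j> matrix":    55.50,
        "Rho mixing arrays":        28.12,
    }
    total = sum(allocated.values()) + sum(temporary.values())
    print("Estimated peak per process: %.1f MB" % total)  # ~534 MB

If I have the BG/P numbers right (2 GB per node, so roughly 512 MB per rank in vn mode and 1 GB in dual mode), that estimate is already uncomfortably close to what each rank can use, which would at least be consistent with the MPI_Scatterv failure above.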
Any help would be appreciated.

Dave

David E. Farrell
Post-Doctoral Fellow
Department of Materials Science and Engineering
Northwestern University
email: d-farrell2@northwestern.edu