Dear Wolfgang,

On 31 May 2010, at 09:53, Wolfgang Gehricht wrote:

> Dear group!
>
> I am experiencing the following problem with pw.x. When I run a job on a single core with 8 processors, the calculation does not exceed the available RAM, i.e. it works. When I run the same job on two cores, the calculation crashes (mpirun: Forwarding signal 12 to job) [relevant parts of the log below]. However, I can compute this job on two cores with a smaller k-point sampling, so I suspect it has something to do with the memory demands/distribution. I am using the "Davidson iterative diagonalization" as the minimizer, with only the threshold settings specified (convergence, first iterative diagonalization).
> Can you please point me in the right direction?
> With thanks
> Yours Wolfgang
> ---
> Parallel version (MPI)
>
> Number of processors in use: 16
> R & G space division: proc/pool = 16

It's not a good idea to run 16 processes on 8 cores... even worse on 2 cores! The data is distributed among the processes, so the more processes in the R&G pool, the smaller the amount of memory needed per process. However, if you run all the processes on the same core, you are not distributing the data *physically*: it all still has to fit into the same RAM. Moreover, those processes will tend to step on each other's toes.

About the k-point sampling (assuming that you don't use parallelization over k-points, i.e. npool=1): the largest data arrays are kept in main memory for only one k-point at a time (unless you use options like disk_io='none' or similar), so increasing the number of k-points will increase the computation time almost linearly, but will leave the memory consumption almost unchanged.

If the problem is with the parallel Davidson diagonalization, you may try to disable it with ndiag=1.
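In practice, the job could be launched with something along these lines. This is only a sketch: "pw.in"/"pw.out" are placeholder file names, 8 stands for the number of *physical* cores you actually have, and the exact spelling of the command-line options may differ slightly between versions of the code:

    # one MPI process per physical core, all in a single R&G pool (npool=1)
    mpirun -np 8 pw.x -in pw.in > pw.out

    # the same run, but with the parallel subspace diagonalization switched off
    mpirun -np 8 pw.x -ndiag 1 -in pw.in > pw.out

    # optionally, parallelize over k-points as well (here 2 pools of 4 processes);
    # note that this increases the memory needed per process, since each pool
    # distributes the R&G data among fewer processes
    mpirun -np 8 pw.x -npool 2 -in pw.in > pw.out

The point is simply that the number of MPI processes should not exceed the number of physical cores available on the machine.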
All that said, from the memory usage estimate reported below it looks like you're not running such a big system... nothing that couldn't be run on a laptop with 2 GB of memory (a rough tally is given at the very end of this message).

HTH

GS

> ...
> Subspace diagonalization in iterative solution of the eigenvalue problem:
> a parallel distributed memory algorithm will be used,
> eigenstates matrixes will be distributed block like on
> ortho sub-group = 4* 4 procs
> ...
> Planes per process (thick) : nr3 = 90 npp = 6 ncplane = 8100
>
>  Proc/  planes  cols      G    planes  cols      G    columns     G
>  Pool       (dense grid)           (smooth grid)       (wavefct grid)
>    1       6    339   18817      6    339   18817        92    2668
>    2       6    339   18817      6    339   18817        92    2668
>    3       6    339   18817      6    339   18817        92    2668
>    4       6    338   18812      6    338   18812        92    2668
>    5       6    338   18812      6    338   18812        93    2669
>    6       6    338   18812      6    338   18812        93    2669
>    7       6    338   18812      6    338   18812        93    2669
>    8       6    338   18812      6    338   18812        93    2669
>    9       6    338   18812      6    338   18812        93    2669
>   10       6    338   18812      6    338   18812        93    2669
>   11       5    339   18817      5    339   18817        93    2669
>   12       5    339   18815      5    339   18815        93    2669
>   13       5    339   18815      5    339   18815        93    2669
>   14       5    339   18815      5    339   18815        92    2666
>   15       5    339   18815      5    339   18815        92    2666
>   16       5    339   18815      5    339   18815        92    2666
>  tot      90   5417  301027     90   5417  301027      1481   42691
> ...
> G cutoff = 1729.8995  ( 301027 G-vectors)     FFT grid: ( 90, 90, 90)
>
> Largest allocated arrays        est. size (Mb)     dimensions
>    Kohn-Sham Wavefunctions           4.63 Mb     (   2373,  128)
>    NL pseudopotentials               9.12 Mb     (   2373,  252)
>    Each V/rho on FFT grid            1.48 Mb     (  48600,    2)
>    Each G-vector array               0.14 Mb     (  18817)
>    G-vector shells                   0.01 Mb     (   1370)
> Largest temporary arrays        est. size (Mb)     dimensions
>    Auxiliary wavefunctions          18.54 Mb     (   2373,  512)
>    Each subspace H/S matrix          4.00 Mb     (    512,  512)
>    Each <psi_i|beta_j> matrix        0.49 Mb     (    252,  128)
>    Arrays for rho mixing             5.93 Mb     (  48600,    8)
>
> Initial potential from superposition of free atoms
> 16 total processes killed (some possibly by mpirun during cleanup)
> _______________________________________________
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum

§ Gabriele Sclauzero, EPFL SB ITP CSEA
  PH H2 462, Station 3, CH-1015 Lausanne
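A rough tally, using only the per-process estimates quoted in the log above (the true footprint per process is somewhat larger once the "Each ..." arrays are counted with their multiplicities, plus code and MPI buffers):

    4.63 + 9.12 + 1.48 + 0.14 + 0.01 + 18.54 + 4.00 + 0.49 + 5.93  ~  44 Mb per process

i.e. a few tens of Mb per process for the largest arrays, which is consistent with the remark above that this is not a particularly memory-hungry system.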