[Pw_forum] insufficient virtual memory
Gabriele Sclauzero
sclauzer at sissa.it
Thu Aug 20 09:01:10 CEST 2009
Lorenzo Paulatto wrote:
> On 19 August 2009 at 15:15:55, Raji Abdulrafiu
> <tunderaji at gmail.com> wrote:
>> Despite all these steps, the error persists, and
>> I am just not sure what to do next. I need help, please! I am
>> using ver. 4.0 of QE, and the calculation is being done on the
>> CINECA parallel computers.
>
> Dear Raji,
> the memory usage of projwfc.x depends only on the wfc cutoff; all the
> other changes were probably irrelevant.
>
> The code loads the wavefunctions of all bands for one k-point at a time,
> and all the atomic wavefunctions at the same time. The amount of required
> memory can increase quickly with the number of atoms and with the number of
> bands. Furthermore, that code is not parallelized (at the moment), which
> could cause some problems.
I agree with all of the above, but I'm pretty sure that projwfc has been parallelized over G
vectors and even over pools (although the latter does not distribute memory). It has not
been parallelized with tasks, so each pool works on all the bands.
If the problem is actually a memory issue (although in my experience, if the scf/nscf run
worked on a given number of nodes, the PDOS calculation should work as well, since I believe
it uses less memory than the corresponding pw.x calculation), you can specify
wf_collect=.TRUE. in the pw.x run (for the scf you don't need to repeat the self-consistency
cycle, simply do a restart calculation) and then run projwfc.x on a larger number of
processors (at a fixed number of pools!), larger than whatever was used for the pw.x run.
In this way you can distribute more memory via R&G space parallelization.
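To make this concrete, here is a rough sketch (names like 'mysystem' and './tmp/' are just
placeholders for your own prefix/outdir, and the processor/pool counts are only an example).
First rerun pw.x with wf_collect, without redoing the self-consistency:

  &control
    calculation  = 'scf'
    restart_mode = 'restart'   ! restart from the converged run, no new scf cycle
    wf_collect   = .true.      ! collect wavefunctions in a processor-independent format
    prefix       = 'mysystem'  ! placeholder: use the prefix of your original run
    outdir       = './tmp/'    ! placeholder: use your original outdir
  /
  (the rest of the input unchanged with respect to the original run)

Then launch projwfc.x on more processors, keeping the number of pools fixed, e.g.

  mpirun -np 16 projwfc.x -npool 2 < projwfc.in > projwfc.out

The extra processors go into the R&G space parallelization within each pool, which is what
actually distributes the memory.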
>
> I would advise you to request an entire node (4 processors) for the
> calculation and then use only one processor; in this way you'll have 8GB
> of RAM all to yourself.
This is also a good solution. Anyway, if you still want to run in parallel and need more than
one node, you have to make sure that the MPI implementation is distributing one process per
node.
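For instance (only a sketch: the exact syntax depends on the batch scheduler and MPI library
installed on the CINECA machine you are using; with a PBS-like scheduler and OpenMPI it would
look something like this):

  #PBS -l nodes=2:ppn=4        # reserve two full nodes (4 cores / 8GB each)
  mpirun -npernode 1 -np 2 pw.x < scf.in > scf.out

In this way each MPI process has the full 8GB of its node available. Check the CINECA user
guide for the equivalent options of the scheduler and MPI library actually in use there.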
HTH
GS
> If it still does not work, please ask again
> providing more detail about your system, and *important* the version of QE
> you are using.
>
> best regards
>
>
--
o ------------------------------------------------ o
| Gabriele Sclauzero, PhD Student |
| c/o: SISSA & CNR-INFM Democritos, |
| via Beirut 2-4, 34014 Trieste (Italy) |
| email: sclauzer at sissa.it |
| phone: +39 040 3787 511 |
| skype: gurlonotturno |
o ------------------------------------------------ o