[Pw_forum] Memory distribution problem
pchen229 at illinois.edu
Fri Feb 28 16:12:21 CET 2014
Dear Professors Giannozzi and Spiga,
I think it is memory, because the error message looks like this:
: 02/27/2014 14:06:20| main|zeta27|W|job 221982 exceeds job hard limit
"h_vmem" of queue (2871259136.00000 > limit:2147483648.00000) - sending
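The numbers in the scheduler message are bytes. A quick check (values copied from the log above) confirms the job overshot the 2 GiB h_vmem limit by roughly 0.67 GiB:

```python
# Values taken from the scheduler log above (bytes).
used = 2871259136
limit = 2147483648  # exactly 2 GiB

GiB = 2 ** 30
print(f"used:  {used / GiB:.2f} GiB")   # ~2.67 GiB
print(f"limit: {limit / GiB:.2f} GiB")  # 2.00 GiB
print(f"over:  {(used - limit) / GiB:.2f} GiB")
```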
I normally use h_stack=128M, and it works fine.
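For reference, a minimal sketch of an SGE submission script for this situation, assuming the queue accepts per-slot h_vmem/h_stack requests and that pw.x is on the PATH (the parallel environment name, pool count, and file names are placeholders, not taken from the thread):

```shell
#!/bin/bash
# Sketch of an SGE job script -- queue, PE name, and input/output
# file names are hypothetical; adjust to your cluster and run.
#$ -pe mpi 8                 # 8 MPI slots
#$ -l h_vmem=2G              # per-slot hard virtual-memory limit
#$ -l h_stack=128M           # per-slot stack limit

# Distributing k-points into pools (-nk) can reduce the per-process
# memory footprint; the pool count must divide the number of k-points.
mpirun -np $NSLOTS pw.x -nk 2 -in scf.in > scf.out
```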
On Fri, Feb 28, 2014 at 7:30 AM, Paolo Giannozzi
<paolo.giannozzi at uniud.it> wrote:
> On Thu, 2014-02-27 at 17:30 -0600, Peng Chen wrote:
> > P.S. Most of the jobs failed at the beginning of the scf calculation, and
> > the length of the scf output file is zero.
> are you sure the problem is the size of the RAM and not the size of
> the stack?
> > On Thu, Feb 27, 2014 at 5:09 PM, Peng Chen <pchen229 at illinois.edu>
> > wrote:
> > Dear QE users,
> > Recently, our workstation was upgraded, and there is now a hard limit
> > on memory (2 GB per core). Some QE jobs fail repeatedly
> > (though not always) because one of the MPI processes exceeded the RAM
> > limit and was killed. I am wondering if there is a way to
> > distribute memory usage more evenly across the cores.
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://pwscf.org/mailman/listinfo/pw_forum
> Paolo Giannozzi, Dept. Chemistry&Physics&Environment,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222