[Pw_forum] Pw_forum Digest, Vol 27, Issue 76
Gabriele Sclauzero
sclauzer at sissa.it
Thu Sep 24 08:52:09 CEST 2009
wangqj1 wrote:
> >On Wed, 23 Sep 2009 15:06:07 +0200, Lorenzo Paulatto <paulatto at sissa.it> wrote:
> >
> >On 23 September 2009 at 14:12:53, wangqj1 <wangqj1 at 126.com> wrote:
> >>> /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp>ZnO.pw.out
> >> It only prints the output header and does not start the iterations.
>
>>Maybe it's not reading the input at all! You should specify the full path
>>to your input file, e.g.
>> mpirun ... bin/pw.x -in /where/it/is/ZnO.pw.inp
>>just to be sure.
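(Just to make this concrete: with placeholder paths, which you should replace with the real ones on your machine, the full command line in your PBS script could look like

   mpirun -np 8 /home/wang/bin/pw.x -npool 8 -in /home/wang/ZnO.pw.inp > /home/wang/ZnO.pw.out

so that pw.x does not depend on whatever working directory the batch system happens to start in.)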
>
> The output file is as follows:
> Program PWSCF v.4.0.1 starts ...
>
> Today is 24Sep2009 at 8:49:30
> Parallel version (MPI)
> Number of processors in use: 8
> R & G space division: proc/pool = 8
> For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or PAW
> .....................................................................
> Initial potential from superposition of free atoms
> starting charge 435.99565, renormalised to 436.00000
> Starting wfc are 254 atomic + 8 random wfc
>
> It seems that it did read the input file.
I think so. Maybe you have problems with lack of memory or very slow disk access (as
already pointed out in previous responses). Have you tried running a smaller system,
increasing the number of processors, or using a local scratch file system...?
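Regarding the local scratch, a minimal sketch of the &CONTROL namelist (the values here are only placeholders; outdir is the PWscf variable that controls where the scratch files go):

   &CONTROL
     calculation = 'scf'
     prefix      = 'ZnO'
     outdir      = '/scratch/wang/tmp'
   /

Pointing outdir to a node-local disk instead of a slow shared filesystem can make a big difference for large systems.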
> When I use K-point parallel, it runs well.
Can you describe more precisely the conditions under which it runs well? What do you
mean by "K-point parallel"?
>
>>
>>> I am not the supperuser, so I don't know how to set the environment variable
>>> OPEN_MP_THREADS to 1. I can't find where OPEN_MP_THREADS is.
>>
>>You don't need to be super-user (nor supper-user ;-) to set an environment
>>variable, you only have to add
>> export OMP_NUM_THREADS=1
>>in your PBS script, before calling mpirun. To be sure it's propagated to
>>all the processors, add the option
>> -x OMP_NUM_THREADS
>>to the mpirun arguments (BEFORE pw.x).
> I want to know whether it is necessary to set the environment variable OMP_NUM_THREADS (I installed MPICH2, not Open MPI)?
>
In case of doubt (like yours), always specify it. It does no harm, and it ensures that
you don't run into trouble because of unwanted thread proliferation.
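One more remark, since you use MPICH2: the -x option mentioned above belongs to Open MPI's mpirun. With MPICH2 the equivalent, if I remember correctly, is the -env option of mpiexec, e.g. (same placeholder paths as before):

   mpiexec -n 8 -env OMP_NUM_THREADS 1 /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out

Simply exporting the variable in the PBS script before calling mpiexec usually works as well.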
HTH
GS
P.S.: Please always sign your mails and include your affiliation. It is not difficult to
have your (web)mail program do this automatically.
Personally, I don't even like seeing mail which uses the username as the display name in
the mail header instead of the full (real) name... but that's my opinion.
--
o ------------------------------------------------ o
| Gabriele Sclauzero, PhD Student |
| c/o: SISSA & CNR-INFM Democritos, |
| via Beirut 2-4, 34014 Trieste (Italy) |
| email: sclauzer at sissa.it |
| phone: +39 040 3787 511 |
| skype: gurlonotturno |
o ------------------------------------------------ o