[Pw_forum] Pw_forum Digest, Vol 27, Issue 78

wangqj1 wangqj1 at 126.com
Thu Sep 24 14:24:33 CEST 2009


>Message: 3
>Date: Thu, 24 Sep 2009 08:52:09 +0200
>From: Gabriele Sclauzero <sclauzer at sissa.it>
>Subject: Re: [Pw_forum] Pw_forum Digest, Vol 27, Issue 76
>To: PWSCF Forum <pw_forum at pwscf.org>
>
>
>
>wangqj1 wrote:
>>  >Message: 2
>>  >Date: Wed, 23 Sep 2009 15:06:07 +0200
>>  >From: "Lorenzo Paulatto" <paulatto at sissa.it>
>>  >Subject: Re: [Pw_forum] Pw_forum Digest, Vol 27, Issue 74
>>  >To: "PWSCF Forum" <pw_forum at pwscf.org>
>>  >
>>  >On 23 September 2009 at 14:12:53, wangqj1 <wangqj1 at 126.com> wrote:
>>  >>> /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp>ZnO.pw.out
>>  >> It only prints the output header and no iterations appear.
>> 
>>>Maybe it's not reading the input at all! You should specify the full path
>>>to your input file, e.g.
>>>   mpirun ... bin/pw.x -in /where/it/is/ZnO.pw.inp
>>>just to be sure.
>> 
>> The output file is as following:
>>  Program PWSCF     v.4.0.1  starts ...
>> 
>>      Today is  24Sep2009 at  8:49:30
>>       Parallel version (MPI)
>>       Number of processors in use:       8
>>      R & G space division:  proc/pool =    8
>>       For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or PAW
>>  .....................................................................
>>       Initial potential from superposition of free atoms
>>       starting charge  435.99565, renormalised to  436.00000
>>       Starting wfc are  254 atomic +    8 random wfc
>> 
>> It seems that it did read the input file.
>
>I think so. Maybe you have problems with lack of memory or very slow disk access (as
>already pointed out in previous responses). Have you tried running a smaller system,
>increasing the number of processors, or using a local scratch file system, ...?

I have run a smaller system using R & G space division: it runs quickly and finishes the job. When I run the bigger system (48 atoms), the problem I described in my last message appears.
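With Lorenzo's suggestion applied, the launch line would look something like the sketch below; the -np value, the pool count, and the /full/path/to placeholders would of course have to match the real setup (the number of pools must divide the number of MPI processes and should not exceed the number of k-points):

  mpirun -np 8 /home/wang/bin/pw.x -npool 2 -in /full/path/to/ZnO.pw.inp > /full/path/to/ZnO.pw.out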
>> When I use k-point parallelization, it runs well.
>
>Can you specify more precisely the conditions under which it runs well? What do you mean
>when you say you use "K-point parallel"?
 
I mean that the big system runs only when I use k-point division, but it is very slow.
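If the slowness comes from disk access, as suggested above, one thing I could try is pointing outdir in the &CONTROL namelist to a fast local scratch area. A minimal sketch, where the calculation type, prefix, and paths are only placeholders:

  &CONTROL
     calculation = 'scf'
     prefix      = 'ZnO'
     pseudo_dir  = '/full/path/to/pseudo/'
     outdir      = '/local/scratch/wang/tmp/'
  /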
>
>> 
>>>
>>>> I am not the supperuser, and I don't know how to set the environment variable
>>>> OPEN_MP_THREADS to 1; I can't find where OPEN_MP_THREADS is.
>>>
>>>You don't need to be super-user (nor supper-user ;-) to set an environment  
>>>variable, you only have to add
>>>  export OMP_NUM_THREADS=1
>>>in your PBS script, before calling mpirun. To be sure it's propagated to
>>>all the processes, add the option
>>>  -x OMP_NUM_THREADS
>>>to the mpirun arguments (BEFORE pw.x).
>> I want to know whether it is necessary to set the environment variable OMP_NUM_THREADS (I installed MPICH2, not OpenMPI).
>> 
>
>In case of doubt (like yours), always specify it. It does no harm, and it ensures that
>you're not running into trouble because of unwanted thread proliferation.
>
>HTH
>
>GS
>
>
>P.S.: Please always sign your mails and include your affiliation. It is not difficult to have
>this done automatically by your (web)mail program.
>Personally, I don't even like seeing mail that uses the username as the display name in the
>mail header instead of the full (real) name... but that's my opinion.
>
Thank you!
Best regards
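For reference, a PBS script along the lines Lorenzo describes might look like the sketch below. The job name, queue, node layout, and paths are placeholders; since I use MPICH2, mpirun's -x option is assumed to be unavailable, so the variable is simply exported in the script so the MPI processes inherit it (MPICH2's mpiexec -genv option could also be used to pass it explicitly):

  #!/bin/bash
  #PBS -N ZnO_scf
  #PBS -l nodes=1:ppn=8
  #PBS -q batch

  cd $PBS_O_WORKDIR
  export OMP_NUM_THREADS=1
  mpirun -np 8 /home/wang/bin/pw.x -npool 2 -in /full/path/to/ZnO.pw.inp > /full/path/to/ZnO.pw.out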