>Message: 3
>Date: Thu, 24 Sep 2009 08:52:09 +0200
>From: Gabriele Sclauzero <sclauzer@sissa.it>
>Subject: Re: [Pw_forum] Pw_forum Digest, Vol 27, Issue 76
>To: PWSCF Forum <pw_forum@pwscf.org>
>Message-ID: <4ABB1719.7090605@sissa.it>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>
>wangqj1 wrote:
>> >Message: 2
>> >Date: Wed, 23 Sep 2009 15:06:07 +0200
>> >From: "Lorenzo Paulatto" <paulatto@sissa.it>
>> >Subject: Re: [Pw_forum] Pw_forum Digest, Vol 27, Issue 74
>> >To: "PWSCF Forum" <pw_forum@pwscf.org>
>> >Message-ID: <op.u0pn0hl7a8x26q@paulax>
>> >Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes
>> >
>> >On 23 September 2009 at 14:12:53, wangqj1 <wangqj1@126.com> wrote:
>> >>> /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out
>> >> It only prints the output header; there are no iterations.
>>
>> >Maybe it's not reading the input at all! You should specify the full path
>> >to your input file, e.g.
>> >   mpirun ... bin/pw.x -in /where/is/is/ZnO.pw.inp
>> >just to be sure.
>>
>> The output file is as follows:
>>   Program PWSCF v.4.0.1 starts ...
>>
>>   Today is 24Sep2009 at 8:49:30
>>   Parallel version (MPI)
>>   Number of processors in use: 8
>>   R & G space division: proc/pool = 8
>>   For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or PAW
>>   .....................................................................
>>   Initial potential from superposition of free atoms
>>   starting charge 435.99565, renormalised to 436.00000
>>   Starting wfc are 254 atomic + 8 random wfc
>>
>> It seems that it did read the input file.
>
>I think so. Maybe you have problems with lack of memory or very slow disk access (as
>already pointed out in previous responses). Have you tried running a smaller system,
>increasing the number of processors, or using a local scratch file system?
I have run a smaller system with R & G space division; it runs quickly and finishes the task soon. When I run the bigger system (48 atoms), the problem I described in my last message appears.
>> When I use K-point parallel, it runs well.
>
>Can you specify better the conditions under which it runs well? What does it mean that you
>use "K-point parallel"?
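To show what I am comparing: as far as I understand it, the two modes are chosen with the -npool option, roughly like this (the mpirun options and the input path here are only a sketch of my setup, not my exact script):

   # K-point parallelization: 8 pools of 1 process each, so every process
   # works on whole k-points by itself
   mpirun -np 8 /home/wang/bin/pw.x -npool 8 -in /full/path/to/ZnO.pw.inp > ZnO.pw.out

   # R & G space division: 1 pool of 8 processes, so the wavefunctions and
   # FFT grids of each k-point are distributed over all 8 processes
   mpirun -np 8 /home/wang/bin/pw.x -npool 1 -in /full/path/to/ZnO.pw.inp > ZnO.pw.out

If I understand the manual correctly, with -npool 8 each process keeps the complete plane-wave data for its own k-points, while with -npool 1 that data is split across the eight processes.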
I mean that only with the K-point division (the first launch above) can the big system run, but it is very slow.
>
>>
>>>
>>>> I am not the supperuser; I don't know how to set the environment variable
>>>> OMP_NUM_THREADS to 1, and I can't find where OMP_NUM_THREADS is.
>>>
>>>You don't need to be super-user (nor supper-user ;-) to set an environment
>>>variable, you only have to add
>>>   export OMP_NUM_THREADS=1
>>>in your PBS script, before calling mpirun. To be sure it's propagated to
>>>all the processors, add the option
>>>   -x OMP_NUM_THREADS
>>>to the mpirun arguments (BEFORE pw.x).
>> I want to know whether it is necessary to set the environment variable OMP_NUM_THREADS (I installed MPICH2, not OpenMPI)?
>>
>
>In case of doubt (like yours), always specify it. It does no harm, and it ensures that
>you're not running into trouble because of unwanted thread proliferation.
>
>HTH
>
>GS
>
>P.S.: Please, always sign your mails and include your affiliation. It is not difficult to have it
>done automatically by your (web)mail program.
>Personally, I don't even like seeing mail that uses the username as the display name in the
>mail header instead of the full (real) name... but that's my opinion.
>
Thank you!
Best regards
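
P.S.: In case it is useful to someone else, this is roughly the PBS script I am going to try. The resource request, the mpiexec name and the -genv flag are my guesses for an MPICH2 installation, so please correct me if they are wrong:

   #!/bin/bash
   #PBS -l nodes=1:ppn=8
   #PBS -j oe
   cd $PBS_O_WORKDIR

   # avoid unwanted OpenMP threads: one thread per MPI process
   export OMP_NUM_THREADS=1

   # -genv should pass the variable to all MPI processes (the MPICH2
   # counterpart of OpenMPI's -x); -np 8 matches the 8 cores requested above
   mpiexec -genv OMP_NUM_THREADS 1 -np 8 /home/wang/bin/pw.x -npool 8 \
       -in $PBS_O_WORKDIR/ZnO.pw.inp > $PBS_O_WORKDIR/ZnO.pw.out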