Previously, I also used 16 nodes when calculating with ABINIT, and it ran with no problem.
I asked my administrator about it; he said everything is fine with the policy.
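For reference, here is a minimal sketch of the kind of PBS script being discussed in this thread. It combines the #PBS request and mpiexec line quoted below with the full paths Omololu suggests; the job name and the /home/MyDir and /scratch/MyDir paths are placeholders, not real ones:

    #!/bin/bash
    #PBS -l nodes=16:ppn=4    # 16 nodes x 4 cores per node = 64 MPI processes
    #PBS -N graph             # job name (placeholder)
    cd $PBS_O_WORKDIR
    # full paths to the executable, input, and output:
    mpiexec /home/MyDir/espresso-4.2.1/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out
    # or, with shell redirection instead of -in:
    # mpiexec /home/MyDir/espresso-4.2.1/bin/pw.x < /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

Here mpiexec is left without an explicit -np because, per the MIPT-60 user guide quoted below, it picks up the number of processes from PBS automatically.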
On 8 March 2011 07:48, Huiqun Zhou <hqzhou@nju.edu.cn> wrote:
Alexander,

According to your reply to my message, you actually requested 64 CPU cores
(16 nodes, 4 cores per node). This should be no problem unless your
cluster's usage policy prohibits it. At one time we had such a policy on
our cluster: a job could occupy at most 32 CPU cores; otherwise it had to
go into the sequential queue.

Maybe you should ask your administrator whether there is such a policy ...

zhou huiqun
@earth sciences, nanjing university, china
----- Original Message -----
From: Alexander G. Kvashnin
To: PWSCF Forum
Sent: Tuesday, March 08, 2011 12:24 AM
Subject: Re: [Pw_forum] НА: Re: problem in MPI running of QE (16 processors)
Dear all,

I tried to use full paths, but it did not give positive results. It wrote the error message:

application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0
On 7 March 2011 10:30, Alexander Kvashnin <agkvashnin@gmail.com> wrote:
Thanks. I tried to use "<" instead of "-in"; it also didn't work.
OK, I will try to use full paths for input and output and will report the result.

----- Original Message -----
From: Omololu Akin-Ojo <prayerz.omo@gmail.com>
Sent: 7 March 2011, 9:56
To: PWSCF Forum <pw_forum@pwscf.org>
Subject: Re: [Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

Try to see if specifying the full paths helps.
E.g., try something like:

mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

(where /home/MyDir/bin is the full path to your pw.x and /scratch/MyDir/graph.inp is the full path to your input ....)

(I see you use "-in" instead of "<" to indicate the input. I don't know too much, but _perhaps_ you could also _try_ using "<" instead of "-in".)

o.

On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin <agkvashnin@gmail.com> wrote:
> Yes, I wrote
>
> #PBS -l nodes=16:ppn=4
>
> And the MIPT-60 user guide says that mpiexec must choose the number of
> processors automatically; that's why I didn't write anything else.
>
> ________________________________
> From: Huiqun Zhou <hqzhou@nju.edu.cn>
> Sent: 7 March 2011, 7:52
> To: PWSCF Forum <pw_forum@pwscf.org>
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> How did you specify the number of nodes and processors per node in your
> job script?
>
> #PBS -l nodes=?:ppn=?
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
> ----- Original Message -----
> From: Alexander G. Kvashnin
> To: PWSCF Forum
> Sent: Saturday, March 05, 2011 2:53 AM
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> I created a PBS task on the supercomputer MIPT-60, where I write
>
> mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>
> all other

[The original message is not included in full]
_______________________________________________
Pw_forum mailing list
Pw_forum@pwscf.org
http://www.democritos.it/mailman/listinfo/pw_forum
--
Sincerely yours,
Alexander G. Kvashnin
----------------------------------------------------------------
Student
Moscow Institute of Physics and Technology  http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard and Novel Carbon Materials  http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
================================================================