Alexander,

According to your reply to my message, you actually requested 64 CPU cores
(16 nodes, 4 cores per node). That by itself should cause no problem,
unless the usage policy of your cluster prohibits it. Once upon a time we
had such a policy on our cluster: a job could occupy at most 32 CPU cores;
anything larger had to go into the sequential queue.
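
For concreteness, a 64-core run of that shape needs only the resource line
plus the mpiexec call. Below is a minimal sketch of such a PBS job script;
the job name, walltime, and file paths are placeholders for illustration
(borrowed from Omololu's example quoted below), not your actual setup:

    #!/bin/bash
    #PBS -N graph                  # job name (placeholder)
    #PBS -l nodes=16:ppn=4         # 16 nodes x 4 cores/node = 64 MPI processes
    #PBS -l walltime=24:00:00      # adjust to your queue's limit
    #PBS -j oe                     # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR

    # Use full paths to both the executable and the input, as suggested
    # below; /home/MyDir and /scratch/MyDir are placeholders.
    mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out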

Maybe you should ask your administrator whether there is such a policy ...
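
If your cluster runs PBS/TORQUE, you can also inspect the queue limits
yourself before asking. A sketch, assuming the queue is named "batch" (the
queue name and the exact attribute names vary from site to site):

    # Print the full settings of queue "batch"; a core cap would show up
    # among the resources_max entries (e.g. resources_max.nodect).
    qstat -Qf batch

    # Or list all queues and their limits at a glance.
    qstat -q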

zhou huiqun
@earth sciences, nanjing university, china

----- Original Message -----
From: Alexander G. Kvashnin <agkvashnin@gmail.com>
To: PWSCF Forum <pw_forum@pwscf.org>
Sent: Tuesday, March 08, 2011 12:24 AM
Subject: Re: [Pw_forum] RE: Re: problem in MPI running of QE (16 processors)

Dear all,

I tried to use full paths, but it didn't give positive results. It wrote
the error message:

application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0

On 7 March 2011 10:30, Alexander Kvashnin <agkvashnin@gmail.com> wrote:

Thanks, I tried to use "<" instead of "-in"; it also didn't work.
OK, I will try to use full paths for input and output, and report the result.

----- Original Message -----
From: Omololu Akin-Ojo <prayerz.omo@gmail.com>
Sent: March 7, 2011, 9:56
To: PWSCF Forum <pw_forum@pwscf.org>
Subject: Re: [Pw_forum] RE: Re: problem in MPI running of QE (16 processors)

Try to see if specifying the full paths helps.
E.g., try something like:

mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

(where /home/MyDir/bin is the full path to your pw.x and
/scratch/MyDir/graph.inp is the full path to your input ....)

(I see you use "-in" instead of "<" to indicate the input. I don't know
too much, but _perhaps_ you could also _try_ using "<" instead of "-in".)

o.

On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin <agkvashnin@gmail.com> wrote:
> Yes, I wrote
>
> #PBS -l nodes=16:ppn=4
>
> And the MIPT-60 user guide says that mpiexec must choose the number of
> processors automatically; that's why I didn't write anything else.
>
> ________________________________
> From: Huiqun Zhou <hqzhou@nju.edu.cn>
> Sent: March 7, 2011, 7:52
> To: PWSCF Forum <pw_forum@pwscf.org>
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> How did you specify the number of nodes and processors per node in your
> job script?
>
> #PBS -l nodes=?:ppn=?
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
> ----- Original Message -----
> From: Alexander G. Kvashnin
> To: PWSCF Forum
> Sent: Saturday, March 05, 2011 2:53 AM
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> I created a PBS task on the supercomputer MIPT-60 where I wrote
>
> mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>
> all other

[Not all of the original message text was included]

--
Sincerely yours,
Alexander G. Kvashnin
--------------------------------------------------------------------------------------------------------------------------------
Student
Moscow Institute of Physics and Technology        http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard
and Novel Carbon Materials        http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
================================================================

_______________________________________________
Pw_forum mailing list
Pw_forum@pwscf.org
http://www.democritos.it/mailman/listinfo/pw_forum