Dear Joaquin,

I also suggest you run some tests with the -ntg parallelization level (http://www.quantum-espresso.org/user_guide/node18.html).
For example:

/opt/openmpi/bin/mpirun -np 64 /opt/espresso-4.1.2/bin/pw.x -npool 8 -ntg 8 < input.in

I got good performance using the -ntg parallelization level.
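A minimal sketch of such a test, using the same paths and processor count as above (the -ntg values and output file names are just placeholders), would be:

# run the same job with different numbers of task groups (-ntg)
for NTG in 1 2 4 8; do
    /opt/openmpi/bin/mpirun -np 64 /opt/espresso-4.1.2/bin/pw.x \
        -npool 8 -ntg $NTG < input.in > out.npool8.ntg$NTG
done

Comparing the wall times reported at the end of each output file shows which -ntg value works best on your machine.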

Best regards,
Andre

On Sat, Aug 28, 2010 at 11:10 AM, joaquin peralta <jperaltac@gmail.com> wrote:
Dear André,

I'm embarrassed; it was my mistake all along. I have scripts that
generate these PBS files, and the option was always set incorrectly for
OpenMPI: I was using -np instead of -npool.

I will try again in order to check the performance.
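For reference, the corrected line in the generated PBS script should look something like the following (the node request and paths are only placeholders taken from André's example, not my actual setup):

#PBS -l nodes=8:ppn=8
cd $PBS_O_WORKDIR
# -np 64 sets the total number of MPI processes for mpirun, while
# -npool 8 is a pw.x option that splits those processes into k-point pools
/opt/openmpi/bin/mpirun -np 64 /opt/espresso-4.1.2/bin/pw.x -npool 8 < input.in > output.out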

Thank you so much.

Joaquin

On Sat, Aug 28, 2010 at 8:42 AM, Andre Martinotto
<almartinotto@gmail.com> wrote:
> Dear Joaquin,
>
> In principle, the compilation process appears to be correct.
>
> What is your execution command? Are you using -npool 8? The number of
> processes (-np) is 64 in both cases, but in the second case I believe
> you are using -npool 8.
>
> For example, something like:
>
> /opt/openmpi/bin/mpirun -np 64 /opt/espresso-4.1.2/bin/pw.x -npool 8 < entrada.in
>
> Best regards,
> André Luis Martinotto
>
> Andre Martinotto
> Email: almartinotto at gmail.com
> Computing Department
> Universidade de Caxias do Sul
> Caxias do Sul - RS, Brazil
>
> On Sat, Aug 28, 2010 at 3:05 AM, joaquin peralta <jperaltac@gmail.com>
> wrote:
>>
>> Dear Forum,
>>
>> A couple of days ago I compiled OpenMPI 1.4 and Quantum ESPRESSO.
>> However, when I send the job to the queue system, the nodes show that
>> the command was executed with the "-np 8" option, but the output does not:
>>
>> Parallel version (MPI), running on 64 processors
>> R & G space division: proc/pool = 64
>>
>> and with OpenMPI it shows me a different status:
>>
>> Parallel version (MPI), running on 64 processors
>> K-points division: npool = 8
>> R & G space division: proc/pool = 8
>>
>> I'm a little bit confused; I don't understand what I did wrong in the
>> compilation procedure for OpenMPI 1.4 or espresso.
>>
>> OpenMPI configure settings:
>>
>> ./configure --prefix=/local/openmpi --disable-dlopen \
>>     --enable-mpirun-prefix-by-default --enable-static --enable-mpi-threads \
>>     --with-valgrind --without-slurm --with-tm --without-xgrid \
>>     --without-loadleveler --without-elan --without-gm --without-mx \
>>     --with-udapl --without-psm CC=icc CXX=icpc F77=ifort FC=ifort
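>>
>> As a quick sanity check of the build (the install prefix is the one from the configure line above), something like
>>
>> /local/openmpi/bin/ompi_info | grep "Open MPI:"
>> /local/openmpi/bin/mpirun -np 4 hostname
>>
>> shows which OpenMPI version is installed at that prefix and whether its mpirun can launch processes.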
>>
>> The Quantum ESPRESSO settings used in the compilation are located here:
>>
>> http://www.lpmd.cl/jperalta/uploads/Site/make-ompi.sys
>>
>> I really appreciate any help, because for my cases the difference in
>> calculation time when using -np is huge.
>>
>> Joaquin Peralta
>> Materials Science and Engineering
>> Iowa State University
>>
>> --
>> ----------------------------------------------------
>> Group of NanoMaterials
>> ----------------------------------------------------
>> http://www.gnm.cl
>> ----------------------------------------------------
>> Joaquín Andrés Peralta Camposano
>> ----------------------------------------------------
>> http://www.lpmd.cl/jperalta
>>
>> In a world without frontiers,
>> who needs Gates and Win.

--
----------------------------------------------------
Group of NanoMaterials
----------------------------------------------------
http://www.gnm.cl
----------------------------------------------------
Joaquín Andrés Peralta Camposano
----------------------------------------------------
http://www.lpmd.cl/jperalta

In a world without frontiers,
who needs Gates and Win.
_______________________________________________
Pw_forum mailing list
Pw_forum@pwscf.org
http://www.democritos.it/mailman/listinfo/pw_forum