[Pw_forum] On the run of GPU version of QE
Alexander G. Kvashnin
agkvashnin at gmail.com
Wed Mar 26 08:33:55 CET 2014
Dear Filippo,
Thank you for your quick response.
I recompiled it without MAGMA; that error disappeared and now it works.
But I have another question. I tried to test the performance of QE-GPU
against QE without GPU acceleration. If I understand correctly, the command
for the run should look like the following:

mpirun -np 2 /qe-dir/bin/pw-gpu.x -in input > output

It should then start 2 MPI processes and use the 2 GPUs on my host. Is that
correct?
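(A minimal sanity check on my side, assuming one MPI rank per GPU: I list
the devices first and then launch.

nvidia-smi -L
mpirun -np 2 /qe-dir/bin/pw-gpu.x -in input > output

nvidia-smi -L should report both Tesla K20Xm cards; as far as I understand,
QE-GPU then assigns one GPU to each MPI process automatically.)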
Thanks!
--
Sincerely yours,
Alexander G. Kvashnin
=====================================================
PhD Student
Moscow Institute of Physics and Technology, http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia
Junior research scientist
Technological Institute for Superhard and Novel Carbon Materials, http://www.tisnum.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
=====================================================
On 26 March 2014 00:41, Filippo Spiga <spiga.filippo at gmail.com> wrote:
> Dear Alexander,
>
> On Mar 25, 2014, at 5:29 AM, Alexander G. Kvashnin <agkvashnin at gmail.com>
> wrote:
> > Dear QE users and developers,
> >
> > Recently I compiled QE-GPU on my system with GPU NVidia Tesla K20Xm.
> > I have CUDA ver. 5.5.
> > Intel MPI 4.1.3.048
> > Intel Compiler 14.0
> >
> > I compiled GPU-QE using the instructions on the website. I downloaded
> > espresso-5.0.2, then downloaded QE-GPU-14.01.0 with the patch file.
> > After patching I typed the configuration command:
> >
> > ./configure
> > LAPACK_LIBS=/opt/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64/libmkl_lapack95_lp64.a
> > --enable-openmp --enable-parallel --enable-cuda --with-gpu-arch=30
> > --with-cuda-dir=/opt/cuda/5.5 --enable-phigemm --enable-magma
>
> "--with-phigemm" and "--with-magma" are the correct. i will add a check
> that, if someone spell those wrong, the configure step will halt
> completely. I am going to check the website and the README too to verify
> there is consistency in the instructions.
>
> I do suggest that you do _not_ specify any BLAS/LAPACK/SCALAPACK variables
> on the configure line, but edit make.sys afterwards instead!
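> For instance, something like this in make.sys (just a sketch; the exact
> MKL link line depends on your compiler and MKL version, see the MKL
> link-line advisor):
>
> # threaded MKL with Intel compilers; MKLROOT must be set in the environment
> BLAS_LIBS   = -L$(MKLROOT)/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
> # LAPACK is contained in MKL itself, so LAPACK_LIBS can stay empty here
> LAPACK_LIBS =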
>
>
>
> > After the successful configuration I compiled pw-gpu.x and everything
> > was fine. Then I tried to test the performance of this system compared
> > with a common (CPU-only) HPC run.
> > But I ran into trouble. As I understand it, commonly 1 process goes to
> > 1 GPU, so I need to run 2 MPI processes on my host with 2 GPUs. If I am
> > wrong, please tell me where. After the start I get the error message:
> >
> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> > task # 0
> > from cdiaghg : error # 61
> > diagonalization (ZHEGV*) failed
> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> Probably because MAGMA is used (there should be a message at the top of
> the output). The next version will print much more information at the top
> of the file, so it will be possible to understand how the application was
> compiled and which features are active.
>
> Can you try recompiling using "--without-magma"?
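> For example, adapting your original line (a sketch; keep your own CUDA
> path and GPU architecture value):
>
> ./configure --enable-openmp --enable-parallel --enable-cuda --with-gpu-arch=30 --with-cuda-dir=/opt/cuda/5.5 --with-phigemm --without-magma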
>
> Thanks
> F
>
> --
> Mr. Filippo SPIGA, M.Sc.
> http://www.linkedin.com/in/filippospiga ~ skype: filippo.spiga
>
> «Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
>