[Pw_forum] How to know from the output file that the GPU part is really running in QE-GPU

Filippo Spiga spiga.filippo at gmail.com
Fri Sep 19 15:14:12 CEST 2014

Dear Karim,

I saw your emails, take it easy. Well... I am on holidays (almost) on the opposite side of the world. So I reply when I want and when I have time (because I am not paid to do so).

Assuming you compiled QE-GPU correctly (with the configure script placed inside GPU/), a short header should appear at the very beginning of the output once you run it. Something like this:


       GPU-accelerated Quantum ESPRESSO (svn rev. unknown)
       (parallel: Y )


     Program PWSCF v.5.1 (svn rev. 11174M) starts on 19Sep2014 at  2:32: 8

     This program is part of the open-source Quantum ESPRESSO suite
     for quantum simulation of materials; please cite
         "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
          URL http://www.quantum-espresso.org",
     in publications or presentations arising from this work. More details at

     Parallel version (MPI & OpenMP), running on      16 processor cores
     Number of MPI processes:                16
     Threads/MPI process:                     1
     R & G space division:  proc/nbgrp/npool/nimage =      16
     Reading input from ./pw-ausurf112_k-SCF_3.in

(this output comes from a checkout of QE from the SVN; the same should happen using QE v5.1). There are no other differences in the structure of the textual output. This was done on purpose, so that people can continue to apply their own scripts to parse the output of PWscf. If you do not see those lines indicating that you are running GPU-accelerated Quantum ESPRESSO, then send over the steps you followed to compile QE + QE-GPU, your environment, and the config.log generated by the configure script (located under "GPU/install/").
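Since the banner is the only textual difference, a simple grep is enough to confirm the GPU build from a script. A minimal sketch, where "pw.out" is a hypothetical output file name to be replaced with the actual output of your run:

```shell
# Check a PWscf output file for the QE-GPU banner shown above.
# "pw.out" is a placeholder; substitute your real output file.
grep -q "GPU-accelerated Quantum ESPRESSO" pw.out \
  && echo "GPU build: yes" \
  || echo "GPU build: no"
```

The `-q` flag suppresses grep's output and only sets the exit status, which makes the check easy to embed in job scripts.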

Remember that you can always check whether the GPUs are in use by logging in to any of the compute nodes allocated to your job and running "nvidia-smi" (the only case where you cannot do this is on a CRAY XK* system). You will see your process/processes attached to the GPUs available on that node, and a chunk of GPU memory allocated to one or more instances of pw-gpu.x.
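A minimal sketch of that check, to be run on a compute node allocated to the job (the query options are standard nvidia-smi flags; the guard around the command is only there so the script degrades gracefully on nodes without the NVIDIA driver):

```shell
# List GPU-attached compute processes on this node: PID, process name,
# and GPU memory used. pw-gpu.x should appear here if the GPU is in use.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
else
  echo "nvidia-smi not available on this node"
fi
```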


On Sep 18, 2014, at 11:13 PM, Karim Elgammal <egkarim at gmail.com> wrote:
> Dears;
> I have now the latest QE-GPU version 14.06 installed with QE5.1 
> I run a simple scf calculation as a test by the following:
> export PHI_DGEMM_SPLIT=0.975
> export PHI_ZGEMM_SPLIT=0.975
> pw-gpu.x < G.scf.in > G.scf.out
> Is there any special variable to declare to be able to run on the GPU? as in the output it specifies that "Program PWSCF v.5.1 starts........." with nothing specified about "GPU".
> Can you kindly send me example output file from the GPU enabled code as well as example shell script for running the pw-gpu? 
> -- 
> Thank you and Best Regards;
> Karim Elgammal
> Sweden
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


