[QE-users] Questions regarding parallelism and memory usage of ph.x calculations

Laurens Siemons laurenssiemons at hotmail.be
Tue Jun 11 11:53:35 CEST 2019


Dear QE-users,


I tried to run a ph.x calculation on 100 processors for a system with 5 unique k-points, so I used the -nk 5 option to divide the calculation into 5 pools of 20 processors each. After printing which representations will be calculated in this run, the output ends with the following error message:
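For reference, the job was launched roughly as follows (the input/output file names are placeholders, and the exact MPI launcher may differ on your cluster):

```shell
# Hypothetical launch line: 100 MPI tasks split by -nk into
# 5 k-point pools of 20 tasks each (file names are placeholders).
mpirun -np 100 ph.x -nk 5 -in ph.in > ph.out
```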


...

     Representation   469      1 modes -A  To be done

     Representation   470      1 modes -A  To be done

     Representation   471      1 modes -A  To be done

     Compute atoms:   149,  150,  151,  152,  153,  154,  155,  156,
  157,



===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 16398 RUNNING AT r3c4cn08.hopper
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================

However, if I perform the same calculation without parallelization over the unique k-points, it converges without a problem.

Parallelization over the 5 unique k-points also works without a problem in the corresponding pw.x calculation. Any idea what could be causing this issue?


Apart from this, I have an additional question regarding the memory usage of ph.x calculations. While running such calculations, the memory usage climbs to approximately 200 GB. I'm performing these calculations on a cluster where the amount of memory I can use is limited. Are there general tricks/methods to reduce the memory usage of ph.x calculations?


Thanks in advance,

Laurens Siemons

PhD Student, UAntwerp