[QE-users] Question regarding resource control for QE calculation
Yeon, Jejoon
jyeon at udel.edu
Wed Oct 9 22:13:32 CEST 2019
Hello
I'm not very familiar with QE yet, so I'm not sure what the most efficient settings for my simulations would be. So far I have submitted QE jobs based on a very rough estimate of the cores and memory needed, but I would now like to learn how to submit my relax/vc-relax jobs with the most efficient settings. Please bear with me if these questions are very basic; I couldn't find enough information about them in the QE manual.
1) How much memory (i.e., how many cores) should I request when I submit a job?
Our machine uses Slurm and allocates ~5 GB of memory per core; each node has 36 cores.
Early in the relaxation output file I see the line "Estimated total dynamical RAM > 38.86 GB". Does this mean I need at least 38.86 GB of total memory for this calculation?
If so, is any excess memory simply wasted? For example, if I requested 45 GB in total (9 cores) when submitting this job, would the extra 6.14 GB be wasted because it is never used?
Is it generally best to request exactly the amount of memory (cores) indicated by "Estimated total dynamical RAM", or should I request comfortably more?
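
For concreteness, this is roughly the kind of submission script I have in mind; the module name and input/output file names are just placeholders for whatever applies on our cluster:

#!/bin/bash
#SBATCH --job-name=relax
#SBATCH --ntasks=8           # ceil(38.86 GB / 5 GB per core) = 8 MPI tasks
#SBATCH --mem-per-cpu=5G     # ~5 GB per core -> 40 GB total, just above the 38.86 GB estimate
#SBATCH --time=24:00:00

module load espresso         # placeholder for whatever provides pw.x here

mpirun -np 8 pw.x -in relax.in > relax.out
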
2) Node / core allocation.
For the relaxation job that reports "Estimated total dynamical RAM > 38.86 GB", I can request 8 cores at 5 GB of memory per core. In this case, is it faster or more efficient for QE to take all 8 cores from a single node (or from as few nodes as possible), or does it not matter if the cores are spread across different nodes?
In other words, for a given number of cores, does a QE calculation (relax or vc-relax) gain speed from being packed onto the fewest nodes possible, or does it not matter?
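
Again for concreteness, the two allocation patterns I am asking about would look roughly like this in the Slurm script (everything else unchanged from the sketch above):

# (a) pack all 8 tasks onto a single node
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem-per-cpu=5G

# versus (b) request 8 tasks and let Slurm place them on however many nodes it chooses
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=5G
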
Thank you
Best,