[QE-users] [QE-GPU] High GPU oversubscription detected

Yin-Ying Ting y.ting at fz-juelich.de
Thu Nov 30 10:24:29 CET 2023


Dear Paolo,

Thank you for your prompt response. Your suggestion was very helpful.

I have reviewed the numbers and found that, regardless of the value of X in --gres=gpu:X, ndev is consistently reported as 1. Our HPC documentation states that --gres=gpu:X is the correct way to request GPUs, and each node has 4 GPUs. Here is the output when I set --gres=gpu:4:

     GPU acceleration is ACTIVE.
     GPU-aware MPI enabled

     nproc (MPI process):                 4
     ndev (GPU per Node):                 1
     nnode (Nodes):                       1
     Message from routine print_cuda_info:
     High GPU oversubscription detected. Are you sure this is what you want?
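
This would be consistent with the scheduler exposing only one device per MPI task. Below is a minimal CUDA Fortran sketch of how a process can count its visible GPUs (assuming QE derives ndev from something like cudaGetDeviceCount; the routine and context are my guess, not taken from environment.f90):

     ! Sketch: count the GPUs visible to one process (compile with nvfortran).
     ! cudaGetDeviceCount reports only devices listed in CUDA_VISIBLE_DEVICES,
     ! so per-task GPU binding by Slurm makes every rank see exactly one device.
     PROGRAM count_visible_gpus
        USE cudafor
        IMPLICIT NONE
        INTEGER :: ndev, ierr
        ierr = cudaGetDeviceCount( ndev )
        PRINT *, 'GPUs visible to this process:', ndev
     END PROGRAM count_visible_gpus

If each rank sees only one GPU, then ndev = 1 and the check quoted below, nproc > ndev * nnode * 2, evaluates to 4 > 2, which would trigger the warning even though all four GPUs are busy.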

I monitored GPU utilization every 10 seconds, and it appears that all 4 GPUs are active when --gres=gpu:4 is set (one row per GPU):

utilization.gpu [%], utilization.memory [%]
96 %, 37 %
95 %, 76 %
95 %, 50 %
95 %, 76 %
time = 70 s

For reference, here is my sbatch submission script:

-----------------------------------------------------------------------

#!/bin/bash -x
#SBATCH --gres=gpu:4 --partition=dc-gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:00:20

export OMP_NUM_THREADS=1

module load NVHPC/23.7-CUDA-12
module load CUDA/12
module load OpenMPI/4.1.5
module load mpi-settings/CUDA
module load imkl/2023.2.0

# Append utilization for all visible GPUs (one CSV row per GPU) every 10 s.
monitor_gpu_usage() {
    while true; do
        nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv >> gpu_usage_$SLURM_JOB_ID.csv
        sleep 10
    done
}

monitor_gpu_usage &   # start the monitor in the background
MONITOR_PID=$!

srun -n 4 pw.x -nk 4 -nd 1 -nb 1 -nt 1 < inp_pwscf > out_pwscf

kill $MONITOR_PID     # stop the monitor once pw.x has finished

-------------------------------------------------------------------------


Could you please provide guidance on resolving the oversubscription issue? Thank you very much in advance.

Kind regards,

Yin-Ying Ting


On 29.11.23 15:53, Paolo Giannozzi wrote:
On 11/27/23 11:32, Yin-Ying Ting wrote:

Based on the *environment.f90* file, this message is triggered when nproc > ndev * nnode * 2. If I understand correctly, I have nproc (number of parallel processes) = 4, ndev (number of GPU devices per node) = 4, and nnode (number of nodes) = 1. The condition should therefore be false (4 > 8 does not hold), yet the message still appears. All 4 GPUs were active during the run.

Funny. Even funnier, the number of GPUs actually used does not seem to be written anywhere in the output.

Add a line printing nproc, ndev, and nnode just before the warning is issued, then recompile and re-run. At least one of those numbers is not what you expect. Computers are not among the most reliable machines, but they should be able to figure out which of 4 and 8 is larger.
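
A minimal sketch of such a debug line (variable names follow the warning text; the actual names in environment.f90 may differ):

     ! Hypothetical debug print, placed just before the oversubscription
     ! check ( nproc > ndev * nnode * 2 ) in environment.f90.
     WRITE( *, '(5X,A,3I8)' ) 'DEBUG nproc, ndev, nnode =', nproc, ndev, nnode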

Paolo
--

Forschungszentrum Jülich GmbH
Institute of Energy and Climate Research
Theory and Computation of Energy Materials (IEK-13)
E-mail: y.ting at fz-juelich.de

