[QE-users] Error while running QE
Chandan Kumar Choudhury
ckchoud at g.clemson.edu
Mon Mar 15 12:55:29 CET 2021
Hello QE users:
I get the following error when running pw.x on a 96 GB RAM AMD EPYC machine:
free(): invalid next size (fast)
Is this caused by insufficient RAM or by an incorrect compilation of QE? Any suggestions would be very helpful.
Thank you!
Snippet of output file:
Program PWSCF v.6.7MaX starts on 15Mar2021 at 6: 3:30
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
"P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI & OpenMP), running on 2304 processor cores
Number of MPI processes: 48
Threads/MPI process: 48
MPI processes distributed on 1 nodes
R & G space division: proc/nbgrp/npool/nimage = 48
Waiting for input...
Reading input from standard input
Warning: card &CELL ignored
Warning: card / ignored
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
gamma-point specific algorithms are used
Subspace diagonalization in iterative solution of the eigenvalue problem:
one sub-group per band group will be used
scalapack distributed-memory algorithm (size of sub-group: 6* 6 procs)
...
...
Dense grid: 3519231 G-vectors FFT dimensions: ( 240, 240, 240)
Smooth grid: 890216 G-vectors FFT dimensions: ( 160, 160, 160)
Estimated max dynamical RAM per process > 168.49 MB
Estimated total dynamical RAM > 7.90 GB
Atomic positions and unit cell read from directory:
./PEO_tri.save/
Message from routine qexsd_readschema :
xml data file ./PEO_tri.save/data-file-schema.xml not found
Nothing found: using input atomic positions and unit cell
...
...
Writing output data file ./PEO_tri.save/
free(): invalid next size (fast)
[qm-qe-1:04802] *** Process received signal ***
[qm-qe-1:04802] Signal: Aborted (6)
[qm-qe-1:04802] Signal code: (-6)
[qm-qe-1:04802] [ 0] /usr/lib64/libpthread.so.0(+0x12b30)[0x7fc5013e7b30]
[qm-qe-1:04802] [ 1] /usr/lib64/libc.so.6(gsignal+0x10f)[0x7fc50104984f]
[qm-qe-1:04802] [ 2] /usr/lib64/libc.so.6(abort+0x127)[0x7fc501033c45]
[qm-qe-1:04802] [ 3] /usr/lib64/libc.so.6(+0x7a9d7)[0x7fc50108c9d7]
[qm-qe-1:04802] [ 4] /usr/lib64/libc.so.6(+0x81ddc)[0x7fc501093ddc]
[qm-qe-1:04802] [ 5] /usr/lib64/libc.so.6(+0x83778)[0x7fc501095778]
[qm-qe-1:04802] [ 6] /home/chandan_prescience_in/softwares/aocc-compiler-2.3.0/lib/libflang.so(f90_dealloc03a_i8+0xad)[0x7fc5029909dd]
[qm-qe-1:04802] [ 7] pw.x[0x12db14a]
[qm-qe-1:04802] [ 8] pw.x[0x1248b93]
[qm-qe-1:04802] [ 9] pw.x[0x95cdd0]
[qm-qe-1:04802] [10] pw.x[0x95cba8]
[qm-qe-1:04802] [11] pw.x[0x95cad4]
[qm-qe-1:04802] [12] pw.x[0x95b20b]
[qm-qe-1:04802] [13] pw.x[0x9764e0]
[qm-qe-1:04802] [14] pw.x[0x70091a]
[qm-qe-1:04802] [15] pw.x[0x6fa425]
[qm-qe-1:04802] [16] pw.x[0x72c5ec]
[qm-qe-1:04802] [17] pw.x[0x4caa55]
[qm-qe-1:04802] [18] pw.x[0x1a03326]
[qm-qe-1:04802] [19] /usr/lib64/libc.so.6(__libc_start_main+0xf3)[0x7fc501035803]
[qm-qe-1:04802] [20] pw.x[0x4ca7fe]
[qm-qe-1:04802] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node qm-qe-1 exited on signal 6 (Aborted).
--------------------------------------------------------------------------
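One thing worth checking before digging into the crash itself: the header above reports 48 MPI processes with 48 threads per process (2304 cores) on a single node. If the machine has fewer physical cores than that, the run is heavily oversubscribed, and pinning the OpenMP thread count explicitly would rule that out as a factor. A minimal launch sketch (the rank/thread counts and file names below are placeholders, not values taken from the run above):

```shell
# Hypothetical launch sketch: set the OpenMP thread count explicitly so
# that (MPI ranks) x (threads per rank) does not exceed the physical
# core count of the node.
export OMP_NUM_THREADS=2                 # threads per MPI rank (example value)
mpirun -np 48 --bind-to core pw.x -inp pw.in > pw.out
```

That said, since the backtrace ends in libflang's deallocation routine, the free() error may instead point at the AOCC/flang build rather than at memory pressure, so reproducing the run with a different compiler toolchain could also help narrow it down.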
--
Chandan Kumar Choudhury, PhD
Senior Scientist (Computational Science)
Prescience.in