[QE-users] MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1
Microsoft.com team
hammad_tarek at hotmail.com
Mon Aug 30 12:12:37 CEST 2021
Dear Prof. Lorenzo Paulatto,
Thanks a lot for your help. Yes, it was indeed a problem of wrong units, and it was solved after using the correct units.
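For the record, the distinction amounts to the following (a minimal sketch, assuming diamond's cubic lattice constant a ≈ 3.567 Angstrom ≈ 6.74 Bohr; pw.x reads celldm(1) in Bohr, while the alternative keyword A is read in Angstrom):

! either give the lattice parameter in Bohr ...
celldm(1) = 6.74   ! = 3.567 Angstrom * 1.889726 Bohr/Angstrom
! ... or give it in Angstrom via A and omit celldm(1)
A = 3.567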
Kind regards
Dr. Tarek Hammad.
________________________________
From: users <users-bounces at lists.quantum-espresso.org> on behalf of Lorenzo Paulatto <paulatz at gmail.com>
Sent: Saturday, August 28, 2021 7:09 PM
To: Quantum ESPRESSO users Forum <users at lists.quantum-espresso.org>
Cc: Quantum ESPRESSO users Forum <users at lists.quantum-espresso.org>
Subject: Re: [QE-users] MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1
1. you're using a lattice parameter which is about half that of diamond; probably a wrong unit of measure?
2. it won't give any meaningful result without k-points
3. apart from that, increasing the cutoff and/or not using many CPUs to run a tiny little system that can run on my phone should solve the issue (see the input sketch after this list)
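To make the three points concrete, here is a minimal sketch of an scf input for diamond (the lattice constant a ≈ 3.567 Angstrom ≈ 6.74 Bohr is the standard value; the pseudopotential file name is a placeholder, and the cutoff and k-point grid are only indicative and must be converged for production runs):

&CONTROL
  calculation = 'scf'
  prefix = 'diamond'
  pseudo_dir = './'      ! directory containing the pseudopotential file
/
&SYSTEM
  ibrav = 2              ! fcc Bravais lattice
  celldm(1) = 6.74       ! point 1: lattice parameter in Bohr (= 3.567 Angstrom), not in Angstrom
  nat = 2
  ntyp = 1
  ecutwfc = 60.0         ! point 3: plane-wave cutoff in Ry, to be converged
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
C  12.011  C.UPF
ATOMIC_POSITIONS alat
C 0.00 0.00 0.00
C 0.25 0.25 0.25
K_POINTS automatic
8 8 8 0 0 0

The K_POINTS card takes care of point 2; for a two-atom cell like this, one or two MPI processes are more than enough (point 3).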
regards
--
Lorenzo Paulatto - Paris
On Aug 28 2021, at 7:30 pm, Microsoft.com team <hammad_tarek at hotmail.com> wrote:
Dear QE team and users
I am using QE version 6.7.0 compiled with gfortran in parallel. My operating system is Ubuntu 20.04.3 LTS, 64-bit.
To test the ElaStic code I constructed the structure file "c.elastic.in", attached to this mail.
From it I generated a number of distorted structures, for instance "Dst01_01.in", and ran the command
"mpirun -np 2 /home/tarek/software/q-e-qe-6.7.0/bin/pw.x < Dst01_01.in > Dst01_01.out"
However, I got this message in the terminal:
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[tarek-Latitude-E6430:56552] PMIX ERROR: UNREACHABLE in file ../../../src/server/pmix_server.c at line 2193
[tarek-Latitude-E6430:56552] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[tarek-Latitude-E6430:56552] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
--------------------------------------------------------------------------
The output and CRASH files are attached to this mail.
Thanks a lot for your great efforts.
Tarek Hammad.
_______________________________________________
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users at lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users