[QE-users] Installation verification
Paolo Giannozzi
p.giannozzi at gmail.com
Tue Apr 30 14:49:54 CEST 2019
The MPI run-time environment (e.g. mpirun) must be consistent with the MPI
libraries you are linking. Be very careful when setting your PATH, LIBPATH,
etc. environment variables. They must be the same when you configure,
compile, and run.
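As a concrete illustration of the advice above, here is a minimal sketch of the kind of environment setup and sanity checks one might run, using the install paths from the original post (adjust them for your own system; the variable names are just for illustration):

```shell
# Paths below are taken from the original post; adjust for your install.
MPI_HOME=/share/apps/softwares/openmpi-4.0.1
QE_HOME=/share/apps/softwares/q-e-qe-6.4

# Put the intended OpenMPI first on PATH and LD_LIBRARY_PATH,
# in the same shell used for configure, make, and mpirun.
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"

# Sanity checks: all three should point into $MPI_HOME.
command -v mpirun                      # expect $MPI_HOME/bin/mpirun
mpirun --version                       # expect the Open MPI 4.0.1 banner
ldd "$QE_HOME/bin/pw.x" | grep -i mpi  # expect libmpi from $MPI_HOME/lib
```

If `ldd` shows libmpi resolving to a different OpenMPI (e.g. a system-wide one from the Rocks cluster distribution), pw.x was compiled against one MPI and is running under another, which is exactly the kind of mismatch that produces the component-not-found errors below.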
Paolo
On Mon, Apr 29, 2019 at 3:07 PM Mahmood Naderan <mahmood.nt at gmail.com>
wrote:
> Hi
> I want to verify that I have correctly built QE with OpenMPI 4.0. I ran
> the following commands and got this error:
>
> [root at rocks7 q-e-qe-6.4]# ./configure
> --prefix=/share/apps/softwares/q-e-qe-6.4
> MPIF90=/share/apps/softwares/openmpi-4.0.1/bin/mpif90
> ...
> [root at rocks7 q-e-qe-6.4]# make all
> ...
> [mahmood at rocks7 job]$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun
> /share/apps/softwares/q-e-qe-6.4/bin/pw.x
> --------------------------------------------------------------------------
> As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.
>
> Efficient, high-speed same-node shared memory communication support in
> Open MPI is available in the "vader" BTL. To use the vader BTL, you
> can re-run your job with:
>
> mpirun --mca btl vader,self,... your_mpi_application
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> A requested component was not found, or was unable to be opened. This
> means that this component is either not installed or is unable to be
> used on your system (e.g., sometimes this means that shared libraries
> that the component requires are unable to be found/loaded). Note that
> Open MPI stopped checking at the first component that it did not find.
>
> Host: rocks7.jupiterclusterscu.com
> Framework: btl
> Component: sm
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> mca_bml_base_open() failed
> --> Returned "Not found" (-13) instead of "Success" (0)
> --------------------------------------------------------------------------
> [rocks7:19796] *** An error occurred in MPI_Init
> [rocks7:19796] *** reported by process [85917697,3255307776955514882]
> [rocks7:19796] *** on a NULL communicator
> [rocks7:19796] *** Unknown error
> [rocks7:19796] *** MPI_ERRORS_ARE_FATAL (processes in this communicator
> will now abort,
> [rocks7:19796] *** and potentially your MPI job)
> [rocks7.jupiterclusterscu.com:19784] PMIX ERROR: UNREACHABLE in file
> server/pmix_server.c at line 2079
> [... the same PMIX ERROR line repeated 30 more times ...]
> [rocks7.jupiterclusterscu.com:19784] 31 more processes have sent help
> message help-mpi-btl-sm.txt / btl sm is dead
> [rocks7.jupiterclusterscu.com:19784] Set MCA parameter
> "orte_base_help_aggregate" to 0 to see all help / error messages
> [rocks7.jupiterclusterscu.com:19784] 8 more processes have sent help
> message help-mca-base.txt / find-available:not-valid
> [rocks7.jupiterclusterscu.com:19784] 4 more processes have sent help
> message help-mpi-runtime.txt / mpi_init:startup:internal-failure
> [rocks7.jupiterclusterscu.com:19784] 3 more processes have sent help
> message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
>
>
>
> Is that normal? Does pw.x need an input file? I also tried with a sample
> input file but got the same error.
>
>
> Regards,
> Mahmood
>
>
> _______________________________________________
> Quantum Espresso is supported by MaX (www.max-centre.eu/quantum-espresso)
> users mailing list users at lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222