[QE-users] Problem with electrostatic potential computation

Abhirup Patra abhirupp at sas.upenn.edu
Fri Oct 11 20:23:14 CEST 2019


Thanks, Paolo.

The developer version did work with multiple processors.

Best,
Abhirup
-------------------------------------------------------------------------------------------------------------------------------------
Abhirup Patra
Postdoctoral Research Fellow
Department of Chemistry
University of Pennsylvania


On Fri, Oct 11, 2019 at 7:12 AM Paolo Giannozzi <p.giannozzi at gmail.com>
wrote:

> There were a few glitches in the 6.4.1 pp.x code. Please try the
> development version. If it still doesn't work, please provide a test that
> can be re-run.
>
> Note that most postprocessing codes should be run on a single processor
> (although all of them should work in parallel as well, even if they do not
> actually parallelize anything).
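> For instance, a serial run of the same input (a minimal sketch, assuming
> pp.x is in your PATH and you run it from the same directory as before)
> would just be
>
>   pp.x < Hartree_pot.in > p1_hartree_sp.out
>
> with no mpirun wrapper.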
>
> Paolo
>
>
> On Fri, Oct 11, 2019 at 1:35 AM Abhirup Patra <abhirupp at sas.upenn.edu>
> wrote:
>
>> Dear Users,
>>
>> I have been trying to compute the local electrostatic potential (the Hartree
>> part, to be specific) using the QE post-processing tool pp.x, with the
>> following input:
>> ---------------------------------------------------
>> cat > Hartree_pot.in << !
>> &inputpp
>>    prefix = 'p1_pristine_sp'
>>    outdir = './'
>>    filplot = 'Hartree_pot.dat'
>>    plot_num = 11           ! bare + Hartree electrostatic potential
>> /
>> &plot
>>    nfile = 1
>>    iflag = 3               ! 3D plot
>>    filepp(1) = 'p1_pristine_sp_pot.dat'
>>    weight(1) = 1
>>    output_format = 5       ! XSF format (XCrySDen)
>>    fileout = 'p1_pristine_sp.xsf'
>> /
>> !
>>
>> mpirun -np $np  pp.x  < Hartree_pot.in > p1_hartree_sp.out
>>
>> -------------------------------------------------------------------------------
>> However, I keep getting the following MPI-related error:
>>
>>
>>      Program POST-PROC v.6.4.1 starts on 10Oct2019 at 18:57: 2
>>
>>      This program is part of the open-source Quantum ESPRESSO suite
>>      for quantum simulation of materials; please cite
>>          "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
>>          "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
>>           URL http://www.quantum-espresso.org",
>>      in publications or presentations arising from this work. More
>> details at
>>      http://www.quantum-espresso.org/quote
>>
>>      Parallel version (MPI), running on    16 processors
>>
>>      MPI processes distributed on     1 nodes
>>      R & G space division:  proc/nbgrp/npool/nimage =      16
>>
>>      IMPORTANT: XC functional enforced from input :
>>      Exchange-correlation      = PBE ( 1  4  3  4 0 0)
>>      Any further DFT definition will be discarded
>>      Please, verify this is what you really want
>>
>>
>>      Parallelization info
>>      --------------------
>>      sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
>>      Min         560     297     81                39794    15501    2217
>>      Max         561     298     82                39797    15504    2222
>>      Sum        8969    4765   1305               636715   248049   35513
>>
>>      Generating pointlists ...
>>      new r_m :   0.0851 (alat units)  1.6486 (a.u.) for type    1
>>      new r_m :   0.0672 (alat units)  1.3020 (a.u.) for type    2
>>      new r_m :   0.0672 (alat units)  1.3020 (a.u.) for type    3
>>
>>      Calling punch_plot, plot_num =  11
>> --------------------------------------------------------------------------
>> A process has executed an operation involving a call to the
>> "fork()" system call to create a child process.  Open MPI is currently
>> operating in a condition that could result in memory corruption or
>> other system errors; your job may hang, crash, or produce silent
>> data corruption.  The use of fork() (or system() or other calls that
>> create child processes) is strongly discouraged.
>>
>> The process that invoked fork was:
>>
>>   Local host:          [[61547,1],0] (PID 37849)
>>
>> If you are *absolutely sure* that your application will successfully
>> and correctly survive a call to fork(), you may disable this warning
>> by setting the mpi_warn_on_fork MCA parameter to 0.
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun noticed that process rank 9 with PID 0 on node b144 exited on
>> signal 6 (Aborted).
>> --------------------------------------------------------------------------
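>> (The fork() message above is only a warning; as it says itself, it can be
>> silenced by setting the mpi_warn_on_fork MCA parameter to 0, e.g. something
>> like
>>
>> mpirun --mca mpi_warn_on_fork 0 -np $np pp.x < Hartree_pot.in > p1_hartree_sp.out
>>
>> but that would not address the signal 6 abort itself.)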
>>
>> I have tried recompiling pp.x several times and rerunning the calculation,
>> even on a single node, but no luck yet. I am not running it on a local
>> machine but on a cluster, so there should not be any memory issues. I am
>> using Open MPI version 2.1.1.
>>
>> Any help would be appreciated.
>>
>> Best,
>> Abhirup
>>
>> -------------------------------------------------------------------------------------------------------------------------------------
>> Abhirup Patra
>> Postdoctoral Research Fellow
>> Department of Chemistry
>> University of Pennsylvania
>> _______________________________________________
>> Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
>> users mailing list users at lists.quantum-espresso.org
>> https://lists.quantum-espresso.org/mailman/listinfo/users
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
> _______________________________________________
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
> users mailing list users at lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users