[QE-users] Re: Re: error on running turbo_lanczos.x with MPI

508682179 at qq.com
Mon Jan 17 12:39:39 CET 2022


Dear Iurii,


Yes, it runs successfully after setting d0psi_rs = .false. Thanks for your help.




Kind regards,
Weijie Zhou

---------------------------
Weijie Zhou
PhD student
University of Leeds

------------------ Original Message ------------------
From: "Iurii TIMROV" <iurii.timrov at epfl.ch>
Sent: Monday, January 17, 2022, 6:13 PM
To: "飘" <508682179 at qq.com>; "Quantum ESPRESSO users Forum" <users at lists.quantum-espresso.org>
Subject: Re: Re: [QE-users] error on running turbo_lanczos.x with MPI

Dear Weijie,

If you examine the output file of the turbo_lanczos.x calculation, you will see the following message:

     Calculation of the dipole in real space
          Real space dipole + USPP is not supported

So you should either use norm-conserving pseudopotentials with the option d0psi_rs = .true., or use ultrasoft pseudopotentials with the option d0psi_rs = .false.
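
For reference, a minimal sketch of a turbo_lanczos.x input with this setting is given below. The prefix and outdir values are taken from your turbo_lanczos.x output; everything else is a placeholder, and to my understanding d0psi_rs sits in the lr_control namelist (please check the INPUT_Lanczos documentation shipped with QE):

    &lr_input
       prefix  = 'Mo_h_p'                        ! must match the pw.x prefix
       outdir  = '../../tmp_Mo_h_p_scf_lda_USPP'
       restart = .false.
    /
    &lr_control
       itermax  = 500      ! number of Lanczos iterations (placeholder)
       ipol     = 4        ! 4 = compute all three polarization directions
       d0psi_rs = .false.  ! keep .false. with ultrasoft pseudopotentials
    /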
 

 
 
HTH

Iurii

P.S.: Next time, please also share the input and output files of the pw.x program.

--
Dr. Iurii TIMROV
Senior Research Scientist
Theory and Simulation of Materials (THEOS)
Swiss Federal Institute of Technology Lausanne (EPFL)
CH-1015 Lausanne, Switzerland
+41 21 69 34 881
http://people.epfl.ch/265334

From: 飘 <508682179 at qq.com>
Sent: Monday, January 17, 2022 6:35:54 AM
To: Iurii TIMROV; Quantum ESPRESSO users Forum
Subject: Re: [QE-users] error on running turbo_lanczos.x with MPI
 
Dear Iurii,

As you suggested, I tried QE 7.0, but the error is still there. You can access the relevant input and output files at

https://drive.google.com/drive/folders/1BObhh63QFBB-oYX1su9aFFGWWTqfYhyn?usp=sharing

I hope this helps. Thanks.
 
 
Kind regards,
Weijie Zhou

---------------------------
Weijie Zhou
PhD student
University of Leeds

------------------ Original Message ------------------
From: "Iurii TIMROV" <iurii.timrov at epfl.ch>
Sent: Thursday, January 13, 2022, 8:53 PM
To: "飘" <508682179 at qq.com>; "Quantum ESPRESSO users Forum" <users at lists.quantum-espresso.org>
Subject: Re: [QE-users] error on running turbo_lanczos.x with MPI
 
 
 
  
Dear Weijie Zhou,

Can you try QE 7.0? If you still have the same problem, share your input and output files (e.g., via Google Drive).

HTH

Iurii

--
Dr. Iurii TIMROV
Senior Research Scientist
Theory and Simulation of Materials (THEOS)
Swiss Federal Institute of Technology Lausanne (EPFL)
CH-1015 Lausanne, Switzerland
+41 21 69 34 881
http://people.epfl.ch/265334

From: users <users-bounces at lists.quantum-espresso.org> on behalf of 飘 via users <users at lists.quantum-espresso.org>
Sent: Thursday, January 13, 2022 12:51:40 PM
To: users
Subject: [QE-users] error on running turbo_lanczos.x with MPI
 
 Dear QE users, 
 
I am using QE 6.5 to run turbo_lanczos.x with MPI. The calculation finishes fine with norm-conserving or optimized norm-conserving Vanderbilt pseudopotentials, but when an ultrasoft pseudopotential is used the following error occurs (a sketch of the launch command follows the error output):
 
 
       Program turboTDDFT v.6.5 starts on 12Jan2022 at 13:48:29 
 
 
      This program is part of the open-source Quantum ESPRESSO suite
      for quantum simulation of materials; please cite
          "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
          "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
           URL http://www.quantum-espresso.org", 
      in publications or presentations arising from this work. More details at
      http://www.quantum-espresso.org/quote
 
 
      Parallel version (MPI), running on    16 processors
 
 
      MPI processes distributed on    16 nodes
      R & G space division:  proc/nbgrp/npool/nimage =      16
 
 
      Reading xml data from directory:
 
 
      ../../tmp_Mo_h_p_scf_lda_USPP/Mo_h_p.save/
 
 
      IMPORTANT: XC functional enforced from input :
      Exchange-correlation= PZ
                            (   1   1   0   0   0   0   0)
      Any further DFT definition will be discarded
      Please, verify this is what you really want
 
 
  
      Parallelization info
      --------------------
      sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
      Min        2610    2610    651               401385   401385   50172
      Max        2612    2612    654               401396   401396   50178
      Sum       41777   41777  10437              6422239  6422239  802807
  
 
 
      Check: negative core charge=   -0.000003
 
 
      negative rho (up, down):  9.303E-01 0.000E+00
      Reading collected, re-writing distributed wavefunctions
  Symmetries are disabled for the gamma_only case
 
 
      Subspace diagonalization in iterative solution of the eigenvalue problem:
      a serial algorithm will be used
 
 
 
 
      =-----------------------------------------------------------------=
 
 
      Please cite the TDDFPT project as:
        O. B. Malcioglu, R. Gebauer, D. Rocca, and S. Baroni,
        Comput. Phys. Commun. 182, 1744 (2011)
      and
        X. Ge, S. J. Binnie, D. Rocca, R. Gebauer, and S. Baroni,
        Comput. Phys. Commun. 185, 2080 (2014)
      in publications and presentations arising from this work.
 
 
      =-----------------------------------------------------------------=
 
 
      Ultrasoft (Vanderbilt) Pseudopotentials
 
 
      Normal read
 
 
      Gamma point algorithm
 
 
      Calculation of the dipole in real space
           Real space dipole + USPP is not supported
 
 
 --------------------------------------------------------------------------
 mpirun has exited due to process rank 0 with PID 0 on
 node dc1s2b3c exiting improperly. There are three reasons this could occur:
 
 
 1. this process did not call "init" before exiting, but others in
 the job did. This can cause a job to hang indefinitely while it waits
 for all processes to call "init". By rule, if one process calls "init",
 then ALL processes must call "init" prior to termination.
 
 
 2. this process called "init", but exited without calling "finalize".
 By rule, all processes that call "init" MUST call "finalize" prior to
 exiting or it will be considered an "abnormal termination"
 
 
 3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
 orte_create_session_dirs is set to false. In this case, the run-time cannot
 detect that the abort call was an abnormal termination. Hence, the only
 error message you will receive is this one.
 
 
 This may have caused other processes in the application to be
 terminated by signals sent by mpirun (as reported here).
 
 
 You can avoid this message by specifying -quiet on the mpirun command line.
 --------------------------------------------------------------------------
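
For reference, the job was launched with a command along the lines of the sketch below (the input and output file names here are placeholders, not my exact script):

    # hypothetical launch command; adapt file names and process count
    mpirun -np 16 turbo_lanczos.x -inp Mo_h_p.tddfpt-in > Mo_h_p.tddfpt-out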
 
 
 
If you have any clue about what causes this error, please help me. Thank you.
 
 
 
 
Best wishes,
Weijie Zhou

---------------------------
Weijie Zhou
PhD student
University of Leeds

