[QE-users] Problem with Parallelization of Hp.x

Jamie Holber holber at umich.edu
Fri Dec 2 17:58:39 CET 2022


Hi Iurii,

Thanks for the quick resolution! 

Regarding DFT+U+V, I was following this line from the paper: "for the sake of activating the Hubbard-related machinery (in the DFT+U+V case, there is no need to initialize V, instead initialize U for O(2p))." Does this refer to something else?

If I output each HP run to a different output directory, how would I recombine the results at the end?

Best,
Jamie 

> On Dec 2, 2022, at 11:44 AM, Iurii TIMROV <iurii.timrov at epfl.ch> wrote:
> 
> Dear Jamie,
> 
> First of all, please note that you are not performing a DFT+U+V calculation but a DFT+U calculation, because you specified this:
> HUBBARD {ortho-atomic}
> U Fe1-3d 1e-10
> U Fe2-3d 1e-10
> U Mn1-3d 1e-10
> U Mn2-3d 1e-10
> U O-2p 1e-10
> 
> The pw.x code will perform DFT+U+V only if it detects at least one V parameter in the input. So you should specify e.g. this:
> HUBBARD {ortho-atomic}
> U Fe1-3d 1e-10
> U Fe2-3d 1e-10
> U Mn1-3d 1e-10
> U Mn2-3d 1e-10
> V Fe1-3d O-2p 1 5 1e-10
> 
> I specified an arbitrary pair here: the 1st and the 5th atom. The HP code will consider all O atoms as Hubbard atoms and will perturb the non-equivalent O atoms.
> 
> Now, coming to your problem: your calculations crash because you are using the same temporary directory for all calculations, outdir='outdir/'. The independent HP calculations then write to and read from the same folder, which creates a mess. Instead, each independent calculation needs its own temporary directory, e.g. outdir1, outdir2, etc. So once you have performed the ground-state calculation using pw.x, just copy outdir to outdir1, outdir2, ... and run the independent HP calculations simultaneously (see the sketch after the list below):
> HP run 1 : perturb_only_atom(5) = .true. -> outdir1
> HP run 2 : perturb_only_atom(6) = .true. -> outdir2
> ...
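> 
> For example, a minimal sketch of this workflow (the hp.x input names hp_atom5.in and hp_atom6.in are taken from this thread; the mpirun line, the process count, and the pw.x input name are just placeholders to adapt to your setup):
> 
>   # ground-state calculation, written to the original temporary directory
>   mpirun -np 4 pw.x -in pw.scf.in > pw.scf.out
>   # one copy of the temporary directory per perturbed atom
>   cp -r outdir outdir1
>   cp -r outdir outdir2
>   # independent hp.x runs, each reading its own copy
>   # (set outdir='outdir1/' in hp_atom5.in and outdir='outdir2/' in hp_atom6.in)
>   mpirun -np 4 hp.x -in hp_atom5.in > hp_atom5.out &
>   mpirun -np 4 hp.x -in hp_atom6.in > hp_atom6.out &
>   wait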
> 
> HTH
> 
> Iurii
> 
> --
> Dr. Iurii TIMROV
> Senior Research Scientist
> Theory and Simulation of Materials (THEOS)
> Swiss Federal Institute of Technology Lausanne (EPFL)
> CH-1015 Lausanne, Switzerland
> +41 21 69 34 881
> http://people.epfl.ch/265334
> From: Jamie Holber <holber at umich.edu>
> Sent: Friday, December 2, 2022 5:04:00 PM
> To: Iurii TIMROV
> Cc: Quantum ESPRESSO users Forum
> Subject: Re: [QE-users] Problem with Parallelization of Hp.x
>  
> Just uploaded.
> 
> Thanks,
> Jamie 
> 
>> On Dec 2, 2022, at 10:56 AM, Iurii TIMROV <iurii.timrov at epfl.ch> wrote:
>> 
>> Thanks! Could you please also upload the input and output files of the pw.x calculations?
>> 
>> Iurii
>> 
>> --
>> Dr. Iurii TIMROV
>> Senior Research Scientist
>> Theory and Simulation of Materials (THEOS)
>> Swiss Federal Institute of Technology Lausanne (EPFL)
>> CH-1015 Lausanne, Switzerland
>> +41 21 69 34 881
>> http://people.epfl.ch/265334
>>   
>> From: Jamie Holber <holber at umich.edu>
>> Sent: Friday, December 2, 2022 4:45:13 PM
>> To: Iurii TIMROV; Quantum ESPRESSO users Forum
>> Subject: Re: [QE-users] Problem with Parallelization of Hp.x
>>  
>> Hi Iurii,
>> 
>> Thanks for the response. I don’t believe it is a problem with disc space, because the hp.x calculations run successfully when I run them in serial, or when I run them simultaneously with different outdir values.
>> 
>> I have uploaded the hp.x files to the Google Drive folder linked below. I attempted to run hp_atom5.in and hp_atom6.in at the same time.
>> https://drive.google.com/drive/folders/1gRVALU0B8CLsaKBZpXzvEFAHcb5dMXZ_?usp=share_link
>> 
>> Thank you,
>> Jamie 
>> 
>>> On Dec 2, 2022, at 6:04 AM, Iurii TIMROV via users <users at lists.quantum-espresso.org> wrote:
>>> 
>>> Dear Jamie,
>>> 
>>> The problem seems to occur when writing or reading files on a disc. Are you sure that you have enough free disc space?
>>> 
>>> Could you please provide more details about your calculations? Please share your input and output files for all pw.x and hp.x calculations that fail (e.g. using Google Drive).
>>> 
>>> Iurii
>>> 
>>> --
>>> Dr. Iurii TIMROV
>>> Senior Research Scientist
>>> Theory and Simulation of Materials (THEOS)
>>> Swiss Federal Institute of Technology Lausanne (EPFL)
>>> CH-1015 Lausanne, Switzerland
>>> +41 21 69 34 881
>>> http://people.epfl.ch/265334
>>> From: users <users-bounces at lists.quantum-espresso.org> on behalf of Jamie Holber <holber at umich.edu>
>>> Sent: Thursday, December 1, 2022 9:03:19 PM
>>> To: users at lists.quantum-espresso.org
>>> Subject: [QE-users] Problem with Parallelization of Hp.x
>>>  
>>> Hello everyone,
>>> 
>>> I am trying to replicate the U+V calculations for LiMnFePO4 described in Timrov, Iurii, Nicola Marzari, and Matteo Cococcioni, "HP--A code for the calculation of Hubbard parameters using density-functional perturbation theory," arXiv preprint arXiv:2203.15684 (2022), https://arxiv.org/pdf/2203.15684.pdf. The hp.x inputs run fine when I run them one at a time. However, when I try to parallelize over different perturbed atoms by running them simultaneously, as described in section 4.2 of that paper, I receive errors and at least one run fails. I have tried it on two different computing clusters and both failed, but with different error messages. I have included the input files and errors below. Does anyone know of a way to solve this issue?
>>> 
>>> Input file 1:
>>> &inputhp
>>>     prefix = 'olivine', outdir='outdir/',
>>>     nq1 = 1, nq2 = 2, nq3 = 3,
>>>     conv_thr_chi = 1.0d-7,
>>>     niter_max=250,
>>>     dist_thr = 5.D-3
>>>     perturb_only_atom(5) = .true.
>>> /
>>> 
>>> Input file 2:
>>> &inputhp
>>>     prefix = 'olivine', outdir='outdir/',
>>>     nq1 = 1, nq2 = 2, nq3 = 3,
>>>     conv_thr_chi = 1.0d-7,
>>>     niter_max=250,
>>>     dist_thr = 5.D-3
>>>     perturb_only_atom(6) = .true.
>>> /
>>> 
>>> Errors from computing cluster 1:
>>> 
>>> Error termination. Backtrace:
>>> At line 700 of file buffers.f90 (unit = 20, file = 'outdir/olivine.wfc1')
>>> Fortran runtime error: File cannot be deleted
>>> 
>>> Error termination. Backtrace:
>>> #0  0x479784 in __buffers_MOD_close_buffer
>>> at /home/jholber/LFP/QE_studies/copy/qe-7.1/PW/src/buffers.f90:700
>>> #1  0x405a27 in hp_close_q_
>>> at /home/jholber/LFP/QE_studies/copy/qe-7.1/HP/src/hp_close_q.f90:28
>>> #2  0x4057c1 in hp_main
>>> at /home/jholber/LFP/QE_studies/copy/qe-7.1/HP/src/hp_main.f90:143
>>> #3  0x404fac in main
>>> at /home/jholber/LFP/QE_studies/copy/qe-7.1/HP/src/hp_main.f90:14
>>> 
>>> Errors from computing cluster 2:
>>> 
>>>  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>>      Error in routine davcio (22):
>>>      error writing file "/home/holber/q-e/HP/examples/LFP/../../tempdir/HP/LFP.dwfc11"
>>>  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>> 
>>> 
>>> Thank you,
>>> Jamie Holber
>>> University of Michigan 