[QE-users] 'Cholesky failed in aceupdate.' when using npools
Lars Blumenthal
lars.blumenthal11 at imperial.ac.uk
Wed May 30 15:49:55 CEST 2018
This is for future reference: with Paolo's help, I found out that I had
to recompile QE.
At first I was running the PWSCF v.6.1 binary precompiled on ARCHER, and
with that binary the parallelisation over k-points did not work when
using hybrid functionals.
It does, however, work with the PWSCF version (also v.6.1) that I
compiled myself.
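
For the record, k-point parallelisation is requested on the pw.x command
line, e.g. (the process and pool counts below are only placeholders, not
the actual settings of the ARCHER runs):

    aprun -n 48 pw.x -npool 4 -input scf.in > scf.out

or, with a generic MPI launcher,

    mpirun -np 48 pw.x -npool 4 -input scf.in > scf.out

The number of pools must divide the number of MPI processes; each pool
then works on its own subset of the k-points.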
Many thanks to Paolo and best wishes,
Lars
PhD Student
EPSRC Centre for Doctoral Training on Theory and Simulation of Materials
Imperial College London
On 30/05/18 12:36, Paolo Giannozzi wrote:
> I made a quick test on a reduced version of your job and found no
> problems, but the original job requires a larger machine and I have no
> time to work on it now.
>
> Paolo
>
> On Wed, May 30, 2018 at 11:58 AM, Lars Blumenthal
> <lars.blumenthal11 at imperial.ac.uk> wrote:
>
> Does anyone have any advice/feedback?
>
> Many thanks,
>
> Lars
> PhD Student
> EPSRC Centre for Doctoral Training on Theory and Simulation of
> Materials
> Imperial College London
>
>
> On 25/05/18 17:03, Lars Blumenthal wrote:
>> Hi everyone,
>>
>> I am trying to do scf calculations using the HSE functional with
>> PWSCF v.6.1 (svn rev. 13369).
>>
>> When I don't use parallelisation over k-points, i.e. when I don't
>> specify npools, the calculation runs successfully.
>> However, as soon as I try to make use of npools, the calculation
>> crashes with:
>>
>> DPOTRF exited with INFO= 7
>> Error in routine DPOTRF (1):
>> Cholesky failed in aceupdate.
>>
>> I have attached the corresponding output file. Previously, I had
>> the same issue with another compound, but in that case npools
>> = 2 actually did work and the calculation only crashed with the
>> above error when npools > 2. So it is not that parallelisation
>> over pools fails in every case.
>>
>> Disabling the ACE algorithm makes the calculation painfully slow,
>> so I would like to avoid that. Do you have any advice on how to
>> optimise the parallelisation of hybrid DFT calculations in general?
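
A minimal &SYSTEM block for such a hybrid-functional run is sketched
below for context. All values are illustrative placeholders (they are
not taken from the attached output file); the exact-exchange q-point
grid must be commensurate with the k-point mesh:

    &SYSTEM
      ibrav = 2, celldm(1) = 10.26      ! placeholder lattice
      nat = 2, ntyp = 1                 ! placeholder composition
      ecutwfc = 40.0
      input_dft = 'HSE'                 ! select the HSE hybrid functional
      nqx1 = 2, nqx2 = 2, nqx3 = 2      ! q-point grid for the exact-exchange part
    /

The "aceupdate" named in the error above is the step that builds the
adaptively compressed exchange (ACE) operator, so the crash occurs while
the exchange operator is being constructed.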
>>
>> Many thanks and best wishes,
>>
>> Lars Blumenthal
>> PhD Student
>> EPSRC Centre for Doctoral Training on Theory and Simulation of
>> Materials
>> Imperial College London
>>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222