[Pw_forum] PW.x homogeneous electric field Berry phase calculation in trigonal cell

Louis Fry-Bouriaux ellf at leeds.ac.uk
Wed Feb 8 18:46:25 CET 2017


Hi Lorenzo,


    In your last email you mentioned the field 'pdir', but I have not been able to find it anywhere in the documentation; is this what you meant? I am using the QE 6.0 source downloaded from the QE website. It looks like the k-string generation is hardcoded in one direction:


!  --- Find vector between consecutive points in strings ---
   dk(1)=xk(1,2)-xk(1,1)
   dk(2)=xk(2,2)-xk(2,1)
   dk(3)=xk(3,2)-xk(3,1)
   dkmod=SQRT(dk(1)**2+dk(2)**2+dk(3)**2)*tpiba
   IF (ABS(dkmod-gvec/(nppstr-1)) > eps) &
     CALL errore('c_phase','Wrong k-strings? ln 364',1)


as this is where the error is raised when I use anything other than gdir=3 (I added 'ln 364' to the error message to identify exactly which check fails): the test requires the spacing between consecutive k-points in a string, times tpiba, to equal gvec/(nppstr-1).
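
For reference, the relevant fragments of my input look like this (everything else omitted; the values are the ones from my previous email, with gdir varied between runs):

   &CONTROL
      ...
      lelfield = .true.
      gdir     = 1          ! only gdir=3 gets past the check above
      nppstr   = 6          ! k-points per string
   /
   &ELECTRONS
      ...
      efield_cart(1) = 0.d0
      efield_cart(2) = 0.d0
      efield_cart(3) = 0.d0
   /
   K_POINTS automatic
     6 2 2 0 0 0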


Thanks for the assistance!

Kindest regards,

Louis

________________________________
From: pw_forum-bounces at pwscf.org <pw_forum-bounces at pwscf.org> on behalf of Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr>
Sent: 08 February 2017 08:55:32
To: PWSCF Forum
Subject: Re: [Pw_forum] PW.x homogeneous electric field Berry phase calculation in trigonal cell

On Tuesday, February 7, 2017 6:47:18 PM CET Louis Fry-Bouriaux wrote:
>     Interesting, I will try that. I will add that I tried the calculation
> with lelfield=.true. and all efield_cart values set to zero, and it gives
> me the same error. I also reduced the number of automatic k-points to
> speed up testing (auto: 6 2 2 0 0 0 / nppstr=6, which takes ~160 s). I may
> take a look at the code; maybe there is something that can be done if I
> identify what is going on :/

Hello,
I managed to find the original emails from 2012 about this issue; it used to
be much worse, but Paolo Giannozzi wrote a quick fix that makes the problem
considerably lighter.

Still, the case gdir=3 (or pdir=3, if you are doing a polarization
calculation) is much faster than the other cases, and there is no easy
solution. The reason is that the code needs to build planes of G-vectors
that are orthogonal to the direction of the k-point string.
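
As a rough sketch of what building such a plane involves (illustrative only, not the literal QE source: mill is the Miller-index array from the gvect module, while fixed_index, n_in_plane and map are names I made up):

!  --- Collect the local G-vectors lying in the plane orthogonal to ---
!  --- the string direction gdir, i.e. those whose Miller index     ---
!  --- along gdir has a given fixed value                           ---
   n_in_plane = 0
   DO ig = 1, ngm                              ! G-vectors on this rank
      IF ( mill(gdir, ig) == fixed_index ) THEN
         n_in_plane = n_in_plane + 1
         map(n_in_plane) = ig                  ! local index of a plane member
      END IF
   END DO

Since the G-vectors are distributed over the processors, the members of any such plane are in general scattered across all of them, and collecting them requires the communication described next.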

The bottleneck is a single call,

   CALL mp_sum( aux_g(:), intra_bgrp_comm )

which collects a wavefunction over all the CPUs.
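
The surrounding pattern is essentially a zero-pad-and-reduce gather, roughly like this (a simplified sketch, not the literal source; ngm_g, ngm, ig_l2g and evc are the usual QE names for the global and local G-vector counts, the local-to-global index map and the wavefunction coefficients, and nb is a band index):

!  --- Gather one band's plane-wave coefficients onto every rank:   ---
!  --- each rank writes its local coefficients into a zero-filled   ---
!  --- buffer of global size, then the all-reduce sums the buffers, ---
!  --- so afterwards every rank holds the complete wavefunction     ---
   ALLOCATE( aux_g(ngm_g) )
   aux_g(:) = ( 0.d0, 0.d0 )
   DO ig = 1, ngm
      aux_g( ig_l2g(ig) ) = evc(ig, nb)
   END DO
   CALL mp_sum( aux_g(:), intra_bgrp_comm )

The reduction moves a buffer of the full global size through every process in the band group, so its cost grows with the number of CPUs rather than shrinking.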

This call becomes slower and slower as more CPUs are involved. It used to be
unbearably slow; now it is merely slow.

There is no obvious solution, as the parallelisation is hard-coded along the z
direction. Try to stick to gdir=3 if you can.

hth

--
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4é16 Boîte courrier 115,
4 place Jussieu 75252 Paris Cédex 05
