[QE-users] time consuming band structure calculation for a supercell

Zahra Khatibi za.khatibi at gmail.com
Mon Dec 14 11:32:41 CET 2020


Hi,

This is my first attempt at such systems, and I used pw.x inputs similar
to those in these papers:
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.085112
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104

I used PAW pseudopotentials and a 60 Ry wavefunction cutoff when I first
started calculating bands. I then increased the wavefunction cutoff to 100
Ry, but my calculations were still very slow.
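
For reference, a minimal sketch of the corresponding &SYSTEM cutoffs; the
ecutrho value is my assumption (the common ~8x rule of thumb for PAW
datasets), not a number from this thread, and should be converged explicitly:

  &SYSTEM
    ecutwfc = 60.0    ! wavefunction cutoff (Ry)
    ecutrho = 480.0   ! charge-density cutoff (Ry), ~8x ecutwfc for PAW
  /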

Kind regards,
Zahra



On Mon, Dec 14, 2020 at 10:18 AM Tobias Klöffel <tobias.kloeffel at fau.de>
wrote:

> Hello Zahra,
>
> why do you use PAW and a 100 Ry wfc cutoff?
>
> Kind regards,
>
> On 12/14/20 11:13 AM, Zahra Khatibi wrote:
>
> Hello,
>
> Sure. I've shared the input and output in the following link:
>
> https://drive.google.com/drive/folders/1trdcWUw7GKSw0zLQouxygpaKwOl7_2KM?usp=sharing
>
> Kind regards,
>
> On Sat, Dec 12, 2020 at 5:01 PM Lorenzo Paulatto <paulatz at gmail.com>
> wrote:
>
>>
>> Also, I have tried running the band calculation on different systems (a
>> local PC with 12 cores) and on HPC (with 36 and 72 cores). Every time I
>> have the same problem. I have tried QE 6.5 and 6.4 for this calculation,
>> all with the same issue.
>>
>>
>> For comparison, I have here a calculation with 119 electrons, 10
>> k-points, and a 100 Ry kinetic energy cutoff. One SCF iteration takes
>> about 5 seconds on 32 CPUs (2 nodes of a very old computing cluster that
>> has since been retired). Going from 120 to 190 electrons should cost
>> around a factor of 4 in CPU time. But it would be easier to identify the
>> source of the discrepancy if you sent your input and output files to the
>> list, so we can have a look.
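>>
>> To see where that factor comes from: iterative diagonalization scales
>> roughly with the cube of the number of electrons, so
>>
>>   (190/120)^3 ≈ 3.97 ≈ 4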
>>
>>
>> cheers
>>
>>
>>
>> All the best,
>> Zahra
>>
>>
>>
>>
>> On Fri, Dec 11, 2020, 22:22 Lorenzo Paulatto <paulatz at gmail.com> wrote:
>>
>>> Hello Zahra,
>>>
>>> if I understand correctly, you manage to do the scf calculation, but
>>> then the band calculation is very slow. The cost per k-point of nscf
>>> should be more or less the same as the cost per k-point of one scf
>>> iteration; if it is not, something is wrong. One possible problem is that
>>> conv_thr is interpreted differently during nscf: a tight value (1.d-12 or
>>> less) may cause the diagonalization threshold in nscf to become too small
>>> and very slow to converge. This should be fixed in v6.7, but you can just
>>> increase conv_thr in the nscf input if you're using a previous version.
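>>>
>>> As a minimal sketch, the relevant namelist in the nscf/bands input would
>>> look like this (the value is only illustrative):
>>>
>>>   &ELECTRONS
>>>     ! in an nscf/bands run pw.x derives the diagonalization threshold
>>>     ! (ethr) from conv_thr, so a very tight scf value carried over here
>>>     ! makes every k-point very slow to converge; loosen it for nscf
>>>     conv_thr = 1.0d-8
>>>   /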
>>>
>>> If not, it may be a problem with parallelism, i.e. running on too many
>>> CPUs, or a simple human error like running all the processes on the same
>>> computing node.
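>>>
>>> If parallelism is the issue, k-point pools usually scale much better for
>>> band calculations than plane-wave distribution alone; a sketch (the
>>> process and pool counts are just an example):
>>>
>>>   # split 72 MPI processes into 8 pools of 9 each, so the k-points
>>>   # are distributed across pools
>>>   mpirun -np 72 pw.x -nk 8 -in bands.in > bands.out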
>>>
>>>
>>> cheers
>>> On 2020-12-11 19:25, Zahra Khatibi wrote:
>>>
>>> Dear all,
>>>
>>> First of all, I hope everyone is safe and well in these crazy times.
>>> I'm calculating the electronic band dispersion of a 2D heterostructure
>>> with a 59-atom unit cell. This system is a small-bandgap (10-20 meV)
>>> semiconductor. The number of valence bands (valence electrons / 2) is 181.
>>> When I set 'nbnd' to 190, the band structure calculation costs me 30
>>> minutes per k-point on an HPC cluster with 72 processors. This means that
>>> if I do a simple band calculation along a high-symmetry path with 100
>>> points, I have to wait almost 50 hours! This becomes even worse when I
>>> try to evaluate the band dispersion with SOC switched on (twice the cost
>>> of the spin-degenerate calculation).
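>>>
>>> For concreteness, a sketch of the kind of bands input meant above (the
>>> prefix, path endpoints and point counts are illustrative, not taken from
>>> the shared files):
>>>
>>>   &CONTROL
>>>     calculation = 'bands'
>>>     prefix      = 'hetero'   ! hypothetical; must match the scf run
>>>     outdir      = './tmp'
>>>   /
>>>   &SYSTEM
>>>     ! ibrav, nat, ntyp, ecutwfc, etc. exactly as in the scf input
>>>     nbnd = 190               ! 181 valence bands + 9 conduction bands
>>>   /
>>>   &ELECTRONS
>>>   /
>>>   ! ATOMIC_SPECIES and ATOMIC_POSITIONS as in the scf input
>>>   K_POINTS crystal_b
>>>   2
>>>     0.000 0.000 0.000  100   ! 100 points from here to the next vertex
>>>     0.500 0.000 0.000  0
>>>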
>>> Since the band dispersion evaluation is the major part of our study, I
>>> was wondering if there is a way around this problem, such as reducing the
>>> number of bands by only looking at an energy interval close to the Fermi
>>> energy.
>>> I can see that there are lots of papers in the literature with huge unit
>>> cells and heavy atoms that present numerous band structures (using QE),
>>> so I would really appreciate any help here.
>>>
>>> Kind regards,
>>> --
>>> Z. Khatibi
>>> School of Physics
>>> Trinity College Dublin
>>>
>
> --
> M.Sc. Tobias Klöffel
> =======================================================
> HPC (High Performance Computing) group
> Erlangen Regional Computing Center (RRZE)
> Friedrich-Alexander-Universität Erlangen-Nürnberg
> Martensstr. 1
> 91058 Erlangen
>
> Room: 1.133
> Phone: +49 (0) 9131 / 85 - 20101
>
> =======================================================
>
> E-mail: tobias.kloeffel at fau.de
>
> _______________________________________________
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users at lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users