<div dir="ltr">Hi, <div><br></div><div>This is my first attempt on such systems and I used similar pw.x input to these papers: </div><div><a href="https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.085112">https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.085112</a></div><div><a href="https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104">https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104</a><br><div><div><br></div><div>I used PAW pps and 60 Ry wfc, when I first started calculating bands. Then I increased wfc cutoff to 100 Ry but still, my calculations were very slow.<br></div><div><br></div><div>Kind regards,</div><div>Zahra</div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr" style="color:rgb(136,136,136)"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Dec 14, 2020 at 10:18 AM Tobias Klöffel <<a href="mailto:tobias.kloeffel@fau.de">tobias.kloeffel@fau.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div>Hello Zahra,</div>
<div><br>
</div>
<div>Why do you use PAW and a 100 Ry wfc
cutoff?</div>
<div><br>
</div>
<div>Kind regards,</div>
<div><br>
</div>
<div>On 12/14/20 11:13 AM, Zahra Khatibi
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hello,
<div><br>
</div>
<div>Sure. I've shared the input and output in the following
link:</div>
<div><a href="https://drive.google.com/drive/folders/1trdcWUw7GKSw0zLQouxygpaKwOl7_2KM?usp=sharing" target="_blank">https://drive.google.com/drive/folders/1trdcWUw7GKSw0zLQouxygpaKwOl7_2KM?usp=sharing</a></div>
<div><br>
</div>
<div>Kind regards,</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sat, Dec 12, 2020 at 5:01
PM Lorenzo Paulatto <<a href="mailto:paulatz@gmail.com" target="_blank">paulatz@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div> <br>
<blockquote type="cite">
<div dir="auto">
<div dir="auto">Aslo I have tried running the band
calculation on different systems (local pc with 12
nodes) and HPC (with 36 and 72 nodes). Every time I
have the same problem. I have tried QE 6.5 and 6.4 for
this calculation all with same issue.</div>
</div>
</blockquote>
<p><br>
</p>
<p>For comparison, I have here a calculation with 119
electrons, 10 k-points, and a 100 Ry kinetic energy cutoff. One
SCF iteration takes about 5 seconds on 32 CPUs (2 nodes of
a very old computing cluster that has since been retired).
Going from 120 to 190 electrons, the CPU time should only grow
by around a factor of 4 (the diagonalization cost scales roughly
with the cube of the number of electrons, and (190/120)<sup>3</sup> ≈ 4).
But it would be easier to pin down the source of the discrepancy
if you sent your input and output files to the list, so we can have a look.</p>
<p><br>
</p>
<p>cheers<br>
</p>
<p><br>
</p>
<blockquote type="cite">
<div dir="auto">
<div dir="auto"><br>
</div>
<div dir="auto">All the best,</div>
<div dir="auto">Zahra</div>
<br>
<br>
<br>
<br>
<div class="gmail_quote" dir="auto">
<div dir="ltr" class="gmail_attr">On Fri, Dec 11,
2020, 22:22 Lorenzo Paulatto <<a href="mailto:paulatz@gmail.com" target="_blank">paulatz@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Hello Zahra,</p>
<p>If I understand correctly, you manage to do the
scf calculation, but the subsequent band (nscf) calculation
is very slow. The cost per k-point of the nscf run
should be more or less the same as the cost per
k-point of one scf iteration; if it is not,
something is wrong. One possible problem
is that conv_thr is interpreted differently
during nscf: a tight value (1.d-12 or less) may
cause the diagonalization threshold in the nscf run
to become too small and very slow to converge.
This should be fixed in v6.7, but you can simply
increase conv_thr in the nscf/bands input if you're using a
previous version.</p>
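<p>For concreteness, a minimal sketch of what the relevant part of the bands/nscf input could look like, assuming the rest of the file (system namelist, atomic positions, k-point path) is unchanged from the scf run; the prefix, outdir and the 1.0d-6 value are only illustrative, not taken from the thread:</p>
<pre>
&amp;control
   calculation = 'bands'     ! or 'nscf'
   prefix      = 'mysystem'  ! hypothetical; must match the scf run
   outdir      = './tmp'     ! hypothetical
/
&amp;electrons
   conv_thr = 1.0d-6         ! looser than an scf-style 1.d-12; the nscf/bands
                             ! diagonalization threshold is derived from this value
/
</pre>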
<p>If not, it may be a problem with parallelism,
i.e. running on too many CPUs, or a simple
human error such as running all the processes
on the same computing node.<br>
</p>
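<p>For what it's worth, a quick way to check the parallel setup is to distribute the MPI tasks over k-point pools and then verify in the output header how pw.x actually split them; something along these lines, with purely illustrative task and pool counts:</p>
<pre>
# illustrative job-script lines; adapt the counts to your machine and scheduler
mpirun -np 72 pw.x -nk 8 -in bands.in > bands.out   # 8 k-point pools of 9 MPI tasks each
grep -i 'running on' bands.out                      # pw.x prints the processor count it sees
</pre>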
<p><br>
</p>
<p>cheers<br>
</p>
<div>On 2020-12-11 19:25, Zahra Khatibi wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Dear all,
<div><br>
</div>
<div>First of all, I hope everyone is safe and
well in these crazy times. </div>
<div>I'm calculating the electronic band
dispersion of a 2D heterostructure with a 59
atom unit cell. This system is a small
bandgap (10-20 meV) semiconductor. The
number of valence bands is (valence
electrons/2) 181. When I set 'nbnd' to 190,
the band structure calculation costs me 30
minutes for each k point on HPC with 72
processors. This means that if I do a simple
band calculation for a high symmetry path
with 100 points within, I have to wait
almost 50 hours! This even becomes worst
when I try to evaluate the band dispersion
with SOC switched on (twice the spin
degenerate band calculation). </div>
<div>Since the band dispersion evaluation is
the major part of our study, I was wondering
if there is a way around this problem, like
reducing the number of bands by only looking
at energy interval close to Fermi energy? </div>
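<div><br></div>
<div>For reference, the band count enters pw.x through nbnd in the &amp;system namelist; a minimal sketch with nat = 59 and nbnd = 190 as described above, and the remaining entries as placeholders only:</div>
<pre>
&amp;system
   ibrav       = 0          ! placeholder; the cell would come from CELL_PARAMETERS
   nat         = 59
   ntyp        = 3          ! placeholder; use the actual number of species
   ecutwfc     = 100.0
   nbnd        = 190        ! 181 occupied bands plus a handful of empty ones
   occupations = 'fixed'    ! placeholder; depends on how the small gap is treated
/
</pre>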
<div>I can see that there are lots of papers in the literature
with huge unit cells and heavy atoms that present numerous
band structures (using QE), so I would really appreciate any
help here. </div>
<div><br>
</div>
<div>Kind regards,</div>
<div>
<div dir="ltr">
<div dir="ltr"><span style="color:rgb(136,136,136)">--</span><br style="color:rgb(136,136,136)">
<div dir="ltr" style="color:rgb(136,136,136)">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div><font size="2">Z.
Khatibi</font></div>
<div><span style="font-family:arial">School
of Physics</span><br>
</div>
<div><span style="font-family:arial;border-collapse:separate">Trinity
College Dublin<br>
</span></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
<p><br>
</p>
<pre cols="72">--
M.Sc. Tobias Klöffel
=======================================================
HPC (High Performance Computing) group
Erlangen Regional Computing Center(RRZE)
Friedrich-Alexander-Universität Erlangen-Nürnberg
Martensstr. 1
91058 Erlangen
Room: 1.133
Phone: +49 (0) 9131 / 85 - 20101
=======================================================
E-mail: <a href="mailto:tobias.kloeffel@fau.de" target="_blank">tobias.kloeffel@fau.de</a></pre>
</div>
_______________________________________________<br>
Quantum ESPRESSO is supported by MaX (<a href="http://www.max-centre.eu" rel="noreferrer" target="_blank">www.max-centre.eu</a>)<br>
users mailing list <a href="mailto:users@lists.quantum-espresso.org" target="_blank">users@lists.quantum-espresso.org</a><br>
<a href="https://lists.quantum-espresso.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote></div>