<div dir="ltr">Dear Lorenzo,<div>Thanks a lot for the great help.</div><div>If possible, could you please point me to a reference on how to run each q-point simultaneously, as you have suggested?</div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><font face="verdana, sans-serif">Thanks & Regards,<br></font><font face="verdana, sans-serif">Kiran Yadav</font></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 24, 2020 at 3:09 PM Lorenzo Paulatto <<a href="mailto:paulatz@gmail.com">paulatz@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Images are only used within each q-point, if I remember
correctly, so they do not help with your problem.<br>
</p>
<p>However, you can reduce the walltime much further by running each q-point
simultaneously and independently, using the start_q and last_q options and
one job per q-point. It is sufficient to use a different prefix or
outdir for each job. The data from pw.x can be copied or recomputed; it does not
usually matter much in terms of CPU time.</p>
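As a sketch of what this could look like in practice (the template contents, the prefix 'mysystem', and the directory layout below are assumptions to adapt to your own setup; start_q, last_q, outdir, prefix, ldisp and nq1-3 are the actual ph.x input variables):

```shell
#!/bin/bash
# Sketch: split the 20 q-points into independent ph.x jobs, one q-point
# each, with a separate outdir per job so the runs do not interfere.

# Minimal ph.x input template with placeholders (normally written once
# by hand, based on your working ph.in; contents here are illustrative):
test -f ph.in.template || cat > ph.in.template <<'EOF'
phonons on a 6x6x6 grid, one q-point per job
&inputph
  prefix = 'mysystem'
  OUTDIR
  ldisp = .true.
  nq1 = 6, nq2 = 6, nq3 = 6
  START_Q
  LAST_Q
/
EOF

for q in $(seq 1 20); do
  mkdir -p run_q$q
  # Fill in start_q/last_q and a per-job outdir from the template:
  sed -e "s/START_Q/start_q = $q/" \
      -e "s/LAST_Q/last_q = $q/" \
      -e "s|OUTDIR|outdir = './run_q$q/tmp'|" \
      ph.in.template > run_q$q/ph.in
  # Then submit one job per directory, e.g. (queue command depends
  # on your cluster):
  # (cd run_q$q && qsub ../job_ph.pbs)
done
```

Each job then writes its own dynamical matrix file, and the files can be collected afterwards for the q2r/matdyn post-processing.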
<p>cheers<br>
</p>
On 2020-11-24 10:27, Kiran Yadav wrote:<br>
<blockquote type="cite">
<div dir="ltr">
<div>Dear Lorenzo,</div>
<div>Is there any parallelization method by which I can create
images such that the 20 dyn files are distributed unequally
over an equal number of processors, or distributed equally
over an unequal number of processors?</div>
<div><br>
</div>
<div>In my case, as far as I can observe, dyn1-10 do not
take much time; most of the time is spent generating
dyn11-dyn20. So if I could run dyn1-10 on one
image and distribute the remaining dyn11-20 over 3 images,
the CPU walltime could be reduced. Is it possible to do something
like that? </div>
<div><br>
</div>
<div>I tried parallelization on 256 processors (#PBS -l
select=16:ncpus=16) for 10 hrs using the
following command:</div>
<div>time -p mpirun -np $PBS_NTASKS ph.x -ni 4 -nk 4 -nt 4 -nd
16 -input ph.in > ph.out<br>
</div>
<div>In this case the 20 dyn files got distributed over 4 images, i.e. 5
dyn files per image, with 256/4 = 64 processors per image.</div>
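To make the processor split implied by those flags explicit (the numbers below simply restate the values from the command line above):

```shell
#!/bin/bash
# Breakdown of: mpirun -np 256 ph.x -ni 4 -nk 4 -nt 4 -nd 16
NPROCS=256   # total MPI processes (-np)
NIMAGE=4     # -ni: images, each handling a subset of q-points/irreps
NPOOL=4      # -nk: k-point pools inside each image

PER_IMAGE=$((NPROCS / NIMAGE))   # processors working on each image
PER_POOL=$((PER_IMAGE / NPOOL))  # processors per k-point pool

echo "per image: $PER_IMAGE, per pool: $PER_POOL"
# prints: per image: 64, per pool: 16
```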
<div><br>
</div>
<div>
<div dir="ltr">
<div dir="ltr"><font face="verdana, sans-serif">Thanks & Regards,<br></font></div>
<div dir="ltr"><font face="verdana, sans-serif">Kiran Yadav</font></div>
<div dir="ltr"><font face="verdana, sans-serif">Research Scholar</font></div>
<div dir="ltr"><font face="verdana, sans-serif">Electronic Materials Laboratory (TX-200G)</font></div>
<div dir="ltr"><font face="verdana, sans-serif">Dept. of Materials Science & Engineering</font></div>
<div dir="ltr"><font face="verdana, sans-serif">Indian Institute of Technology, Delhi</font></div>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue, Nov 24, 2020 at 1:19
PM Lorenzo Paulatto <<a href="mailto:paulatz@gmail.com" target="_blank">paulatz@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
> I have been trying to calculate Phonon dispersion with
6*6*6 nq grid. <br>
> It generates dyn0+20 other dynamical matrices, but
the time taken by <br>
> each dynamical matrix file is different for completion. I
ran these <br>
> phonon dispersion calculations using parallelization, but
couldn't <br>
> optimize correctly.<br>
<br>
This is normal: different q-points have different symmetries, <br>
so the code has to use a different number of k-points for each.<br>
<br>
<br>
regards<br>
<br>
<br>
_______________________________________________<br>
Quantum ESPRESSO is supported by MaX (<a href="http://www.max-centre.eu" rel="noreferrer" target="_blank">www.max-centre.eu</a>)<br>
users mailing list <a href="mailto:users@lists.quantum-espresso.org" target="_blank">users@lists.quantum-espresso.org</a><br>
<a href="https://lists.quantum-espresso.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote>
</div>
<br>
</blockquote>
</div>
</blockquote></div>