[QE-users] QE Parallelization for Phonon Dispersion calculations

Thomas Brumme thomas.brumme at tu-dresden.de
Tue Nov 24 12:40:26 CET 2020


Dear Kiran,


please have a look at the GRID_example in the PHonon examples, and in particular at run_example_1 there.


Regards


Thomas

--
Dr. rer. nat. Thomas Brumme
Theoretical chemistry
TU Dresden - BAR / II49
Helmholtzstr. 18
01069 Dresden

Tel:  +49 (0)351 463 40844

email: thomas.brumme at tu-dresden.de

________________________________
From: users <users-bounces at lists.quantum-espresso.org> on behalf of Kiran Yadav <kiranyadav0816 at gmail.com>
Sent: Tuesday, 24 November 2020 10:45
To: Quantum ESPRESSO users Forum
Subject: Re: [QE-users] QE Parallelization for Phonon Dispersion calculations

Dear Lorenzo,
Thanks a lot for the great help.
If possible, please suggest a reference on how to run each q-point simultaneously and independently, as you have suggested.

Thanks & Regards,
Kiran Yadav




On Tue, Nov 24, 2020 at 3:09 PM Lorenzo Paulatto <paulatz at gmail.com> wrote:

Images are only used inside each q-point, if I remember correctly, so they would not matter for this question.

But you can decrease the walltime much more by running each q-point simultaneously and independently, using the start_q and last_q options and a separate job for each. It is sufficient to use a different prefix or outdir. The data from pw.x can be copied, or the scf calculation repeated; it usually does not matter in terms of CPU time.
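As a minimal sketch of this approach (assuming the 6x6x6 grid with 20 q-points discussed below; the prefix 'mysystem', the scf outdir './tmp/', the directory names and the processor counts are placeholders, not values from this thread), one can prepare one independent ph.x run per q-point, each in its own directory with its own copy of the pw.x data, and submit each as a separate job:

# one independent ph.x run per q-point of the 6x6x6 grid (20 q-points);
# each directory gets its own copy of the scf data and its own input
for q in $(seq 1 20); do
  mkdir -p run_q$q/tmp
  cp -r tmp/mysystem.save run_q$q/tmp/   # reuse the pw.x data (or rerun the scf there)
  cat > run_q$q/ph.in << EOF
phonons on a 6x6x6 grid, q-point $q only
 &inputph
    prefix  = 'mysystem'
    outdir  = './tmp/'
    fildyn  = 'mysystem.dyn'
    ldisp   = .true.
    nq1 = 6, nq2 = 6, nq3 = 6
    start_q = $q
    last_q  = $q
 /
EOF
done
# then submit one batch job per directory, e.g.
#   cd run_q7 && mpirun -np 64 ph.x -nk 4 -input ph.in > ph.out
# and collect the resulting mysystem.dyn* files in one place for q2r.x

Each piece then takes only as long as its own q-point, so the expensive q-points no longer hold back the cheap ones.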

cheers

On 2020-11-24 10:27, Kiran Yadav wrote:
Dear Lorenzo,
Is there any parallelization method that lets me set up images so that the 20 dyn files are distributed unequally over an equal number of processors, or distributed equally over an unequal number of processors?

In my case, as far as I can observe, dyn1-dyn10 do not take much time; most of the time is spent generating dyn11-dyn20. So, if I could run dyn1-dyn10 on one image and distribute the remaining dyn11-dyn20 over 3 images, the CPU walltime could be decreased. Is it possible to do something like that?

I tried parallelization on 256 processors (#PBS -l select=16:ncpus=16) for 10 hours using the following command:
time -p mpirun -np $PBS_NTASKS ph.x -ni 4 -nk 4 -nt 4 -nd 16 -input ph.in > ph.out
In this case the 20 dyn files got distributed over 4 images, i.e. 5 dyn files per image, with 256/4 = 64 processors per image.
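The uneven distribution asked about here can be obtained in the same spirit without relying on the image parallelization: instead of 4 images, use 4 independent jobs with hand-chosen start_q/last_q ranges, e.g. q = 1-10 for the cheap q-points and q = 11-14, 15-17, 18-20 for the expensive ones (the ranges, prefix and file names are illustrative; each job runs in its own directory with its own copy of the pw.x data, as in the sketch further up). The input of the job handling q = 11-14 would look roughly like this:

phonons, q-points 11 to 14 of the 6x6x6 grid
 &inputph
    prefix  = 'mysystem'
    outdir  = './tmp/'
    fildyn  = 'mysystem.dyn'
    ldisp   = .true.
    nq1 = 6, nq2 = 6, nq3 = 6
    start_q = 11
    last_q  = 14
 /

The other three inputs differ only in start_q and last_q, and the jobs covering the expensive q-points can be given more processors or a longer walltime than the one covering q = 1-10.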

Thanks & Regards,
Kiran Yadav
Research Scholar
Electronic Materials Laboratory (TX-200G)
Dept. of Materials Science & Engineering
Indian Institute of Technology, Delhi




On Tue, Nov 24, 2020 at 1:19 PM Lorenzo Paulatto <paulatz at gmail.com> wrote:

> I have been trying to calculate Phonon dispersion with 6*6*6 nq grid.
> It generates dyn0+20 other dynamical matrices, but the time taken by
> each dynamical matrix file is different for completion. I ran these
> phonon dispersion calculations using parallelization, but couldn't
> optimize correctly.

This is normal: because different q-points have different symmetries, the code has to use different numbers of k-points.


regards


_______________________________________________
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list: users at lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users