[Pw_forum] DOS and pDOS parallelization

Madhura Marathe madhura at jncasr.ac.in
Thu Aug 13 19:06:02 CEST 2009


 Thanks for the explanation. That was really helpful.

 -Madhura

>
> Dear Madhura,
>
>>
>>  Thanks for the explanation.
>>  I had always thought that the DOS code also requires the wavefunction
>> files, and that this is why one needs to give the same prefix and
>> outdir path as in the scf calculation. If the code does not need the
>> wavefunctions, which files does it use for the DOS calculation?
>
> the code mainly needs the eigenvalues, which are stored in the
> $prefix.save/K?????/eigenval.xml files, where each K????? directory
> (i.e. K00001, K00002, etc.) corresponds to one k-point.
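>
> for instance, assuming prefix = 'silicon' and outdir = './tmp' (names
> here are just an example), a look at the save directory would show
> something like:
>
>     $ ls ./tmp/silicon.save/K00001/
>     eigenval.xml
>
> and these small xml files are all that dos.x really needs to read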
>
>
>>  As you suggested, I have run the PDOS calculation with the same
>> number of pools and processors (in case B) and obtained results. That
>> means the PDOS code is parallelized over pools, but the DOS code is not.
>>
>
> yes, you are right:
> this means that pDOS is parallelized over G vectors and pools, while
> the DOS calculation is not parallelized but is nevertheless compatible
> with the parallel environment (i.e. it does not crash if run in
> parallel)
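>
> in practice (the command lines below are just an illustration, to be
> adapted to your machine):
>
>     # pDOS: G-vector and pool parallelism are actually exploited
>     mpirun -np 8 projwfc.x -npool 2 < prj.in > prj.out
>
>     # DOS: runs fine in parallel, but the work is done by one
>     # processor anyway
>     mpirun -np 8 dos.x < dos.in > dos.out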
>
> andrea
>
>>  Thanks once again,
>>  Madhura.
>>
>> >
>> > Dear Madhura,
>> >
>> > the issue of using the same parallelism scheme (# of processors, # of
>> > pools, etc.) in postprocessing calculations mainly concerns the need
>> > to read the wfcs. As you wrote, if you specify wf_collect = .TRUE.
>> > you completely remove this issue (you can read the wfcs whatever
>> > parallelization scheme was used to produce them).
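>> >
>> > for instance, a minimal fragment of the scf input (prefix and outdir
>> > are just examples; all other variables omitted):
>> >
>> >     &control
>> >        calculation = 'scf'
>> >        prefix      = 'silicon'
>> >        outdir      = './tmp'
>> >        wf_collect  = .true.   ! collect wfcs into prefix.save
>> >     /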
>> >
>> > From this point of view, the main difference between DOS and pDOS
>> > calculations is that the former does not need to read the wfcs, while
>> > the latter does.
>> >
>> > Moreover, the task the DOS program has to perform is so inexpensive
>> > that it is carried out by a single processor even if the code is run
>> > in parallel. This is consistent with your observation at (ii).
>> >
>> > In the case of pDOS (i.e. when running projwfc.x), if you do not
>> > collect the wfcs you should run with the same # of processors and the
>> > same pool parallelism. Otherwise, as you experienced, you get a
>> > davcio error (some of the wfc files were not found).
>> > As far as I can tell, anyway, projwfc should work with pools if the
>> > above conditions are fulfilled.
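>> >
>> > i.e., if the scf run was launched along these lines (an illustration,
>> > not your actual command lines):
>> >
>> >     mpirun -np 8 pw.x -npool 2 < scf.in > scf.out
>> >
>> > then projwfc.x should be launched with the same layout:
>> >
>> >     mpirun -np 8 projwfc.x -npool 2 < prj.in > prj.out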
>> >
>> > hope it helps
>> > andrea
>> >
>> >
>> >>
>> >>  There have been many discussions in this forum about the
>> >> parallelization of the DOS and projected DOS codes. However, some of
>> >> the points were not clear to me, so I have performed some scf
>> >> calculations for a very simple system using 8 processors and with
>> >> the flag wf_collect set to 'false'. I have performed the same
>> >> calculation once without pools (A) and then using 2 pools (B), as
>> >> sketched below. Then, using these wavefunctions, I have performed
>> >> DOS and PDOS calculations.
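>> >>
>> >> Schematically, the two scf runs were (command lines simplified,
>> >> file names are placeholders):
>> >>
>> >>     # case A: 8 processors, no pools
>> >>     mpirun -np 8 pw.x < scf.in > scf_A.out
>> >>
>> >>     # case B: 8 processors, 2 pools
>> >>     mpirun -np 8 pw.x -npool 2 < scf.in > scf_B.out
>> >>
>> >> The following is a summary of the results and my interpretation: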
>> >>
>> >> i) Pool parallelization is not implemented in either of these codes.
>> >>
>> >> ii) DOS calculations: for both cases A and B, one can calculate the
>> >> DOS with the same no. of processors (= 8), and the results match
>> >> within numerical errors, even though for case B the wavefunctions
>> >> were obtained with pools and the DOS was computed without them.
>> >>  For case A, even if I use 4 processors I get results identical to
>> >> those with 8 processors. (Note: I have not checked with fewer
>> >> processors for B.)
>> >> => The condition that we need the same no. of processors and pools
>> >> as were used in the scf calculation is not necessary, and it is
>> >> possible to get DOS results even with wavefunctions generated with
>> >> pool parallelization.
>> >>
>> >> iii) PDOS calculations: this can be calculated only in case A, and
>> >> only using the same no. of processors. If I use the wavefunctions
>> >> generated in case B, or fewer processors (= 4) with the case-A
>> >> wavefunctions, then I get a "davcio" error.
>> >> => For PDOS calculations, one cannot use wavefunctions generated
>> >> with pool parallelization unless the wf_collect flag is set to
>> >> 'true' in the scf calculation; also, one has to use the same no. of
>> >> processors as were used for the scf calculation to get the projected
>> >> DOS.
>> >>
>> >> Now my question is: are these interpretations correct, or might they
>> >> change for some other system? Do I need to do some more checks to
>> >> ascertain this? If yes, what sort of tests?
>> >>
>> >> Thanks for reading this long mail patiently, but I need to clarify
>> >> these points before I can start with bigger systems.
>> >> Sincerely,
>> >> Madhura.
>> >>
>> >>
>> >>
>> >
>>
>>
>>
>
> --
> Andrea Ferretti
> MIT, Dept Material Science & Engineering
> bldg 13-4078, 77, Massachusetts Ave, Cambridge, MA
> Tel: +1 617-452-2455;  Skype: andrea_ferretti
> URL: http://quasiamore.mit.edu
>
> Please, if possible, don't send me MS Word or PowerPoint attachments
> Why? See:  http://www.gnu.org/philosophy/no-word-attachments.html
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum
>


-- 
Madhura Marathe,
PhD student, TSU,
JNCASR, Bangalore.
India.
Phone No: +91-80-22082835


