[QE-users] Parallelization and Input/Output of wfc in QE
Davide Ceresoli
davide.ceresoli at cnr.it
Fri Nov 25 12:47:47 CET 2022
Dear Riccardo,
the wfc*.dat files generated by QE in the scratch directory do
contain the actual plane-wave coefficients, not just their square modulus.
The coefficients are already "collected" across MPI processes, so there is
no need to do any MPI handling yourself.
As Paolo Giannozzi pointed out, you can use pp_example.f90 or the routine
read_QE_wfc at:
https://github.com/Sassafrass6/PAOFLOW/blob/master/src/defs/do_atwfc_proj.py
Indeed, we were able to exactly replicate the behavior of projwfc.x in
pure Python for further processing.
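For reference, here is a minimal pure-Python sketch of such a reader (not the
PAOFLOW routine itself). It assumes the "collected" wfc*.dat layout written by
recent QE versions, i.e. Fortran unformatted sequential records framed by
4-byte markers (header, dimensions, reciprocal lattice vectors, Miller
indices, then one record of plane-wave coefficients per band); the function
name read_qe_wfc is only illustrative, so please check the layout against
your QE version:

import numpy as np

def read_qe_wfc(filename):
    """Read a collected wfc*.dat file; return (header dict, Miller indices, evc)."""
    with open(filename, 'rb') as f:

        def record():
            # Fortran sequential records are framed by 4-byte length markers
            nbytes = int(np.fromfile(f, dtype=np.int32, count=1)[0])
            data = f.read(nbytes)
            f.read(4)                     # skip trailing length marker
            return data

        rec = record()                    # ik, xk(3), ispin, gamma_only, scalef
        ik     = np.frombuffer(rec, dtype=np.int32,   count=1, offset=0)[0]
        xk     = np.frombuffer(rec, dtype=np.float64, count=3, offset=4)
        ispin  = np.frombuffer(rec, dtype=np.int32,   count=1, offset=28)[0]
        gamma  = np.frombuffer(rec, dtype=np.int32,   count=1, offset=32)[0]
        scalef = np.frombuffer(rec, dtype=np.float64, count=1, offset=36)[0]

        ngw, igwx, npol, nbnd = np.frombuffer(record(), dtype=np.int32, count=4)
        bvec   = np.frombuffer(record(), dtype=np.float64, count=9).reshape(3, 3)
        miller = np.frombuffer(record(), dtype=np.int32).reshape(igwx, 3)

        # one record per band: npol*igwx complex plane-wave coefficients
        evc = np.empty((nbnd, npol * igwx), dtype=np.complex128)
        for ib in range(nbnd):
            evc[ib] = np.frombuffer(record(), dtype=np.complex128,
                                    count=npol * igwx)

    header = dict(ik=ik, xk=xk, ispin=ispin, gamma_only=bool(gamma),
                  scalef=scalef, ngw=ngw, igwx=igwx, npol=npol,
                  nbnd=nbnd, bvec=bvec)
    return header, miller, evc

Looping over wfc1.dat, wfc2.dat, ... in the prefix.save directory then gives
you the coefficients for every k-point, without touching the MPI layer at all.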
Best,
Davide
On 11/24/22 16:33, Riccardo Piombo uniroma1 wrote:
> Dear Prof. Giannozzi,
>
> As far as I know, QE doesn't allow writing the wavefunctions to a .dat file,
> only their squared modulus. My modification of the local_dos.f90 code roughly
> fills this gap by saving the coefficients of the plane-wave expansion in
> k-space.
>
> Whenever I run pp.x on multiple nodes, the same file (say wfc_g.dat or
> g_vectors.dat) gets overwritten.
>
> For example, when I run a job on one node, my wfc_g.dat file has 500k rows.
> When I run the same job on two nodes, the file is halved (250k rows), and so
> on as the number of nodes increases.
>
> This problem is related to the parallelization of the job across multiple
> nodes, since every process writes the wavefunction to the same file.
>
> Therefore, since I'm not familiar with MPI, what should I implement to avoid
> this truncated output?
>
>
> Regards,
>
> Riccardo Piombo
>
> Post doc researcher in Condensed Matter Physics at Sapienza University of Rome
>
--
+--------------------------------------------------------------+
Davide Ceresoli
CNR - Istituto di Scienze e Tecnologie Chimiche (SCITEC)
c/o University of Milan, via Golgi 19, 20133 Milan, Italy
Email: davide.ceresoli at cnr.it
Phone: +39-02-50314276, +39-347-1001570 (mobile)
Skype: dceresoli
Website: http://sites.google.com/site/dceresoli/
+--------------------------------------------------------------+