[Pw_forum] Error "Not diagonalizing because representation xx is not done" in "image" parallelization by ph.x

Ye Luo xw111luoye at gmail.com
Wed May 4 07:41:30 CEST 2016


Hi Coiby,

"it seems to be one requirement to let ph.x and pw.x have the same number
of processors."
This is not true.

If you are using image parallelization in your phonon calculation, you need
to keep the same number of processes per image as in your pw.x calculation.
In this way, wf_collect=.true. is not needed.

Here is an example. I assume you use k-point parallelization (-nk).
1, mpirun -np 48 pw.x -nk 12 -inp your_pw.input
2, mpirun -np 192 ph.x -ni 4 -nk 12 -inp your_ph.input
In this step, you might notice "Not diagonalizing because representation
xx is not done", which is normal.
The code should not abort because of this.
3, After all the representations belonging to a given q or q-mesh have been
    computed, just add "recover = .true." in your_ph.input and run
    mpirun -np 48 ph.x -nk 12 -inp your_ph.input
    The dynamical matrix will then be computed for that q.
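The rank counts in steps 1-3 must stay consistent: the ph.x total must be
nimage times the pw.x rank count, and -nk must divide the per-image count.
Here is a minimal sanity-check sketch of that arithmetic (the variable names
are mine, not part of QE; the echoed commands match the example above):

```shell
#!/bin/bash
# Hypothetical helper: check that the ph.x launch is consistent with pw.x.
NP_PW=48      # MPI ranks used by pw.x (= ranks per ph.x image)
NIMAGE=4      # -ni passed to ph.x
NK=12         # -nk, identical in both runs
NP_PH=$(( NP_PW * NIMAGE ))   # total ranks for the image-parallel ph.x run

# -nk must divide the per-image rank count, or pool parallelization fails.
if (( NP_PW % NK != 0 )); then
  echo "error: -nk ($NK) must divide the pw.x rank count ($NP_PW)" >&2
  exit 1
fi

echo "step 1: mpirun -np $NP_PW pw.x -nk $NK -inp your_pw.input"
echo "step 2: mpirun -np $NP_PH ph.x -ni $NIMAGE -nk $NK -inp your_ph.input"
echo "step 3: mpirun -np $NP_PW ph.x -nk $NK -inp your_ph.input  # recover=.true."
```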

If you are comfortable running threaded pw.x, ph.x also benefits from
threaded MKL and FFT, and the time to solution is further reduced.
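A hybrid MPI+OpenMP launch could look like the sketch below. The counts are
only an illustration (48 ranks x 4 threads filling 192 cores); the exact
thread-pinning flags depend on your MPI launcher, and ph.x and MKL must be
built with OpenMP support for this to help:

```shell
#!/bin/bash
# Illustrative hybrid launch: fewer MPI ranks, 4 OpenMP threads per rank.
export OMP_NUM_THREADS=4
# 48 ranks * 4 threads = 192 cores, same footprint as the pure-MPI run.
echo "mpirun -np 48 ph.x -ni 4 -nk 12 -inp your_ph.input"
```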

For more details, you can look into PHonon/examples/Image_example.

P.S.
Your affiliation is missing.

===================
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory

2016-05-03 22:33 GMT-05:00 Coiby Xu <coiby.xu at gmail.com>:

> Dear Quantum Espresso Developers and Users,
>
>
> I'm running a phonon calculation parallelizing over the representations/q
> vectors. For my cluster, there are 24 cores per node. I want to use as many
> nodes as possible to speed up the calculation.
>
> I set the number of images to be the number of nodes,
>
>> mpirun -np NUMBER_OF_NODESx24  ph.x -nimage NUMBER_OF_NODES
>>
>
>
> If I only use 4 nodes (4 images) or 8 nodes (8 images), the calculation
> finishes successfully. However, when more than 8 nodes, say 16 or 32
> nodes, are used, every run of the calculation gives the following error:
>
>> Not diagonalizing because representation  xx is not done
>>
>
> Btw, I want to reduce I/O overhead by dropping the `wf_collect` option, but
> the following doesn't work (the number of processors and pools for the scf
> calculation is the same as in the phonon calculation):
>
> mpirun -np NUMBER_OF_NODESx24  pw.x
>>
>
> ph.x complains,
>
>> Error in routine phq_readin (1):pw.x run with a different number of
>> processors.
>> Use wf_collect=.true.
>>
>
> The beginning output of pw.x,
>
>>     Parallel version (MPI), running on    96 processors
>>      R & G space division:  proc/nbgrp/npool/nimage =      96
>>      Waiting for input...
>>      Reading input from standard input
>>
>
> and the beginning output of ph.x,
>
>>  Parallel version (MPI), running on    96 processors
>>      path-images division:  nimage    =       4
>>      R & G space division:  proc/nbgrp/npool/nimage =      24
>>
>
> Am I missing something? I know it's inefficient to let pw.x use so many
> processors, but it seems to be a requirement that ph.x and pw.x use the
> same number of processors.
>
> Thank you!
>
> --
> *Best regards,*
> *Coiby*
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>

