[Pw_forum] projwfc.x crashes for large cases

Karim Elgammal egkarim at gmail.com
Fri Oct 27 00:40:00 CEST 2017


To my knowledge, internal parallelization is not yet implemented in projwfc.x.

And yes, I do believe sufficient memory is quite important here.
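For a rough sense of scale, here is a back-of-envelope sketch of how the projection array grows with the number of k-points. It assumes the projections are stored as one complex double array of shape natomwfc x nbnd x nkstot (roughly how projwave allocates them); the counts used below are purely illustrative, not taken from the actual system in this thread:

```python
# Back-of-envelope estimate for the projwfc.x projection array,
# assumed shape proj(natomwfc, nbnd, nkstot), complex double = 16 bytes/element.
# All numeric inputs below are hypothetical examples.

def proj_array_gib(natomwfc, nbnd, nkstot, bytes_per_elem=16):
    """Size in GiB of a natomwfc x nbnd x nkstot complex-double array."""
    return natomwfc * nbnd * nkstot * bytes_per_elem / 2**30

# Hypothetical noncollinear case: compare 3 k-points vs 100 k-points.
small = proj_array_gib(natomwfc=200, nbnd=400, nkstot=3)
large = proj_array_gib(natomwfc=200, nbnd=400, nkstot=100)
print(f"3 k-points:   {small:.4f} GiB")
print(f"100 k-points: {large:.4f} GiB")
```

The point is only that this array scales linearly with nkstot, so a run that fits at 3 k-points can still fail at 100; the wavefunction buffers read per k-point add further memory on top of this.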

Karim Elgammal
KTH

On Thu, Oct 26, 2017 at 2:14 PM, Martin Gmitra <martin.gmitra at gmail.com>
wrote:

> Dear QE users,
>
> I would like to calculate partial charges for band structure using
> projwfc.x. The sequence for a converged charge density is: pw.x (bands) and
> projwfc.x.
>
> I am facing a crash of projwfc.x in a large case with 100 k-points in
> the k-path. projwfc.x ends with just the output:
>
> Calling projwave_nc ....
>
> and mpi error: ERROR: 0031-250  task 563: Killed
>
> Is there a way to get more output about what causes the crash? (Could
> it be a memory issue?)
>
> I am using the same parallel input options with pools (18, as there were
> 18 k-points in the SCF calculation), ntg, and ndiag, as follows:
>
> mpiexec -np $NCPU pw.x -npool 18 -ntg 4 -ndiag 32 -input pw-bands.in
> mpiexec -np $NCPU projwfc.x -npool 18 -ntg 4 -ndiag 32 -input projwfc.in
>
> Importantly, as a test, the same sequence, pw.x (bands) and projwfc.x,
> works fine for just 3 k-points. There I used different parallel options:
> -npool 3 -ndiag 32
>
> A few more details are below. Many thanks in advance for any hint,
> Martin Gmitra
> Uni Regensburg, Germany
>
>
> For the calculations I am using relativistic USPP with tefield and
> dipfield = .true.; the input for projwfc.x is:
>
> &projwfc
>     prefix = 'ppr',
>     outdir = '/scratch/' ,
>     lsym = .FALSE.,
>     filproj = 'projwfc.dat'
>  /
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>




