[Pw_forum] GIPAW Problem (Cholesky)

Vic Bermudez c.h.bermudez at cox.net
Wed Nov 11 23:22:08 CET 2015


Hello,

I'm trying to use GIPAW for the first time and am encountering a problem that
isn't discussed on the users' forum. I'm using GIPAW v.5.0.2 (svn rev.
9392), and I get this error message soon after execution starts:

%%%%%%%%%%%%%%%%
     Error in routine  cdiaghg (2785):
      problems computing cholesky
%%%%%%%%%%%%%%%%
     stopping ...

Here's my GIPAW input file:
************************
&INPUTGIPAW
job='nmr',
tmp_dir='/lustre/cmf/scratch/b/bermudez/XX_166928/',
iverbosity=1,
restart_mode='from_scratch',
max_seconds=259000.0,
use_nmr_macroscopic_shape=.false.
/
***********************
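
For completeness, here is roughly how I launch the job; this is only a sketch,
and the file names and MPI launcher below are placeholders for our cluster's
actual setup:
************************
#!/bin/bash
# Sketch of the job sequence (file names are placeholders).
# The SCF step (pw.x) writes its data files to the tmp_dir given above;
# the GIPAW step then reads them from that same directory.
mpirun -np 64 pw.x    < scf.in   > scf.out
mpirun -np 64 gipaw.x < gipaw.in > gipaw.out
***********************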

And here's the GIPAW output right up to the point where the error message occurs:
***********************
Parallel version (MPI), running on    64 processors
     R & G space division:  proc/nbgrp/npool/nimage =      64

   Info: using nr1, nr2, nr3 values from input

   Info: using nr1s, nr2s, nr3s values from input

     IMPORTANT: XC functional enforced from input :
     Exchange-correlation      =  SLA  PW   PBX  PBC ( 1 4 3 4 0)
     EXX-fraction              =        0.00
     Any further DFT definition will be discarded
     Please, verify this is what you really want

               file Zr.pbe-spn-kjpaw_psl.0.2.3.UPF: wavefunction(s)  4D renormalized
               file O.pbe-n-kjpaw_psl.0.1.UPF: wavefunction(s)  2P renormalized
               file H.pbe-kjpaw_psl.0.1.UPF: wavefunction(s)  1S renormalized

     Parallelization info
     --------------------
     sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
     Min         382     153     38               138228    34949    4368
     Max         383     154     39               138247    35006    4379
     Sum       24491    9807   2451              8847271  2238233  279837


     Check: negative/imaginary core charge=   -0.000001    0.000000

     negative rho (up, down):  0.128E-01 0.000E+00

     Subspace diagonalization in iterative solution of the eigenvalue problem:
     scalapack distributed-memory algorithm (size of sub-group:  5*  5 procs)

     init_paw_1: ntyp= 1  rc=    1.6000  rs=    1.0667
     init_paw_1: ntyp= 1  rc=    1.6000  rs=    1.0667
     init_paw_1: ntyp= 1  rc=    1.7000  rs=    1.1333
     init_paw_1: ntyp= 1  rc=    1.7000  rs=    1.1333
     init_paw_1: ntyp= 1  rc=    1.9000  rs=    1.2667
     init_paw_1: ntyp= 1  rc=    1.9000  rs=    1.2667

     init_gipaw_1: projectors nearly linearly dependent:
     ntyp =  1, l/n1/n2 =  2 2 1  0.99876687
     init_paw_1: ntyp= 2  rc=    1.3500  rs=    0.9000
     init_paw_1: ntyp= 2  rc=    1.3500  rs=    0.9000
     init_paw_1: ntyp= 2  rc=    1.3500  rs=    0.9000
     init_paw_1: ntyp= 2  rc=    1.3500  rs=    0.9000
     init_gipaw_1: projectors nearly linearly dependent:
     ntyp =  2, l/n1/n2 =  0 2 1  0.99100135
     init_gipaw_1: projectors nearly linearly dependent:
     ntyp =  2, l/n1/n2 =  1 2 1  0.99790100
     init_paw_1: ntyp= 3  rc=    1.0000  rs=    0.6667
     init_paw_1: ntyp= 3  rc=    1.0000  rs=    0.6667
     init_gipaw_1: projectors nearly linearly dependent:
     ntyp =  3, l/n1/n2 =  0 2 1  0.99968852

     Message from routine gipaw_setup:
     ***** implemented only for insulators *****

     GIPAW job: nmr
     NMR macroscopic correction: no

     GIPAW        :  0m54.34s CPU     1m 1.81s WALL

     Computing the magnetic susceptibility     isolve=0    ethr=0.1000E-13
     Starting from scratch
     k-point #    1 of     1      pool #  1
***********************

I notice that the output contains warnings like "projectors nearly linearly
dependent". I'm not sure what this means, but it can't be good. I should
also note that this calculation is for a rather large system
(1280 electrons). I would greatly appreciate any guidance. Thank you in
advance.

Best Wishes,
Vic Bermudez

Victor M. Bermudez
E-mail: bermudez at alum.mit.edu




