I

2010/3/3 Lorenzo Paulatto <paulatto@sissa.it>:
> On Wed, 03 Mar 2010 14:28:47 +0100, Wei Zhou <zdw2000@gmail.com> wrote:
>> if I add the cell_factor = 3.0 to the &CELL, then the error became
>>
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>      task #         2
>>      from electrons : error #         1
>>      charge is wrong
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> This is strange: I was able to run your input more or less without any
> problem (the BFGS algorithm had some trouble toward the end of the
> minimization, but that is unrelated to the charge error). To be safer, you
> may try to disable charge density extrapolation (check the manual, I don't
> remember the keyword). I think you have encountered some kind of
> library-specific bug, or similar. Can you please provide the full output,
> as well as the make.sys file, so that I can have a look? Finally, are you
> using pools or OpenMP parallelization?
>
> regards
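For reference, the extrapolation keyword mentioned above is presumably
pot_extrapolation in the &IONS namelist (with wfc_extrapolation as its
wave-function counterpart); I am not certain these are the exact names in
v.4.1.2, so check the INPUT_PW documentation. A minimal sketch of the
relevant fragment:

   &IONS
      ! assumed keywords: turn off charge-density and wave-function
      ! extrapolation between BFGS steps, as suggested above
      pot_extrapolation = 'none'
      wfc_extrapolation = 'none'
   /

As for the last question: pool parallelization is requested on the command
line, e.g. "mpirun -np 4 pw.x -npool 2 -in ba.in" (ba.in is a stand-in for
the actual input file name); OpenMP depends on how the code was built.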
--OUTPUT
WARNING: Unable to read mpd.hosts or list of hosts isn't provided. MPI job will be run on the current machine only.
<div class="h5"> Program PWSCF v.4.1.2 starts ...<br> Today is 3Mar2010 at 22:19:18 </div>
<div class="h5"> Parallel version (MPI)</div>
<div class="h5"> Number of processors in use: 4<br> R & G space division: proc/pool = 4</div>
<div class="h5"> For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or PAW</div>
<div class="h5"> Current dimensions of program pwscf are:<br> Max number of different atomic species (ntypx) = 10<br> Max number of k-points (npk) = 40000<br> Max angular momentum in pseudopotentials (lmaxx) = 3<br>
Waiting for input...</div>
<div class="h5"> Subspace diagonalization in iterative solution of the eigenvalue problem:<br> a parallel distributed memory algorithm will be used,<br> eigenstates matrixes will be distributed block like on<br>
ortho sub-group = 2* 2 procs</div>
<div class="h5"><br> Planes per process (thick) : nr3 = 50 npp = 13 ncplane = 1024<br> Planes per process (smooth): nr3s= 30 npps= 8 ncplanes= 324<br> <br> Proc/ planes cols G planes cols G columns G<br>
Pool (dense grid) (smooth grid) (wavefct grid)<br> 1 13 162 5256 8 53 1009 19 195<br> 2 13 163 5261 8 52 990 18 192<br> 3 12 162 5254 7 53 1003 18 192<br>
4 12 162 5254 7 53 993 18 194<br> tot 50 649 21025 30 211 3995 73 773<br> </div>
<div class="h5"><br> bravais-lattice index = 4<br> lattice parameter (a_0) = 5.6000 a.u.<br> unit-cell volume = 238.9940 (a.u.)^3<br> number of atoms/cell = 2<br>
number of atomic types = 1<br> number of electrons = 20.00<br> number of Kohn-Sham states= 30<br> kinetic-energy cutoff = 25.0000 Ry<br> charge density cutoff = 300.0000 Ry<br>
convergence threshold = 1.0E-08<br> mixing beta = 0.7000<br> number of iterations used = 8 plain mixing<br> Exchange-correlation = SLA PW PBE PBE (1434)<br>
nstep = 100</div>
<div class="h5"><br> celldm(1)= 5.600000 celldm(2)= 0.000000 celldm(3)= 1.571420<br> celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000</div>
<div class="h5"> crystal axes: (cart. coord. in units of a_0)<br> a(1) = ( 1.000000 0.000000 0.000000 ) <br> a(2) = ( -0.500000 0.866025 0.000000 ) <br> a(3) = ( 0.000000 0.000000 1.571420 ) </div>
<div class="h5"> reciprocal axes: (cart. coord. in units 2 pi/a_0)<br> b(1) = ( 1.000000 0.577350 0.000000 ) <br> b(2) = ( 0.000000 1.154701 0.000000 ) <br> b(3) = ( 0.000000 0.000000 0.636367 ) </div>
<div class="h5"><br> PseudoPot. # 1 for Ba read from file Ba.pbe-nsp-van.UPF<br> Pseudo is Ultrasoft + core correction, Zval = 10.0<br> Generated by new atomic code, or converted to UPF format<br> Using radial grid of 907 points, 6 beta functions with: <br>
l(1) = 0<br> l(2) = 0<br> l(3) = 1<br> l(4) = 1<br> l(5) = 2<br> l(6) = 2<br> Q(r) pseudized with 6 coefficients, rinner = 1.200 1.200 1.200<br>
1.200 1.200</div>
<div class="h5"> atomic species valence mass pseudopotential<br> Ba 10.00 137.32700 Ba( 1.00)</div>
<div class="h5"> 24 Sym.Ops. (with inversion)</div>
<div class="h5"><br> Cartesian axes</div>
<div class="h5"> site n. atom positions (a_0 units)<br> 1 Ba tau( 1) = ( 0.0000000 0.5773503 0.3928550 )<br> 2 Ba tau( 2) = ( 0.5000000 0.2886751 1.1785650 )</div>
<div class="h5"> number of k points= 120 gaussian broad. (Ry)= 0.0120 ngauss = 1<br> cart. coord. in units 2pi/a_0<br> k( 1) = ( 0.0000000 0.0000000 0.0000000), wk = 0.0012755<br>
k( 2) = ( 0.0000000 0.0000000 0.0795459), wk = 0.0025510<br> k( 3) = ( 0.0000000 0.0000000 0.1590918), wk = 0.0025510<br> ................................<br> k( 116) = ( 0.2857143 0.5773503 0.0000000), wk = 0.0076531<br>
k( 117) = ( 0.2857143 0.5773503 0.0795459), wk = 0.0153061<br> k( 118) = ( 0.2857143 0.5773503 0.1590918), wk = 0.0153061<br> k( 119) = ( 0.2857143 0.5773503 0.2386377), wk = 0.0153061<br>
k( 120) = ( 0.2857143 0.5773503 -0.3181836), wk = 0.0076531</div>
<div class="h5"> G cutoff = 238.3074 ( 21025 G-vectors) FFT grid: ( 32, 32, 50)<br> G cutoff = 79.4358 ( 3995 G-vectors) smooth grid: ( 18, 18, 30)</div>
<div class="h5"> Largest allocated arrays est. size (Mb) dimensions<br> Kohn-Sham Wavefunctions 0.06 Mb ( 135, 30)<br> NL pseudopotentials 0.07 Mb ( 135, 36)<br>
Each V/rho on FFT grid 0.20 Mb ( 13312)<br> Each G-vector array 0.04 Mb ( 5256)<br> G-vector shells 0.04 Mb ( 5256)<br> Largest temporary arrays est. size (Mb) dimensions<br>
Auxiliary wavefunctions 0.25 Mb ( 135, 120)<br> Each subspace H/S matrix 0.22 Mb ( 120, 120)<br> Each <psi_i|beta_j> matrix 0.02 Mb ( 36, 30)<br> Arrays for rho mixing 1.62 Mb ( 13312, 8)</div>
<div class="h5"> Initial potential from superposition of free atoms</div>
<div class="h5"> starting charge 19.97000, renormalised to 20.00000<br> Starting wfc are 26 atomic + 4 random wfc</div>
<div class="h5"> total cpu time spent up to now is 1.85 secs</div>
<div class="h5"> per-process dynamical memory: 5.3 Mb</div>
<div class="h5"> Self-consistent Calculation</div>
<div class="h5"> iteration # 1 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 1.00E-02, avg # of iterations = 5.2</div>
<div class="h5"> Threshold (ethr) on eigenvalues was too large:<br> Diagonalizing with lowered threshold</div>
<div class="h5"> Davidson diagonalization with overlap<br> ethr = 7.21E-04, avg # of iterations = 1.1</div>
<div class="h5"> total cpu time spent up to now is 16.78 secs</div>
<div class="h5"> total energy = -180.05547275 Ry<br> Harris-Foulkes estimate = -180.15409713 Ry<br> estimated scf accuracy < 0.14546748 Ry</div>
<div class="h5"> iteration # 2 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 7.27E-04, avg # of iterations = 2.6</div>
<div class="h5"> total cpu time spent up to now is 22.28 secs</div>
<div class="h5"> total energy = -180.08201004 Ry<br> Harris-Foulkes estimate = -180.08310411 Ry<br> estimated scf accuracy < 0.00157673 Ry</div>
<div class="h5"> iteration # 3 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 7.88E-06, avg # of iterations = 6.2</div>
<div class="h5"> total cpu time spent up to now is 34.75 secs</div>
<div class="h5"> total energy = -180.08234243 Ry<br> Harris-Foulkes estimate = -180.08239032 Ry<br> estimated scf accuracy < 0.00008663 Ry</div>
<div class="h5"> iteration # 4 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 4.33E-07, avg # of iterations = 2.2</div>
<div class="h5"> total cpu time spent up to now is 39.36 secs</div>
<div class="h5"> total energy = -180.08234940 Ry<br> Harris-Foulkes estimate = -180.08234952 Ry<br> estimated scf accuracy < 0.00000068 Ry</div>
<div class="h5"> iteration # 5 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 3.38E-09, avg # of iterations = 2.1</div>
<div class="h5"> total cpu time spent up to now is 44.28 secs</div>
<div class="h5"> total energy = -180.08234965 Ry<br> Harris-Foulkes estimate = -180.08234968 Ry<br> estimated scf accuracy < 0.00000010 Ry</div>
<div class="h5"> iteration # 6 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 5.00E-10, avg # of iterations = 2.0</div>
<div class="h5"> total cpu time spent up to now is 49.06 secs</div>
<div class="h5"> End of self-consistent calculation</div>
<div class="h5"> k = 0.0000 0.0000 0.0000 ( 525 PWs) bands (ev):</div>
<div class="h5"> -7.5077 -4.2553 4.8253 12.5286 12.5286 14.2044 14.2044 14.8220<br> 21.7352 23.9498 25.7465 25.7465 26.0251 27.1761 27.5311 27.5311<br> 27.5552 27.5552 27.9377 28.2291 28.2291 28.3829 31.2562 36.6993<br>
36.6993 39.3932 39.3933 41.4102 41.4102 41.6156</div>
<div class="h5"> .............................</div>
<div class="h5"> -4.0241 -3.7851 6.4145 6.9382 7.1591 9.1534 11.1024 11.6687<br> 20.8019 24.5531 24.8598 26.0962 27.5308 27.8333 28.2100 28.6546<br> 30.4432 30.5194 33.3446 34.2433 34.3662 35.4987 36.0440 37.0388<br>
39.6977 39.8126 40.1051 40.2970 41.0627 41.7419</div>
<div class="h5"> k = 0.2857 0.5774 0.2386 ( 505 PWs) bands (ev):</div>
<div class="h5"> -3.9413 -3.8102 6.1676 6.3855 7.4222 8.5510 12.1481 12.4976<br> 22.0663 23.3220 23.5590 25.4455 26.5093 27.4481 28.4395 28.5762<br> 29.6446 29.7275 34.8198 35.9627 36.0383 36.8640 37.3254 37.5860<br>
39.2276 39.6133 39.8864 39.8885 41.1669 41.2481</div>
<div class="h5"> k = 0.2857 0.5774-0.3182 ( 502 PWs) bands (ev):</div>
<div class="h5"> -3.8637 -3.8637 6.1211 6.1211 7.9395 7.9395 12.7061 12.7061<br> 22.8290 22.8290 23.7925 23.7925 26.8634 26.8634 28.8438 28.8438<br> 29.1672 29.1672 36.0364 36.0364 37.1475 37.1475 37.9938 37.9938<br>
38.8055 38.8055 39.9043 39.9043 41.2094 41.2094</div>
<div class="h5"> the Fermi energy is 23.2963 ev</div>
<div class="h5">! total energy = -180.08234966 Ry<br> Harris-Foulkes estimate = -180.08234967 Ry<br> estimated scf accuracy < 9.8E-09 Ry</div>
<div class="h5"> The total energy is the sum of the following terms:</div>
<div class="h5"> one-electron contribution = 31.54953500 Ry<br> hartree contribution = 3.68728415 Ry<br> xc contribution = -98.07090733 Ry<br> ewald contribution = -117.24829605 Ry<br>
smearing contrib. (-TS) = 0.00003456 Ry</div>
<div class="h5"> convergence has been achieved in 6 iterations</div>
<div class="h5"> Forces acting on atoms (Ry/au):</div>
<div class="h5"> atom 1 type 1 force = 0.00000000 0.00000000 0.00000000<br> atom 2 type 1 force = 0.00000000 0.00000000 0.00000000</div>
<div class="h5"> Total force = 0.000000 Total SCF correction = 0.000000</div>
<div class="h5"><br> entering subroutine stress ...</div>
<div class="h5"> total stress (Ry/bohr**3) (kbar) P= 356.42<br> 0.00262856 0.00000000 0.00000000 386.67 0.00 0.00<br> 0.00000000 0.00262856 0.00000000 0.00 386.67 0.00<br>
0.00000000 0.00000000 0.00201147 0.00 0.00 295.90</div>
<div class="h5"><br> BFGS Geometry Optimization</div>
<div class="h5"> number of scf cycles = 1<br> number of bfgs steps = 0</div>
<div class="h5"> enthalpy new = -179.2700254431 Ry</div>
<div class="h5"> new trust radius = 0.2000000000 bohr<br> new conv_thr = 0.0000000100 Ry</div>
<div class="h5"> new unit-cell volume = 214.73402 a.u.^3 ( 31.82030 Ang^3 )</div>
<div class="h5">CELL_PARAMETERS (alat)<br> 0.972380845 0.000000000 0.000000000<br> -0.486190423 0.842106514 0.000000000<br> 0.000000000 0.000000000 1.493252878</div>
<div class="h5">ATOMIC_POSITIONS (crystal)<br>Ba 0.333333333 0.666666667 0.250000000<br>Ba 0.666666667 0.333333333 0.750000000</div>
<div class="h5"> </div>
<div class="h5"> Writing output data file ba.save<br> NEW-OLD atomic charge density approx. for the potential<br> NEW k-points:<br> k( 1) = ( 0.0000000 0.0000000 0.0000000), wk = 0.0012755<br>
k( 2) = ( 0.0000000 0.0000000 0.0837099), wk = 0.0025510<br> k( 3) = ( 0.0000000 0.0000000 0.1674197), wk = 0.0025510<br> k( 4) = ( 0.0000000 0.0000000 0.2511296), wk = 0.0025510<br>
.............................<br> k( 118) = ( 0.2938296 0.5937491 0.1674197), wk = 0.0153061<br> k( 119) = ( 0.2938296 0.5937491 0.2511296), wk = 0.0153061<br> k( 120) = ( 0.2938296 0.5937491 -0.3348395), wk = 0.0076531<br>
extrapolated charge 17.74385, renormalised to 20.00000</div>
<div class="h5"> total cpu time spent up to now is 51.69 secs</div>
<div class="h5"> per-process dynamical memory: 4.5 Mb</div>
<div class="h5"> Self-consistent Calculation</div>
<div class="h5"> iteration # 1 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 1.00E-06, avg # of iterations = 8.5</div>
<div class="h5"> total cpu time spent up to now is 67.75 secs</div>
<div class="h5"> total energy = -179.97041147 Ry<br> Harris-Foulkes estimate = -177.83374248 Ry<br> estimated scf accuracy < 0.03695251 Ry</div>
<div class="h5"> iteration # 2 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 1.85E-04, avg # of iterations = 4.8</div>
<div class="h5"> total cpu time spent up to now is 77.97 secs</div>
<div class="h5"> total energy = -180.02045962 Ry<br> Harris-Foulkes estimate = -180.03224157 Ry<br> estimated scf accuracy < 0.02356776 Ry</div>
<div class="h5"> iteration # 3 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 1.18E-04, avg # of iterations = 2.2</div>
<div class="h5"> total cpu time spent up to now is 82.47 secs</div>
<div class="h5"> total energy = -180.01971209 Ry<br> Harris-Foulkes estimate = -180.02210285 Ry<br> estimated scf accuracy < 0.00370170 Ry</div>
<div class="h5"> iteration # 4 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 1.85E-05, avg # of iterations = 4.1</div>
<div class="h5"> total cpu time spent up to now is 89.38 secs</div>
<div class="h5"> total energy = -180.02038207 Ry<br> Harris-Foulkes estimate = -180.02040479 Ry<br> estimated scf accuracy < 0.00004705 Ry</div>
<div class="h5"> iteration # 5 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 2.35E-07, avg # of iterations = 2.6</div>
<div class="h5"> total cpu time spent up to now is 94.79 secs</div>
<div class="h5"> total energy = -180.02038810 Ry<br> Harris-Foulkes estimate = -180.02039556 Ry<br> estimated scf accuracy < 0.00000846 Ry</div>
<div class="h5"> iteration # 6 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 4.23E-08, avg # of iterations = 2.0</div>
<div class="h5"> total cpu time spent up to now is 99.60 secs</div>
<div class="h5"> total energy = -180.02039034 Ry<br> Harris-Foulkes estimate = -180.02039133 Ry<br> estimated scf accuracy < 0.00000130 Ry</div>
<div class="h5"> iteration # 7 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 6.52E-09, avg # of iterations = 2.0</div>
<div class="h5"> total cpu time spent up to now is 104.41 secs</div>
<div class="h5"> total energy = -180.02039065 Ry<br> Harris-Foulkes estimate = -180.02039066 Ry<br> estimated scf accuracy < 0.00000002 Ry</div>
<div class="h5"> iteration # 8 ecut= 25.00 Ry beta=0.70<br> Davidson diagonalization with overlap<br> ethr = 8.60E-11, avg # of iterations = 1.7</div>
<div class="h5"> total cpu time spent up to now is 108.73 secs</div>
<div class="h5"> End of self-consistent calculation</div>
<div class="h5"> k = 0.0000 0.0000 0.0000 ( 525 PWs) bands (ev):</div>
<div class="h5"> -5.9352 -1.5838 6.3491 15.1342 15.1342 17.3993 17.3993 18.5458<br> 23.8348 25.7969 27.8380 28.1470 28.1470 29.7371 29.7371 30.2050<br> 30.2050 30.2872 30.2872 30.5813 31.5261 31.6805 34.0063 40.1211<br>
40.1211 43.4906 43.4906 44.4380 45.4547 45.5237</div>
<div class="h5"> ...........................<br> k = 0.2938 0.5937 0.1674 ( 505 PWs) bands (ev):</div>
<div class="h5"> -1.6133 -1.2795 8.1641 8.8019 8.9581 11.4601 13.7618 14.4344<br> 22.9370 27.5835 27.5919 28.8628 29.9563 30.2835 31.1011 31.1550<br> 33.1452 33.1820 36.6391 37.9313 38.2991 38.6433 39.1904 40.9201<br>
43.0673 43.0685 43.7739 43.9050 44.3872 44.5220</div>
<div class="h5"> k = 0.2938 0.5937 0.2511 ( 505 PWs) bands (ev):</div>
<div class="h5"> -1.4826 -1.2970 7.8249 8.0963 9.2964 10.6999 15.1033 15.5347<br> 24.4117 26.0624 26.3436 28.1432 28.8718 29.9518 30.9253 30.9892<br> 32.2222 32.2329 38.2110 39.9442 39.9559 40.1704 40.6617 41.2719<br>
42.9339 43.1192 43.1294 43.2778 44.4365 44.4925</div>
<div class="h5"> k = 0.2938 0.5937-0.3348 ( 502 PWs) bands (ev):</div>
<div class="h5"> -1.3659 -1.3659 7.7590 7.7590 9.9393 9.9393 15.8344 15.8344<br> 25.4841 25.4841 26.3126 26.3126 29.2729 29.2729 31.4118 31.4118<br> 31.5406 31.5406 39.6484 39.6484 41.0258 41.0258 41.4271 41.4271<br>
42.4194 42.4194 43.1321 43.1321 44.4583 44.4583</div>
<div class="h5"> the Fermi energy is 25.6489 ev</div>
<div class="h5">! total energy = -180.02039065 Ry<br> Harris-Foulkes estimate = -180.02039065 Ry<br> estimated scf accuracy < 1.0E-09 Ry</div>
<div class="h5"> The total energy is the sum of the following terms:</div>
<div class="h5"> one-electron contribution = 36.88903660 Ry<br> hartree contribution = 2.91419118 Ry<br> xc contribution = -98.35334426 Ry<br> ewald contribution = -121.47020511 Ry<br>
smearing contrib. (-TS) = -0.00006906 Ry</div>
<div class="h5"> convergence has been achieved in 8 iterations</div>
<div class="h5"> Forces acting on atoms (Ry/au):</div>
<div class="h5"> atom 1 type 1 force = 0.00000000 0.00000000 0.00000000<br> atom 2 type 1 force = 0.00000000 0.00000000 0.00000000</div>
<div class="h5"> Total force = 0.000000 Total SCF correction = 0.000000</div>
<div class="h5"><br> entering subroutine stress ...</div>
<div class="h5"> total stress (Ry/bohr**3) (kbar) P= 421.03<br> 0.00318745 0.00000000 0.00000000 468.89 0.00 0.00<br> 0.00000000 0.00318745 0.00000000 0.00 468.89 0.00<br>
0.00000000 0.00000000 0.00221141 0.00 0.00 325.31</div>
<div class="h5"><br> number of scf cycles = 2<br> number of bfgs steps = 1</div>
<div class="h5"> enthalpy old = -179.2700254431 Ry<br> enthalpy new = -179.2905244161 Ry</div>
<div class="h5"> CASE: enthalpy_new < enthalpy_old</div>
<div class="h5"> new trust radius = 0.6259677229 bohr<br> new conv_thr = 0.0000000010 Ry</div>
<div class="h5"> new unit-cell volume = 156.64138 a.u.^3 ( 23.21186 Ang^3 )</div>
<div class="h5">CELL_PARAMETERS (alat)<br> 0.917924413 0.000000000 0.000000000<br> -0.458962207 0.794945860 0.000000000<br> 0.000000000 0.000000000 1.222356706</div>
<div class="h5">ATOMIC_POSITIONS (crystal)<br>Ba 0.333333333 0.666666667 0.250000000<br>Ba 0.666666667 0.333333333 0.750000000</div>
<div class="h5"> </div>
<div class="h5"> Writing output data file ba.save</div>
<div class="h5"> first order wave-functions extrapolation<br> first order charge density extrapolation<br> NEW k-points:<br> k( 1) = ( 0.0000000 0.0000000 0.0000000), wk = 0.0012755<br> k( 2) = ( 0.0000000 0.0000000 0.1022615), wk = 0.0025510<br>
k( 3) = ( 0.0000000 0.0000000 0.2045230), wk = 0.0025510<br> ..............................<br> k( 97) = ( 0.2334459 0.4043402 0.1022615), wk = 0.0153061<br> k( 98) = ( 0.2334459 0.4043402 0.2045230), wk = 0.0153061<br>
k( 99) = ( 0.2334459 0.4043402 0.3067844), wk = 0.0153061<br> k( 100) = ( 0.2334459 0.4043402 -0.4090459), wk = 0.0076531</div>
<div class="h5"> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br> from scale_h : error # 1<br> Not enough space allocated for radial FFT: try restarting with a larger cell_factor.<br>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%</div>
<div class="h5"> stopping ...<br> k( 101) = ( 0.2334459 0.4941936 0.0000000), wk = 0.0153061<br> k( 102) = ( 0.2334459 0.4941936 0.1022615), wk = 0.0306122<br> k( 103) = ( 0.2334459 0.4941936 0.2045230), wk = 0.0306122<br>
k( 104) = ( 0.2334459 0.4941936 0.3067844), wk = 0.0306122<br> k( 105) = ( 0.2334459 0.4941936 -0.4090459), wk = 0.0153061<br> k( 106) = ( 0.2334459 0.5840470 0.0000000), wk = 0.0153061<br>
k( 107) = ( 0.2334459 0.5840470 0.1022615), wk = 0.0306122<br> k( 108) = ( 0.2334459 0.5840470 0.2045230), wk = 0.0306122<br> k( 109) = ( 0.2334459 0.5840470 0.3067844), wk = 0.0306122<br>
k( 110) = ( 0.2334459 0.5840470 -0.4090459), wk = 0.0153061<br> k( 111) = ( 0.3112612 0.5391203 0.0000000), wk = 0.0076531<br> k( 112) = ( 0.3112612 0.5391203 0.1022615), wk = 0.0153061<br>
k( 113) = ( 0.3112612 0.5391203 0.2045230), wk = 0.0153061<br> k( 114) = ( 0.3112612 0.5391203 0.3067844), wk = 0.0153061<br> k( 115) = ( 0.3112612 0.5391203 -0.4090459), wk = 0.0076531<br>
k( 116) = ( 0.3112612 0.6289737 0.0000000), wk = 0.0076531<br> k( 117) = ( 0.3112612 0.6289737 0.1022615), wk = 0.0153061<br> k( 118) = ( 0.3112612 0.6289737 0.2045230), wk = 0.0153061<br>
k( 119) = ( 0.3112612 0.6289737 0.3067844), wk = 0.0153061<br> k( 120) = ( 0.3112612 0.6289737 -0.4090459), wk = 0.0076531</div>
<div class="h5"> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br> from scale_h : error # 1<br> Not enough space allocated for radial FFT: try restarting with a larger cell_factor.<br>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%</div>
<div class="h5"> stopping ...</div>
<div class="h5"> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br> from scale_h : error # 1<br> Not enough space allocated for radial FFT: try restarting with a larger cell_factor.<br>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%</div>
<div class="h5"> stopping ...</div>
<div class="h5"> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br> from scale_h : error # 1<br> Not enough space allocated for radial FFT: try restarting with a larger cell_factor.<br>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%</div>
<div class="h5"> stopping ...<br>rank 3 in job 1 linux-solid_52345 caused collective abort of all ranks<br> exit status of rank 3: killed by signal 9 <br>rank 1 in job 1 linux-solid_52345 caused collective abort of all ranks<br>
exit status of rank 1: return code 0 <br>rank 0 in job 1 linux-solid_52345 caused collective abort of all ranks<br> exit status of rank 0: return code 0 <br></div></div></blockquote>
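For what it's worth, the log itself points at the cause: the cell volume
goes 238.99 -> 214.73 -> 156.64 (a.u.)^3, i.e. a linear scale factor of
(156.64/238.99)^(1/3) ~ 0.87 after only two BFGS steps, with a larger third
step coming (trust radius 0.63 bohr). Since cell_factor sizes the
pseudopotential interpolation tables once at startup, it must exceed the
largest linear contraction the cell reaches during the whole run. A sketch
of a restart &CELL namelist, with press only a placeholder (keep the
original value):

   &CELL
      cell_dynamics = 'bfgs'
      press         = 400.0   ! placeholder: the original target pressure (kbar)
      cell_factor   = 3.0     ! must cover the largest linear contraction
   /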
############################################

MAKE.SYS: I use Intel MPI 3.2.1.009 and ifort 9.0. When I calculate the Ba structure at lower pressure, it seems to work well.
# make.sys.  Generated from make.sys.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
# 	$(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
# 	$(CPP) $(CPPFLAGS) $< -o $*.F90
# 	$(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
	$(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
	$(F77) $(FFLAGS) -c $<

.c.o:
	$(CC) $(CFLAGS) -c $<

# DFLAGS  = precompilation options (possible arguments to -D and -U)
#           used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas

DFLAGS         = -D__INTEL -D__FFTW3 -D__USE_INTERNAL_FFTW -D__MPI -D__PARA
FDFLAGS        = $(DFLAGS)

# IFLAGS = how to locate directories where files to be included are
# In most cases, IFLAGS = -I../include

IFLAGS         = -I../include

# MODFLAGS = flag used by f90 compiler to locate modules
# You need to search for modules in ./, in ../iotk/src, in ../Modules
# Some applications also need modules in ../PW and ../PH

MODFLAGS       = -I./ -I../Modules -I../iotk/src \
                 -I../PW -I../PH -I../EE -I../GIPAW

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90         = mpiifort
#F90           = ifort
CC             = cc
F77            = ifort

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP            = cpp
CPPFLAGS       = -P -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS         = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS       = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS         = -O2 -assume byterecl

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -O0 -assume byterecl

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD             = mpiifort
LDFLAGS        =
LD_LIBS        =

# External Libraries (if any) : blas, lapack, fft, MPI

# If you have nothing better, use the local copy : ../flib/blas.a

BLAS_LIBS      = ../flib/blas.a

# The following lapack libraries will be available in flib/ :
# ../flib/lapack.a : contains all needed routines
# ../flib/lapack_atlas.a: only routines not present in the Atlas library
# For IBM machines with essl (-D__ESSL): load essl BEFORE lapack!
# remember that LAPACK_LIBS precedes BLAS_LIBS in loading order

LAPACK_LIBS    = ../flib/lapack.a

# nothing needed here if the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)

FFT_LIBS       = -lfftw3

# For parallel execution, the correct path to MPI libraries must
# be specified in MPI_LIBS (except for IBM if you use mpxlf)

MPI_LIBS       =

# IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS

MASS_LIBS      =

# pgplot libraries (used by some post-processing tools)

PGPLOT_LIBS    =

# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv
# ARFLAGS_DYNAMIC is used in iotk to produce a dynamical library,
# for Mac OS-X with PowerPC and xlf compiler. In all other cases
# ARFLAGS_DYNAMIC = $(ARFLAGS)

AR             = ar
ARFLAGS        = ruv
ARFLAGS_DYNAMIC= ruv

# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo

RANLIB         = ranlib

# all internal and external libraries - do not modify

LIBOBJS        = ../flib/ptools.a ../flib/flib.a ../clib/clib.a ../iotk/src/libiotk.a ../Multigrid/mglib.a
LIBS           = $(LAPACK_LIBS) $(BLAS_LIBS) $(FFT_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(PGPLOT_LIBS) $(LD_LIBS)
--
ZhouDawei
JiLin University, ChangChun, China
zdw2000@gmail.com