[Pw_forum] Qe-6.1 giving different energy values on different PCs

Lorenzo Paulatto lorenzo.paulatto at impmc.upmc.fr
Sun Jul 30 18:58:31 CEST 2017


On Jul 30, 2017 18:13, "Rajesh" <creativeidleminds at gmail.com> wrote:

Dear Dr Paulatto
After careful evaluation, I found a message reading "Maximum cpu time
exceeded". Is this the reason for the premature stopping of the simulation?
I have attached the last part of my output.



Yes, that is the reason the run stopped.
Kind regards
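For reference: when pw.x hits the max_seconds limit it stops cleanly and writes the .save data, so the calculation can be continued with restart_mode='restart'. A minimal &CONTROL sketch for the restart (the prefix matches the output below; the time limit value is illustrative and should sit somewhat below the queue wall limit, since pw.x only checks the clock between iterations — note the elapsed 86714 s overshot max_seconds = 86400 s):

```fortran
&CONTROL
   calculation  = 'vc-relax'
   restart_mode = 'restart'   ! continue from the existing .save data
   prefix       = 'BNH3_STW'  ! must match the interrupted run
   max_seconds  = 85000       ! illustrative: a bit under the wall limit
/
```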



 iteration #  1     ecut=    30.00 Ry     beta=0.70
     CG style diagonalization
     c_bands:  2 eigenvalues not converged
     c_bands:  4 eigenvalues not converged
     c_bands:  1 eigenvalues not converged
     c_bands:  4 eigenvalues not converged
     c_bands:  1 eigenvalues not converged
     c_bands:  4 eigenvalues not converged
     c_bands:  5 eigenvalues not converged
     c_bands:  5 eigenvalues not converged
     c_bands:  1 eigenvalues not converged

     Maximum CPU time exceeded

     max_seconds     =   86400.00
     elapsed seconds =   86714.24
     Calculation stopped in k-point loop, point #    10
     Calculation stopped in scf loop at iteration #     0

     Writing output data file BNH3_STW.save

     init_run     :    158.44s CPU    227.15s WALL (       1 calls)
     electrons    :  60209.16s CPU  84809.71s WALL (       5 calls)
     update_pot   :     52.88s CPU     57.17s WALL (       4 calls)
     forces       :    595.14s CPU    688.06s WALL (       4 calls)
     stress       :    826.83s CPU    919.93s WALL (       4 calls)

     Called by init_run:
     wfcinit      :    145.78s CPU    212.19s WALL (       1 calls)
     wfcinit:wfcr :    144.95s CPU    211.35s WALL (      20 calls)
     potinit      :      1.35s CPU      2.59s WALL (       1 calls)

     Called by electrons:
     c_bands      :  58214.37s CPU  81913.52s WALL (      36 calls)
     sum_band     :   1897.42s CPU   2792.40s WALL (      35 calls)
     v_of_rho     :     25.12s CPU     41.02s WALL (      40 calls)
     v_h          :      2.46s CPU      4.36s WALL (      40 calls)
     v_xc         :     27.15s CPU     43.89s WALL (      48 calls)
     newd         :     77.36s CPU     88.65s WALL (      40 calls)
     mix_rho      :      4.63s CPU      7.61s WALL (      35 calls)

     Called by c_bands:
     init_us_2    :     30.88s CPU     31.00s WALL (    1590 calls)
     ccgdiagg     :  51949.04s CPU  73007.34s WALL (     916 calls)
     wfcrot       :   6372.47s CPU   9043.56s WALL (     846 calls)

     Called by sum_band:
     sum_band:bec :      1.40s CPU      1.40s WALL (     700 calls)
     addusdens    :     89.68s CPU     99.21s WALL (      35 calls)

     Called by *cgdiagg:
     h_psi        :  27157.32s CPU  38443.52s WALL (  722577 calls)
     s_psi        :  11265.84s CPU  11329.93s WALL ( 1444308 calls)
     cdiaghg      :     39.41s CPU     46.63s WALL (     846 calls)

     Called by h_psi:
     h_psi:pot    :  27118.42s CPU  38404.08s WALL (  722577 calls)
     h_psi:calbec :   9964.93s CPU  13225.79s WALL (  722577 calls)
     vloc_psi     :  11130.60s CPU  19108.08s WALL (  722577 calls)
     add_vuspsi   :   6018.79s CPU   6065.72s WALL (  722577 calls)
     h_1psi       :  28827.80s CPU  38539.59s WALL (  721731 calls)

     General routines
     calbec       :  19693.29s CPU  26378.43s WALL ( 1445408 calls)
     fft          :     48.26s CPU     94.01s WALL (     725 calls)
     ffts         :      1.25s CPU      2.49s WALL (      75 calls)
     fftw         :   9871.72s CPU  18370.16s WALL ( 1790302 calls)
     interpolate  :      5.62s CPU     10.36s WALL (      75 calls)
     davcio       :      0.00s CPU      0.22s WALL (      20 calls)

     Parallel routines
     fft_scatter  :   7695.82s CPU  16215.42s WALL ( 1791102 calls)

     PWSCF        :     0d   17h10m CPU        1d    0h 5m WALL


   This run was terminated on:  21:35:26  28Jul2017

=------------------------------------------------------------------------------=
   JOB DONE.
=------------------------------------------------------------------------------=
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

  Process name: [[10099,1],10]
  Exit code:    2
--------------------------------------------------------------------------
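A quick way to tell whether a pw.x run really completed, rather than being killed by max_seconds, is to search the output for the relevant markers: a converged total energy is printed on lines starting with "!", while an interrupted run contains "Maximum CPU time exceeded" before "JOB DONE." (file name and contents below are illustrative stand-ins, not the actual output):

```shell
# Create a tiny stand-in for a pw.x output file (illustrative content only)
cat > pw_sample.out <<'EOF'
!    total energy              =    -123.45678901 Ry
     Maximum CPU time exceeded
   JOB DONE.
EOF

# Count occurrences of the interruption marker; a nonzero count means the
# run was stopped by max_seconds even though "JOB DONE." was printed
grep -c 'Maximum CPU time exceeded' pw_sample.out
```

"JOB DONE." alone is therefore not evidence of a finished calculation; it is printed even when the run is cut short.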


On Sun, Jul 30, 2017 at 9:33 PM, Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr> wrote:

> Dear Rajesh,
> if you want to have a meaningful answer you need to provide some useful
> information. At the very least, the full output in all the different cases.
>
> Kind regards
>
> --
> Lorenzo Paulatto
> Written on a virtual keyboard with real fingers
>
> On Jul 30, 2017 05:11, "Rajesh" <creativeidleminds at gmail.com> wrote:
>
> Dear Users
> I ran the same input script (vc-relax) on different PCs with different
> numbers of processors (24 and 70), but the energy values I am getting are
> different. With 24 CPUs the energy is higher than with 70 CPUs, and with 70
> CPUs the simulation runs for more cycles than with 24 CPUs. At the end of
> the output on the 24-CPU PC I get "JOB DONE." Is the job really completed?
> Why is this happening? Did the job finish prematurely?
>
>
> Thank you.
>
>
> Rajesh
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
>
>

