[Pw_forum] Benchmarking: openmpi-1.3 vs openmpi-1.2.8
Mahmoud Payami
mpayami at aeoi.org.ir
Sun Feb 15 11:54:45 CET 2009
Dear All,
I ran a benchmark comparing openmpi-1.2.8 and openmpi-1.3 on a
2-atom Au cluster.
The output files below show that every routine takes less time under "1.3"
than under "1.2.8", yet the total wall time under "1.3" is slightly higher:
4m44.04s against 4m43.10s, while the CPU time drops from 2m5.29s to
1m55.59s. In other words, the non-CPU part of the run (time spent waiting
on I/O or MPI communication) grows from about 158 s to about 168 s. I do
not know how to explain this. Could the bottleneck be the network
interconnect (the connection cables)?
Best regards,
Mahmoud Payami
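
PS: For anyone who wants to reproduce the comparison, a pair of runs along
the following lines produces the two outputs below (a sketch only; the
launcher paths, process count, and input file name are placeholders, not
the exact commands used):

    # time the same 2-atom Au input with each Open MPI build
    /opt/openmpi-1.2.8/bin/mpirun -np 8 pw.x -inp au2.in > out_1.2.8
    /opt/openmpi-1.3/bin/mpirun   -np 8 pw.x -inp au2.in > out-1.3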
----------------------------------------
#out_1.2.8
PWSCF : 2m 5.29s CPU time, 4m43.10s wall time
init_run : 9.50s CPU
electrons : 108.91s CPU
forces : 5.90s CPU
Called by init_run:
wfcinit : 1.58s CPU
potinit : 3.84s CPU
Called by electrons:
c_bands : 32.38s CPU ( 10 calls, 3.238 s avg)
sum_band : 29.74s CPU ( 10 calls, 2.974 s avg)
v_of_rho : 25.64s CPU ( 10 calls, 2.564 s avg)
newd : 12.15s CPU ( 10 calls, 1.215 s avg)
mix_rho : 5.26s CPU ( 10 calls, 0.526 s avg)
Called by c_bands:
init_us_2 : 0.12s CPU ( 21 calls, 0.006 s avg)
regterg : 32.10s CPU ( 10 calls, 3.210 s avg)
Called by *egterg:
h_psi : 31.51s CPU ( 43 calls, 0.733 s avg)
s_psi : 0.20s CPU ( 43 calls, 0.005 s avg)
g_psi : 0.06s CPU ( 32 calls, 0.002 s avg)
rdiaghg : 1.19s CPU ( 41 calls, 0.029 s avg)
Called by h_psi:
add_vuspsi : 0.17s CPU ( 43 calls, 0.004 s avg)
General routines
calbec : 2.26s CPU ( 57 calls, 0.040 s avg)
cft3s : 82.62s CPU ( 595 calls, 0.139 s avg)
interpolate : 9.35s CPU ( 20 calls, 0.467 s avg)
davcio : 0.00s CPU ( 9 calls, 0.000 s avg)
Parallel routines
fft_scatter : 64.88s CPU ( 595 calls, 0.109 s avg)
----------------------------------------
#out-1.3
PWSCF : 1m55.59s CPU time, 4m44.04s wall time
init_run : 8.69s CPU
electrons : 100.47s CPU
forces : 5.55s CPU
Called by init_run:
wfcinit : 1.46s CPU
potinit : 3.41s CPU
Called by electrons:
c_bands : 30.14s CPU ( 10 calls, 3.014 s avg)
sum_band : 27.11s CPU ( 10 calls, 2.711 s avg)
v_of_rho : 23.62s CPU ( 10 calls, 2.362 s avg)
newd : 11.95s CPU ( 10 calls, 1.195 s avg)
mix_rho : 4.47s CPU ( 10 calls, 0.447 s avg)
Called by c_bands:
init_us_2 : 0.10s CPU ( 21 calls, 0.005 s avg)
regterg : 29.87s CPU ( 10 calls, 2.987 s avg)
Called by *egterg:
h_psi : 29.42s CPU ( 43 calls, 0.684 s avg)
s_psi : 0.18s CPU ( 43 calls, 0.004 s avg)
g_psi : 0.06s CPU ( 32 calls, 0.002 s avg)
rdiaghg : 1.05s CPU ( 41 calls, 0.026 s avg)
Called by h_psi:
add_vuspsi : 0.16s CPU ( 43 calls, 0.004 s avg)
General routines
calbec : 1.63s CPU ( 57 calls, 0.029 s avg)
cft3s : 75.61s CPU ( 595 calls, 0.127 s avg)
interpolate : 8.18s CPU ( 20 calls, 0.409 s avg)
davcio : 0.00s CPU ( 9 calls, 0.000 s avg)
Parallel routines
fft_scatter : 57.22s CPU ( 595 calls, 0.096 s avg)
>
>> Has anybody succeeded in running QE with openmpi-1.3?
>> I get the following error message: [...]
>> from read_namelists: error #1
>> reading namelist control [...]
>> On the other hand, it works fine with openmpi-1.2.8.
>
> Try pw.x/cp.x -inp "input file name". It happens more often than
> not that MPI installations are confused by input redirection
> (pw.x/cp.x < "input file name"). Maybe your newer version is not
> installed in exactly the same way as your previous one was.
>
> Paolo
> ---
> Paolo Giannozzi, Democritos and University of Udine, Italy
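
For reference, the two invocation styles Paolo contrasts look like this
(a sketch; the input file name and process count are placeholders):

    # direct input flag -- the code opens the input file by name
    # (usually robust under MPI)
    mpirun -np 8 pw.x -inp au2.in > au2.out

    # stdin redirection -- some MPI installations do not forward
    # standard input correctly, which can break namelist reading
    mpirun -np 8 pw.x < au2.in > au2.out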