Dear Vladislav,
the first problem was fixed some time ago; try a more recent (or CVS)
version.
regards,
Alexander

2014-03-26 17:01 GMT+01:00 vborisov <vborisov@mpi-halle.mpg.de>:
Dear all,

I noticed a problem while using the PWCOND code to calculate the
transmission for more than one k-point within the two-dimensional BZ.
Whereas the conductance calculation works fine for a single k-point,
the error message below appears as soon as I provide a list of n >= 2
k-points in the conductance input file. It is shown here for the
standard Al-nanowire example (the same behavior is also observed for
other systems):

forrtl: severe (151): allocatable array is already allocated
Image              PC                Routine            Line        Source
pwcond.x           0000000000BEC50D  Unknown               Unknown  Unknown
pwcond.x           0000000000BEB015  Unknown               Unknown  Unknown
pwcond.x           0000000000B866D0  Unknown               Unknown  Unknown
pwcond.x           0000000000B1EDCF  Unknown               Unknown  Unknown
pwcond.x           0000000000B5D7CB  Unknown               Unknown  Unknown
pwcond.x           000000000047E8C4  init_gper_.A               72  init_gper.f90
pwcond.x           000000000045CAB9  do_cond_.A                462  do_cond.f90
pwcond.x           000000000045388A  MAIN__                     22  condmain.f90
pwcond.x           0000000000437FDC  Unknown               Unknown  Unknown
libc.so.6          00002B17BB725CDD  Unknown               Unknown  Unknown
pwcond.x           0000000000437ED9  Unknown               Unknown  Unknown
forrtl: severe (151): allocatable array is already allocated
Image              PC                Routine            Line        Source
pwcond.x           0000000000BEC50D  Unknown               Unknown  Unknown
pwcond.x           0000000000BEB015  Unknown               Unknown  Unknown
pwcond.x           0000000000B866D0  Unknown               Unknown  Unknown
pwcond.x           0000000000B1EDCF  Unknown               Unknown  Unknown
pwcond.x           0000000000B5D7CB  Unknown               Unknown  Unknown
pwcond.x           000000000047E8C4  init_gper_.A               72  init_gper.f90
pwcond.x           000000000045CAB9  do_cond_.A                462  do_cond.f90
pwcond.x           000000000045388A  MAIN__                     22  condmain.f90
pwcond.x           0000000000437FDC  Unknown               Unknown  Unknown
libc.so.6          00002B76A0D2CCDD  Unknown               Unknown  Unknown
pwcond.x           0000000000437ED9  Unknown               Unknown  Unknown
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 17595 on
node node327.cruncher exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
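
As far as I understand, forrtl severe (151) means that an ALLOCATE
statement was executed on an array that is still allocated, which would
fit the fact that the run only dies once the second k-point is
processed. A minimal sketch of that failure mode and of the usual guard
against it (the array name, sizes and loop below are hypothetical, not
the actual init_gper.f90 code):

! Minimal sketch (hypothetical names, not the PWCOND source): a
! per-k-point work array that is allocated inside the k-point loop but
! freed only once at the end aborts with severe (151) at the second
! k-point.
program alloc_per_kpoint
  implicit none
  real, allocatable :: work(:)     ! stands in for a per-k-point array
  integer :: ik
  integer, parameter :: nkpts = 2  ! n >= 2 reproduces the problem

  do ik = 1, nkpts
     ! Without this guard the second iteration would stop with
     ! "forrtl: severe (151): allocatable array is already allocated".
     if (allocated(work)) deallocate (work)
     allocate (work(100))
     ! ... per-k-point work would go here ...
  end do

  deallocate (work)
end program alloc_per_kpoint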

The input for PWSCF:

&control
   calculation = 'scf'
   restart_mode = 'from_scratch',
   pseudo_dir = '/scratch/fhgfs/vladislav/espresso-5.0.2/pseudo/',
   outdir = '/scratch/fhgfs/vladislav/espresso-5.0.2/tmp/test/',
   prefix = 'al'
/
&system
   ibrav = 6,
   celldm(1) = 5.3,
   celldm(3) = 1.414,
   nat = 2,
   ntyp = 1,
   ecutwfc = 15.0,
   occupations = 'smearing',
   smearing = 'methfessel-paxton',
   degauss = 0.01
/
&electrons
   conv_thr = 1.0e-8
   mixing_beta = 0.7
/
ATOMIC_SPECIES
 Al 26.98 Al.pz-vbc.UPF
ATOMIC_POSITIONS
 Al 0.0 0.0 0.0
 Al 0.5 0.5 0.707
K_POINTS (automatic)
 4 4 4 1 1 1

The input for PWCOND:

&inputcond
   outdir = '/scratch/fhgfs/vladislav/espresso-5.0.2/tmp/test/',
   prefixl = 'al'
   prefixs = 'al'
   tran_file = 'trans.al.Ef'
   ikind = 1
   iofspin = 1
   energy0 = 0.00d0
   denergy = -0.01d0
   ewind = 1.d0
   epsproj = 1.d-3
   delgep = 1.d-12
   cutplot = 3.d0
   nz1 = 22
&
2
0.0 0.0 0.5
0.5 0.5 0.5
1

************************************************************
************************************************************

The calculation stops after the first k-point and gives the
aforementioned message. Has anyone encountered the same problem and
perhaps knows how to solve it?

Another issue is related to the use of a distributed network
filesystem. PWCOND puts a heavy load on our filesystem, which leads to
performance problems on our high-performance cluster; in particular,
the metadata servers report a very high request load. This is not
observed with any other code. As an example, the timing report of
PWCOND from one such run is shown below:

     PWCOND       :  0h18m CPU     5h19m WALL

     init         :    432.25s CPU  18519.07s WALL (       1 calls)
     poten        :      0.49s CPU      0.52s WALL (       2 calls)
     local        :     49.24s CPU     51.44s WALL (       1 calls)

     scatter_forw :    598.01s CPU    599.04s WALL (       2 calls)

     compbs       :     12.88s CPU     12.89s WALL (       1 calls)
     compbs_2     :     10.84s CPU     10.84s WALL (       1 calls)
Notice the large difference between the CPU and WALL times for the
init subroutine. This was observed in parallel runs with different
numbers of processors, for both versions 5.0.1 and 5.0.2, and on
different architectures using the OpenMPI environment.
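
To illustrate what the numbers suggest: CPU time only counts processor
work, while WALL time counts elapsed real time, so a routine that
mostly waits on the filesystem shows a small CPU time next to a much
larger WALL time, as init does here. A minimal, self-contained sketch
of the two timers (the shell "sleep" merely stands in for an I/O wait):

! Minimal sketch: CPU vs. wall time for a process that spends most of
! its time waiting instead of computing.
program cpu_vs_wall
  implicit none
  real    :: t0, t1
  integer :: c0, c1, rate

  call cpu_time (t0)
  call system_clock (c0, rate)

  ! Stand-in for time spent waiting on the filesystem: the child
  ! process sleeps, so our own CPU time barely advances.
  call execute_command_line ('sleep 2')

  call cpu_time (t1)
  call system_clock (c1)

  print '(a,f8.2,a)', 'CPU  time: ', t1 - t0, ' s'
  print '(a,f8.2,a)', 'WALL time: ', real (c1 - c0) / real (rate), ' s'
end program cpu_vs_wall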

I would very much appreciate any help with these matters.

With kind regards,
Vladislav Borisov

Max Planck Institute of Microstructure Physics
Weinberg 2, 06120 Halle (Saale), Germany
Tel No: +49 345 5525448
Fax No: +49 345 5525446
Email: vborisov@mpi-halle.mpg.de

_______________________________________________
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum