[Pw_forum] how to improve the calculation speed?
wangqj1
wangqj1 at 126.com
Wed Sep 23 10:05:46 CEST 2009
Dear PWSCF users,
When I use R and G parallelization to run the job, it seems to hang as if waiting for input. Following people's advice, I switched to k-point parallelization, and it runs, but far too slowly. The information I can offer is as follows:
(1) The CPU usage of one node is:
Tasks: 143 total, 10 running, 133 sleeping, 0 stopped, 0 zombie
Cpu0 : 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8044120k total, 6683720k used, 1360400k free, 1632k buffers
Swap: 4192956k total, 2096476k used, 2096480k free, 1253712k cached
(2) The PBS job script:
#!/bin/sh
#PBS -j oe                    # merge stdout and stderr into one file
#PBS -N pw
#PBS -l nodes=1:ppn=8         # one node, 8 cores per node
#PBS -q small
cd $PBS_O_WORKDIR
hostname
# 8 MPI processes, split into 8 k-point pools (one process per pool)
/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out
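For reference, the only difference between the two parallelization modes is the pw.x command line. A minimal sketch (paths shortened, output file names are placeholders):

# Default R and G (plane-wave) parallelization: no -npool flag,
# all 8 processes divide the FFT grid of a single pool.
mpirun -np 8 pw.x -in ZnO.pw.inp > ZnO.rg.out

# k-point parallelization: 8 pools of one process each; every pool
# works on its own subset of the k-points independently.
mpirun -np 8 pw.x -npool 8 -in ZnO.pw.inp > ZnO.pools.out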
(3) Network statistics from one node:
wang at node22:~> netstat -s
Ip:
1894215181 total packets received
0 forwarded
0 incoming packets discarded
1894215181 incoming packets delivered
979205769 requests sent out
30 fragments received ok
60 fragments created
Icmp:
2 ICMP messages received
1 input ICMP message failed.
ICMP input histogram:
destination unreachable: 2
2 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 2
IcmpMsg:
InType3: 2
OutType3: 2
Tcp:
5662 active connections openings
9037 passive connection openings
68 failed connection attempts
1 connection resets received
18 connections established
1894049565 segments received
979043182 segments send out
284 segments retransmited
0 bad segments received.
55 resets sent
Udp:
165614 packets received
0 packets to unknown port received.
0 packet receive errors
162301 packets sent
RcvbufErrors: 0
SndbufErrors: 0
UdpLite:
InDatagrams: 0
NoPorts: 0
InErrors: 0
OutDatagrams: 0
RcvbufErrors: 0
SndbufErrors: 0
TcpExt:
10 resets received for embryonic SYN_RECV sockets
ArpFilter: 0
5691 TCP sockets finished time wait in fast timer
25 time wait sockets recycled by time stamp
17369935 delayed acks sent
1700 delayed acks further delayed because of locked socket
18 packets directly queued to recvmsg prequeue.
8140 packets directly received from backlog
1422037027 packets header predicted
7 packets header predicted and directly queued to user
TCPPureAcks: 2794058
TCPHPAcks: 517887764
TCPRenoRecovery: 0
TCPSackRecovery: 56
TCPSACKReneging: 0
TCPFACKReorder: 0
TCPSACKReorder: 0
TCPRenoReorder: 0
TCPTSReorder: 0
TCPFullUndo: 0
TCPPartialUndo: 0
TCPDSACKUndo: 0
TCPLossUndo: 1
TCPLoss: 357
TCPLostRetransmit: 6
TCPRenoFailures: 0
TCPSackFailures: 0
TCPLossFailures: 0
TCPFastRetrans: 235
TCPForwardRetrans: 46
TCPSlowStartRetrans: 0
TCPTimeouts: 3
TCPRenoRecoveryFail: 0
TCPSackRecoveryFail: 0
TCPSchedulerFailed: 0
TCPRcvCollapsed: 0
TCPDSACKOldSent: 0
TCPDSACKOfoSent: 0
TCPDSACKRecv: 2
TCPDSACKOfoRecv: 0
TCPAbortOnSyn: 0
TCPAbortOnData: 0
TCPAbortOnClose: 0
TCPAbortOnMemory: 0
TCPAbortOnTimeout: 0
TCPAbortOnLinger: 0
TCPAbortFailed: 0
TCPMemoryPressures: 0
TCPSACKDiscard: 0
TCPDSACKIgnoredOld: 1
TCPDSACKIgnoredNoUndo: 0
TCPSpuriousRTOs: 0
TCPMD5NotFound: 0
TCPMD5Unexpected: 0
IpExt:
InNoRoutes: 0
InTruncatedPkts: 0
InMcastPkts: 0
OutMcastPkts: 0
InBcastPkts: 0
OutBcastPkts: 0
When I installed PWSCF, I only used the commands:
./configure
make all
The installation finished successfully.
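As a quick sanity check (a minimal sketch; the banner text assumes a standard PWSCF build), the header of the pw.x output shows whether MPI support was actually compiled in:

# a parallel build prints a line like
#   "Parallel version (MPI), running on 8 processors"
# a serial build prints "Serial version" instead
head -n 20 ZnO.pw.out | grep -i version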
I don't know why it runs so slowly or how to solve this problem. Any advice will be appreciated!
Best regards,
Q . J. Wang
XiangTan University