[Pw_forum] Pw_forum Digest, Vol 27, Issue 74
wangqj1
wangqj1 at 126.com
Wed Sep 23 14:12:53 CEST 2009
On 2009-09-23 18:46:48, pw_forum-request at pwscf.org wrote:
>Send Pw_forum mailing list submissions to
> pw_forum at pwscf.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
> http://www.democritos.it/mailman/listinfo/pw_forum
>or, via email, send a message with subject or body 'help' to
> pw_forum-request at pwscf.org
>
>You can reach the person managing the list at
> pw_forum-owner at pwscf.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Pw_forum digest..."
>
>
>Today's Topics:
>
> 1. how to improve the calculation speed ? (wangqj1)
> 2. Re: how to improve the calculation speed ? (Giovanni Cantele)
> 3. Re: how to improve the calculation speed ? (Lorenzo Paulatto)
> 4. write occupancy (ali kazempour)
> 5. Re: write occupancy (Prasenjit Ghosh)
>
>
>----------------------------------------------------------------------
>
>Message: 1
>Date: Wed, 23 Sep 2009 16:05:46 +0800 (CST)
>From: wangqj1 <wangqj1 at 126.com>
>Subject: [Pw_forum] how to improve the calculation speed ?
>To: pw_forum <pw_forum at pwscf.org>
>Message-ID:
> <21870763.369701253693146938.JavaMail.coremail at bj126app103.126.com>
>Content-Type: text/plain; charset="gbk"
>
>
>Dear PWSCF users
> When I use R and G parallelization to run the job, it seems to wait for the input. Following people's advice, I used k-point parallelization and it runs, but it runs too slowly. The information I can offer is as follows:
>(1) CPU usage of one node:
>Tasks: 143 total, 10 running, 133 sleeping, 0 stopped, 0 zombie
>Cpu0 : 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu1 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu2 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu3 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu4 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu5 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu6 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Cpu7 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>Mem: 8044120k total, 6683720k used, 1360400k free, 1632k buffers
>Swap: 4192956k total, 2096476k used, 2096480k free, 1253712k cached
>
>(2) The PBS job script:
>#!/bin/sh
>#PBS -j oe
>#PBS -N pw
>#PBS -l nodes=1:ppn=8
>#PBS -q small
>cd $PBS_O_WORKDIR
>hostname
>/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out
>(3)
>wang at node22:~> netstat -s
>Ip:
> 1894215181 total packets received
> 0 forwarded
> 0 incoming packets discarded
> 1894215181 incoming packets delivered
> 979205769 requests sent out
> 30 fragments received ok
> 60 fragments created
>Icmp:
> 2 ICMP messages received
> 1 input ICMP message failed.
> ICMP input histogram:
> destination unreachable: 2
> 2 ICMP messages sent
> 0 ICMP messages failed
> ICMP output histogram:
> destination unreachable: 2
>IcmpMsg:
> InType3: 2
> OutType3: 2
>Tcp:
> 5662 active connections openings
> 9037 passive connection openings
> 68 failed connection attempts
> 1 connection resets received
> 18 connections established
> 1894049565 segments received
> 979043182 segments send out
> 284 segments retransmited
> 0 bad segments received.
> 55 resets sent
>Udp:
> 165614 packets received
> 0 packets to unknown port received.
> 0 packet receive errors
> 162301 packets sent
> RcvbufErrors: 0
> SndbufErrors: 0
>UdpLite:
> InDatagrams: 0
> NoPorts: 0
> InErrors: 0
> OutDatagrams: 0
> RcvbufErrors: 0
> SndbufErrors: 0
>TcpExt:
> 10 resets received for embryonic SYN_RECV sockets
> ArpFilter: 0
> 5691 TCP sockets finished time wait in fast timer
> 25 time wait sockets recycled by time stamp
> 17369935 delayed acks sent
> 1700 delayed acks further delayed because of locked socket
> 18 packets directly queued to recvmsg prequeue.
> 8140 packets directly received from backlog
> 1422037027 packets header predicted
> 7 packets header predicted and directly queued to user
> TCPPureAcks: 2794058
> TCPHPAcks: 517887764
> TCPRenoRecovery: 0
> TCPSackRecovery: 56
> TCPSACKReneging: 0
> TCPFACKReorder: 0
> TCPSACKReorder: 0
> TCPRenoReorder: 0
> TCPTSReorder: 0
> TCPFullUndo: 0
> TCPPartialUndo: 0
> TCPDSACKUndo: 0
> TCPLossUndo: 1
> TCPLoss: 357
> TCPLostRetransmit: 6
> TCPRenoFailures: 0
> TCPSackFailures: 0
> TCPLossFailures: 0
> TCPFastRetrans: 235
> TCPForwardRetrans: 46
> TCPSlowStartRetrans: 0
> TCPTimeouts: 3
> TCPRenoRecoveryFail: 0
> TCPSackRecoveryFail: 0
> TCPSchedulerFailed: 0
> TCPRcvCollapsed: 0
> TCPDSACKOldSent: 0
> TCPDSACKOfoSent: 0
> TCPDSACKRecv: 2
> TCPDSACKOfoRecv: 0
> TCPAbortOnSyn: 0
> TCPAbortOnData: 0
> TCPAbortOnClose: 0
> TCPAbortOnMemory: 0
> TCPAbortOnTimeout: 0
> TCPAbortOnLinger: 0
> TCPAbortFailed: 0
> TCPMemoryPressures: 0
> TCPSACKDiscard: 0
> TCPDSACKIgnoredOld: 1
> TCPDSACKIgnoredNoUndo: 0
> TCPSpuriousRTOs: 0
> TCPMD5NotFound: 0
> TCPMD5Unexpected: 0
>IpExt:
> InNoRoutes: 0
> InTruncatedPkts: 0
> InMcastPkts: 0
> OutMcastPkts: 0
> InBcastPkts: 0
> OutBcastPkts: 0
>When I installed PWSCF, I only used the command lines:
>./configure
>make all
>and the installation completed successfully.
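A build without MPI support also "installs successfully", so it is worth verifying that configure actually detected the parallel environment; otherwise mpirun simply starts 8 independent copies of the same serial calculation. A minimal check, assuming the QE 4.x layout where configure writes its settings to make.sys:

./configure
# parallel build detected? DFLAGS should contain -D__MPI (and/or -D__PARA in older versions)
grep DFLAGS make.sys
# the Fortran compiler should be an MPI wrapper such as mpif90
grep MPIF90 make.sys
make all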
>
>I don't know why it runs so slowly or how to solve this problem. Any advice will be appreciated!
>
>Best regards
>Q . J. Wang
>XiangTan University
>
>
>
>------------------------------
>
>Message: 2
>Date: Wed, 23 Sep 2009 10:45:51 +0200
>From: Giovanni Cantele <Giovanni.Cantele at na.infn.it>
>Subject: Re: [Pw_forum] how to improve the calculation speed ?
>To: PWSCF Forum <pw_forum at pwscf.org>
>Message-ID: <4AB9E03F.7080600 at na.infn.it>
>Content-Type: text/plain; charset=x-gbk; format=flowed
>
>wangqj1 wrote:
>>
>> Dear PWSCF users
>> When I use R and G parallelization to run the job, it seems to wait for
>> the input.
>
>What does that mean? Does it print the output header, or the output up to
>some point, or does nothing happen at all?
It only prints the output header; no iterations appear.
>
>> Following people's advice, I used k-point parallelization and it runs,
>> but it runs too slowly. The information I can offer is as follows:
>> (1) CPU usage of one node:
>> Tasks: 143 total, 10 running, 133 sleeping, 0 stopped, 0 zombie
>> Cpu0 : 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu1 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu2 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu3 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu4 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu5 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu6 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu7 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Mem: 8044120k total, 6683720k used, 1360400k free, 1632k buffers
>> Swap: 4192956k total, 2096476k used, 2096480k free, 1253712k cached
>I'm not very expert at reading such information, but it seems that
>your node is swapping, maybe because the job requires more memory
>than is available. This usually induces a huge
>performance degradation.
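If that is the suspicion, it can be confirmed directly on the compute node (a minimal sketch using standard Linux tools, independent of PWSCF):

wang at node22:~> vmstat 1   # si/so columns persistently above zero while pw.x runs = active swapping
wang at node22:~> free -k    # compare used swap against total memory

The Swap line quoted above already shows about 2 GB of the 4 GB swap space in use, which supports this reading.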
>
>In choosing the optimal number of nodes, processes per node, etc.,
>several factors should be taken into account: memory requirements,
>communication hardware, etc. You might want to have a look at this page
>from the user guide: http://www.quantum-espresso.org/user_guide/node33.html
There are 8 processes per node on our cluster; /proc/cpuinfo reports:
model name      : Intel(R) Xeon(R) CPU    E5410  @ 2.33GHz
stepping        : 10
cpu MHz         : 2327.489
cache size      : 6144 KB
>Also, consider that, at least for CPU generations that are not very
>recent, using too many cores per CPU (e.g. if your cluster is configured
>with quad-core processors) might not improve (and may even worsen) the
>code's performance (this is also reported in previous threads in this
>forum; you can make a search).
>
>Also, this can be of interest for you:
>http://www.quantum-espresso.org/wiki/index.php/Frequently_Asked_Questions#Why_is_my_parallel_job_running_in_such_a_lousy_way.3F
I am not the superuser, so I don't know how to set the environment variable OPEN_MP_THREADS to 1; I can't find where OPEN_MP_THREADS is defined.
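For reference, the variable meant here is normally spelled OMP_NUM_THREADS (the standard OpenMP environment variable), and no superuser rights are needed: it can be exported from within the PBS script itself. A minimal sketch based on the job script quoted above:

#!/bin/sh
#PBS -j oe
#PBS -N pw
#PBS -l nodes=1:ppn=8
#PBS -q small
cd $PBS_O_WORKDIR
# one thread per MPI process, so a multithreaded BLAS/LAPACK cannot oversubscribe the 8 cores
export OMP_NUM_THREADS=1
/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out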
>
>> I don't know why it runs so slowly or how to solve this problem. Any
>> advice will be appreciated!
>Apart from better suggestions coming from more expert people, it would
>be important to see what kind of job you are trying to run. For example:
>did you start directly with a "production run" (many k-points and/or
>large unit cells and/or large cut-offs)? Did pw.x ever run on your
>cluster with simple jobs, like bulk silicon or any other (see the
>examples directory)?
I had already run this input file on my own computer (4 CPUs), and it ran well.
>
>Another possibility would be starting with the serial executable
>(disabling parallel at configure time) and then switching to parallel once
>you have checked that everything works OK.
>
>
>
>Unfortunately, in many cases the computation requires a lot of work to
>correctly set up and optimize compilation, performance, etc. (not to
>speak of results convergence issues!!!!).
>The only way is to isolate the problems and solve them one by one. Yet, I
>would say that in this respect quantum-espresso is one of the best
>choices, the code being made to work properly in as many cases as
>possible, rather than implementing all of human knowledge but only for
>those who wrote it!!!
>;-)
>
>Good luck,
>
>Giovanni
>
>
>-- 
>
>
>
>Dr. Giovanni Cantele
>Coherentia CNR-INFM and Dipartimento di Scienze Fisiche
>Universita' di Napoli "Federico II"
>Complesso Universitario di Monte S. Angelo - Ed. 6
>Via Cintia, I-80126, Napoli, Italy
>Phone: +39 081 676910
>Fax:   +39 081 676346
>E-mail: giovanni.cantele at cnr.it
>        giovanni.cantele at na.infn.it
>Web: http://people.na.infn.it/~cantele
>Research Group: http://www.nanomat.unina.it
>Skype contact: giocan74
>
>
>
>------------------------------
>
>Message: 3
>Date: Wed, 23 Sep 2009 10:50:48 +0200
>From: "Lorenzo Paulatto" <paulatto at sissa.it>
>Subject: Re: [Pw_forum] how to improve the calculation speed ?
>To: Giovanni.Cantele at na.infn.it, "PWSCF Forum" <pw_forum at pwscf.org>
>Message-ID: <op.u0pb6yqfa8x26q at paulax>
>Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes
>
>On 23 September 2009 at 10:45:51, Giovanni Cantele
><Giovanni.Cantele at na.infn.it> wrote:
>> I'm not very expert at reading such information, but it seems that
>> your node is swapping, maybe because the job requires more memory
>> than is available. This usually induces a huge
>> performance degradation.
>
>If this is the case, reducing the number of pools will reduce the amount
>of memory required per node.
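Concretely (a sketch reusing the mpirun line from the job script quoted above): with -npool 8 each of the 8 processes forms its own pool and holds a full copy of the R- and G-space data, whereas with fewer pools the processes inside a pool share it.

# 2 pools of 4 processes: k-points are split over the 2 pools, and within
# each pool the wavefunctions and FFT grids are distributed over 4
# processes, reducing the per-process memory roughly fourfold
/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 2 -in ZnO.pw.inp > ZnO.pw.out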
>
>cheers
>
>
>-- 
>Lorenzo Paulatto
>SISSA  &  DEMOCRITOS (Trieste)
>phone: +39 040 3787 511
>skype: paulatz
>www:   http://people.sissa.it/~paulatto/
>
>     *** save italian brains ***
>  http://saveitalianbrains.wordpress.com/
>
>
>------------------------------
>
>Message: 4
>Date: Wed, 23 Sep 2009 03:13:18 -0700 (PDT)
>From: ali kazempour <kazempoor2000 at yahoo.com>
>Subject: [Pw_forum] write occupancy
>To: pw <pw_forum at pwscf.org>
>Message-ID: <432077.46189.qm at web112513.mail.gq1.yahoo.com>
>Content-Type: text/plain; charset="us-ascii"
>
>Hi
>How can we force the code to print the occupancies in a simple scf run?
>I know about the partial DOS calculation, but I don't know whether
>another way also exists.
>thanks a lot
>
> Ali Kazempour
>Physics department, Isfahan University of Technology
>84156 Isfahan, Iran.            Tel-1:  +98 311 391 3733
>Fax: +98 311 391 2376      Tel-2:  +98 311 391 2375
>
>
>
>
>------------------------------
>
>Message: 5
>Date: Wed, 23 Sep 2009 12:46:45 +0200
>From: Prasenjit Ghosh <prasenjit.jnc at gmail.com>
>Subject: Re: [Pw_forum] write occupancy
>To: PWSCF Forum <pw_forum at pwscf.org>
>Message-ID:
> <627e0ffa0909230346wbdf3399i1b2a48f4edfa9c65 at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>use verbosity='high'
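That is, set verbosity='high' in the &CONTROL namelist; pw.x then prints the eigenvalues and occupation numbers for each k-point in the standard output. A minimal sketch for bulk silicon (pseudopotential and lattice data follow the standard QE examples; paths and prefix are placeholders):

&CONTROL
  calculation = 'scf'
  verbosity = 'high'    ! print eigenvalues and occupations per k-point
  prefix = 'si'
  pseudo_dir = './'
  outdir = './tmp'
/
&SYSTEM
  ibrav = 2, celldm(1) = 10.2, nat = 2, ntyp = 1,
  ecutwfc = 18.0,
  occupations = 'smearing', smearing = 'gaussian', degauss = 0.02
/
&ELECTRONS
/
ATOMIC_SPECIES
 Si  28.086  Si.pz-vbc.UPF
ATOMIC_POSITIONS alat
 Si 0.00 0.00 0.00
 Si 0.25 0.25 0.25
K_POINTS automatic
 4 4 4 0 0 0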
>
>Prasenjit.
>
>2009/9/23 ali kazempour <kazempoor2000 at yahoo.com>
>
>> Hi
>> How can we force the code to print the occupancies in a simple scf run?
>> I know about the partial DOS calculation, but I don't know whether another
>> way also exists.
>> thanks a lot
>>
>> Ali Kazempour
>> Physics department, Isfahan University of Technology
>> 84156 Isfahan, Iran. Tel-1: +98 311 391 3733
>> Fax: +98 311 391 2376 Tel-2: +98 311 391 2375
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum at pwscf.org
>> http://www.democritos.it/mailman/listinfo/pw_forum
>>
>>
>
>
>-- 
>PRASENJIT GHOSH,
>POST-DOC,
>ROOM NO: 265, MAIN BUILDING,
>CM SECTION, ICTP,
>STRADA COSTERIA 11,
>TRIESTE, 34104,
>ITALY
>PHONE: +39 040 2240 369 (O)
>             +39 3807528672 (M)
>
>------------------------------
>
>_______________________________________________
>Pw_forum mailing list
>Pw_forum at pwscf.org
>http://www.democritos.it/mailman/listinfo/pw_forum
>
>
>End of Pw_forum Digest, Vol 27, Issue 74
>****************************************