[Pw_forum] questions about intel CPU and vc-relax using bfgs cell optimization
vega lew
vegalew at hotmail.com
Thu Jun 26 07:25:35 CEST 2008
Dear Axel,
First, I want to thank you for your kind reply; I learned a lot from it.
I typed 'uname -a' and got 'Linux node5 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:32:02 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux', so it is an x86_64 OS.
> VL> Why does updating my MKL and compilers bring me less efficiency?
>
> that has most likely other reasons (runaway processes?, other users?)
I'm sure there are no other users; I'm the admin of that cluster.
Have you tested the Intel C++ and Fortran 10.1.017 compilers for Intel 64 and MKL 10.0.3.020? Do you think they are really suitable for my cluster and QE? Do you think they are better than the 10.1.008 compilers and 10.0.011 MKL?
> VL> At last, I compiled QE for the amd64 architecture with the Intel
> VL> C++ and Fortran 10.1.017 compilers and the MKL 10.0.3.020 library, but I
> VL> find it less efficient than the QE compiled with the Intel C++ and Fortran
> VL> 10.1.008 compilers and the 10.0.011 library. The CPU usage of the QE compiled
> VL> with the 10.1.008 compiler and 10.0.011 is about 60%, but with the QE compiled
> VL> with the 10.1.017 compiler it is 10%, tested with an input file like this:
>
> how do you determine this 'efficiency'? how do you run your job?
I run the input file with the MPI command 'mpiexec -n 20 pw.x < inputfile > outputfile', and the job starts immediately. The input file looks like this:
&CONTROL
title = 'Anatase lattice BFGS' ,
calculation = 'vc-relax' ,
restart_mode = 'from_scratch' ,
outdir = '/home/vega/tmp/' ,
pseudo_dir = '/home/vega/espresso-4.0/pseudo/' ,
prefix = 'Anatase lattice default' ,
etot_conv_thr = 0.000000735 ,
forc_conv_thr = 0.0011668141375 ,
nstep = 1000 ,
/
&SYSTEM
ibrav = 6,
celldm(1) = 7.135605333,
celldm(3) = 2.5121822033898305084745762711864,
nat = 12,
ntyp = 2,
ecutwfc = 25 ,
ecutrho = 200 ,
/
&ELECTRONS
conv_thr = 7.3D-8 ,
/
&IONS
ion_dynamics = 'bfgs' ,
/
&CELL
cell_dynamics = 'bfgs' ,
cell_dofree = 'xyz' ,
/
ATOMIC_SPECIES
Ti 47.86700 Ti.pw91-sp-van_ak.UPF
O 15.99940 O.pw91-van_ak.UPF
ATOMIC_POSITIONS angstrom
Ti 0.000000000 0.000000000 0.000000000
Ti 1.888000000 1.888000000 4.743000000
Ti 0.000000000 1.888000000 2.372000000
Ti 1.888000000 0.000000000 7.115000000
O 0.000000000 0.000000000 1.973000000
O 1.888000000 1.888000000 6.716000000
O 0.000000000 1.888000000 4.345000000
O 1.888000000 0.000000000 9.088000000
O 1.888000000 0.000000000 5.141000000
O 0.000000000 1.888000000 0.398000000
O 1.888000000 1.888000000 2.770000000
O 0.000000000 0.000000000 7.513000000
K_POINTS automatic
7 7 3 1 1 1
I see 4 processes on each node with the 'top' command, so I think each core has one process.
Is there anything I am misunderstanding?
I think I should also use an MPI command like 'mpiexec -n 20 pw.x -npool 5 < inputfile > outputfile',
which divides the 20 processes into 5 k-point pools of 4 processes each, i.e. one pool per node.
I have tried it, but there was no obvious improvement: the CPU usage reported by the 'sar' command is still about 60%, with 7% for the system and the rest idle.
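To check whether the load is spread evenly over the cores, I sample per-core usage with something like this (assuming the sysstat package that provides 'sar'; the interval and count are arbitrary):

sar -P ALL 1 10   # utilization of every CPU, 10 samples at 1-second intervals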
> it is much better to compile fftw with the native gcc compiler.
> but since QE actually contains FFTW there is no need to install
> or compile it.
QE contains FFTW? Where is it? Should it be detected by the QE configure script?
If I compile FFTW with gcc, or don't compile FFTW at all, QE's configure can't detect it and shows no
information about FFTW during the configure process, even though I have added the directories to the
environment variables. So I think only the FFTW compiled with ifort can be found by my QE.
Do you think Intel MKL is needed at all? When I configured QE without Intel MKL, configure could still
find the BLAS and LAPACK sources under QE's own folder.
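In case it matters, this is roughly how I point configure at MKL (the path is from my installation, and '-lmkl_em64t' is only my guess at the right library name for MKL 10.0 on em64t):

./configure BLAS_LIBS="-L/opt/intel/mkl/10.0.3.020/lib/em64t -lmkl_em64t" \
            LAPACK_LIBS="-L/opt/intel/mkl/10.0.3.020/lib/em64t -lmkl_em64t"

If I omit BLAS_LIBS and LAPACK_LIBS, configure seems to fall back to the libraries bundled with QE, which matches what I described above.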
You mentioned OMP_NUM_THREADS again; I'm sorry, I know little about it.
Should I use an export command like 'export OMP_NUM_THREADS=1'?
If that command is enough, could you please tell me when I should type it? Before configuring QE,
or before running the mpiexec command?
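For example, is this the right shape for a job script? This is only a sketch, assuming bash and assuming mpiexec passes the environment on to all nodes (if not, I suppose MPICH2's '-genv OMP_NUM_THREADS 1' option would be needed):

export OMP_NUM_THREADS=1   # stop MKL from spawning one thread per core
mpiexec -n 20 pw.x -npool 5 < inputfile > outputfile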
And could you give me some hints about optimizing the anatase lattice with the BFGS algorithm?
Why does cell_dofree = 'xyz' in the &CELL section not prevent the lattice angles from changing,
so that the run ends with a 'not orthogonal operation' error?
Do you think I should never use BFGS to optimize the anatase lattice?
Thank you again for your detailed reply.
Vega Lew
Ph.D. Candidate in Chemical Engineering
College of Chemistry and Chemical Engineering
Nanjing University of Technology, 210009, Nanjing, Jiangsu, China
> Date: Wed, 25 Jun 2008 23:50:38 -0400
> From: akohlmey at cmm.chem.upenn.edu
> To: vegalew at hotmail.com
> CC: pw_forum at pwscf.org
> Subject: Re: [Pw_forum] questions about intel CPU and vc-relax using bfgs cell optimization
>
> On Thu, 26 Jun 2008, vega lew wrote:
>
> VL>
> VL> Dear all,
> VL>
>
> VL> I built a cluster of 5 computers with Intel Core 2 Q6600 CPUs
> VL> (quad-core) and 40 GB memory total (8 GB each) on S3000AH system boards.
> VL> The network is 1 Gbit Ethernet. I also checked that the em64t option in
> VL> the BIOS is on, so I think the Q6600 is a CPU using em64t technology. For
> VL> more information about my CPU, see
>
> more important is to determine whether you installed a 32-bit or
> a 64-bit version of the OS. you can find that out with 'uname -a'.
> for 32-bit you get (amongst others) i386 and i686 whereas for
> 64-bit you get x86_64. regardless of bios options or what cpuinfo
> shows, the cpu can handle both.
>
> [...]
>
> VL> Therefore, I updated my Intel C++ and Fortran compilers from
> VL> 10.1.008 to the latest version, 10.1.017, for Intel(R) 64, and MKL from
> VL> 10.0.011 to the latest 10.0.3.020; the file names displayed on the website were
> VL> l_cc_p_10.1.017_intel64.tar.gz, l_cc_p_10.1.017_intel64.tar.gz and
> VL> l_mkl_p_10.0.3.020.tgz. After installing all three, I compiled
> VL> the em64t version of blas95 and lapack95 in
>
> those are not needed.
>
> VL> /opt/intel/mkl/10.0.3.020/interfaces/ using the ifort under
> VL> /opt/intel/fce/10.1.017/bin/. Then I compiled mpich2 using ifort and
> VL> icc. But when I compiled fftw 2.1.5 an error occurred, so I compiled
>
> it is much better to compile fftw with the native gcc compiler.
> but since QE actually contains FFTW there is no need to install
> or compile it.
>
> VL> the fftw 2.1.5 with the 10.1.008 ifort and icc on another node with the same
> VL> hardware, then scp'd it to the master node. After all of the above was done,
> VL> I turned to compiling QE.
>
> VL> But to my surprise, QE detected my architecture as amd64, not ia32
> VL> or ia64. My first question is: does QE support the Intel EM64T
> VL> technology and take advantage of it?
>
> it is neither ia32 nor ia64. amd actually invented this 64-bit mode
> and then intel named it EM64t (to avoid having to call it amd64).
> the official linux architecture is x86_64.
>
> VL> At last, I compiled QE for the amd64 architecture with the Intel
> VL> C++ and Fortran 10.1.017 compilers and the MKL 10.0.3.020 library, but I
> VL> find it less efficient than the QE compiled with the Intel C++ and Fortran
> VL> 10.1.008 compilers and the 10.0.011 library. The CPU usage of the QE compiled
> VL> with the 10.1.008 compiler and 10.0.011 is about 60%, but with the QE compiled
> VL> with the 10.1.017 compiler it is 10%, tested with an input file like this:
>
> how do you determine this 'efficiency'? how do you run your job?
>
> since MKL will automatically multi-thread across all cores,
> you have to set the environment variable OMP_NUM_THREADS to 1
> or else you'll be oversubscribing each cpu 4x. secondly, for
> efficient operation across gigabit ethernet (which is quite slow
> and has very high latencies), you have to parallelize across
> k-point pools, at least between nodes or else your performance
> will be horrible. if you have not taken care of mkl multithreading
> all performance data will be bogus.
>
>
> VL> My second question is about the efficiency:
> VL> Which compiler and MKL version is the best one for my cluster?
>
> the one that runs correctly. the performance difference between
> different optimized implementations of BLAS is on average
> of the order of 10% of the total time. compiler impact (e.g.
> between g95/gfortran and intel 10) is of the same order.
>
> VL> Why does updating my MKL and compilers bring me less efficiency?
>
> that has most likely other reasons (runaway processes?, other users?)
>
> VL> What is the best efficiency my cluster can reach? Is 60% low or high for QE?
>
> you should be able to do better, if you run your job the right way.
> please check the documentation on how to run QE properly in parallel.
>
> to determine the performance baseline, you should first run a test
> with only one MPI task and set OMP_NUM_THREADS to 1 (read the intel
> docs about this).
>
> cheers,
> axel.
>
>
> --
> =======================================================================
> Axel Kohlmeyer akohlmey at cmm.chem.upenn.edu http://www.cmm.upenn.edu
> Center for Molecular Modeling -- University of Pennsylvania
> Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
> tel: 1-215-898-1582, fax: 1-215-573-6233, office-tel: 1-215-898-5425
> =======================================================================
> If you make something idiot-proof, the universe creates a better idiot.