[Pw_forum] Compiling with intel fortran, MKL and MKL FFT wrappers

vega vegalew at hotmail.com
Thu Oct 2 08:29:16 CEST 2008


Thank you, Serge.

vega
=================================================================================
Vega Lew (Weijia Liu)
Ph.D. Candidate in Chemical Engineering
State Key Laboratory of Materials-oriented Chemical Engineering
College of Chemistry and Chemical Engineering
Nanjing University of Technology, 210009, Nanjing, Jiangsu, China

--------------------------------------------------
From: "Serge Nakhmanson" <nakhmanson at anl.gov>
Sent: Wednesday, October 01, 2008 2:18 AM
To: "PWSCF Forum" <pw_forum at pwscf.org>
Subject: Re: [Pw_forum] Compiling with intel fortran, MKL and MKL FFT wrappers

> Lorenzo, Axel,
>
> Thank you for your comments. I expected the serial lib
> to be "it" but decided to check in with the forum just
> to be sure. I have not tested the new "MKL FFT" binary
> extensively yet, but I will. As I understand it, I should
> not expect a tremendous performance improvement over the
> plain-vanilla fftw2 lib; however, if I see something
> interesting I will drop a line here.
>
>
> What follows is a comment answering Vega's questions. It can
> be skipped unless one is interested in the nitty-gritty details
> and/or wants to find and expose flaws in my operating
> procedures.
>
> > could you tell me which dir should be added to the LD_LIBRARY_PATH,
> > or which flags should be added after the QE configure command?
>
> Vega,
>
> Here's what you can do, if, against Axel's advice, you
> just gotta have an MKL FFT binary in your {espresso}/bin.
>
> I usually do not use configure. Instead I edit make.sys
> directly, which seems to be much more convenient. If you
> take a look at the end of my original message you will
> see how I set all the important flags (including include
> and lib paths) to compile everything without problems.
> The only thing you have to do is put in your own paths
> to MKL instead of my paths.
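>
> For illustration only, a minimal make.sys fragment along these
> lines might look as follows. The MKL prefix below is a placeholder
> rather than my actual install, and the exact library names depend
> on your MKL version, so adjust both to your system:
>
>   # hypothetical MKL install prefix -- substitute your own
>   MKLROOT     = /opt/intel/mkl/10.0.4.023
>   DFLAGS      = -D__INTEL -D__FFTW -D__MPI -D__PARA
>   BLAS_LIBS   = -L$(MKLROOT)/lib/em64t -lmkl_intel_lp64 \
>                 -lmkl_sequential -lmkl_core
>   LAPACK_LIBS = $(BLAS_LIBS)
>
> (FFT_LIBS is covered next.)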
>
> Regarding the FFT_LIBS flag, by default, my sysadmin placed
> all the compiled MKL FFT libs into {mkl}/{number}/lib/64
> but you can place them anywhere as long as you provide a
> correct path to them in make.sys.
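>
> As a sketch (assuming MKL 10.x, where the FFTW2 wrapper sources
> ship under {mkl}/interfaces/fftw2xf and the built library is named
> libfftw2xf_intel.a -- names and make targets vary between MKL
> versions, so check your own documentation), building and linking
> the wrappers would look roughly like this:
>
>   cd $MKLROOT/interfaces/fftw2xf
>   make libem64t compiler=intel    # builds libfftw2xf_intel.a
>
> and then, in make.sys:
>
>   FFT_LIBS = $(MKLROOT)/lib/em64t/libfftw2xf_intel.a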
>
> Regarding the LD_LIBRARY_PATH, I usually set it inside my
> PBS job script, e.g.,
>
> #
> # Set up some compiler-dependent things:
> #
> if [ "$COMPILER" = intel ]; then
>
>   # Set up some things to run an Intel-compiler binary:
>   MPIRUN=/usr/local/bin/mpiexec
>   MPIFLAGS='-verbose -kill'
>   export LD_LIBRARY_PATH=/usr/local/intel/mkl/10.0.4.023/lib/em64t/:$LD_LIBRARY_PATH
>
> elif [ "$COMPILER" = pgi64 ]; then
>
>   {do something else here}
>
> else
>
>   echo "Compiler $COMPILER unsupported for parallel execution"
>   exit 1
>
> fi
>
> You can then check if all the appropriate libs are found correctly
> by doing "ldd your_binary.x" (in the same PBS script, of course)
> before you run it:
>
> # Run the job in /<local scratch>/<run>:
>
> DATE=`date`
> echo ""
> echo "Time at the start of the job is: $DATE"
> echo ""
>
> cd $SCRATCH/$RUNDIR
>
> echo ""
> echo "ldd pw.x:"
> ldd $SCRATCH/$RUNDIR/pw.$ARCH.x
> echo ""
>
> echo ""
> echo "Running the job:"
> echo "$MPIRUN $MPIFLAGS $SCRATCH/$RUNDIR/pw.$ARCH.x $NPOOL <
> $SCRATCH/$RUNDIR/$NAME.$PREFIX.pw.inp > 
> $SCRATCH/$RUNDIR/$NAME.$PREFIX.pw.out"
> echo ""
> $MPIRUN $MPIFLAGS $SCRATCH/$RUNDIR/pw.$ARCH.x $NPOOL <
> $SCRATCH/$RUNDIR/$NAME.$PREFIX.pw.inp > 
> $SCRATCH/$RUNDIR/$NAME.$PREFIX.pw.out
>
> DATE=`date`
> echo ""
> echo "Time at the end of the job is: $DATE"
> echo "Doing some housekeeping duties..."
> echo ""
>
>
> Here's a sample output from this part of the script that I get on
> my cluster, which shows that at least the MKL libraries from
> BLAS_LIBS and LAPACK_LIBS are found correctly:
>
> > Time at the start of the job is: Mon Sep 29 17:37:18 CDT 2008
> >
> >
> > ldd pw.x:
> >         libmkl_intel_lp64.so => /usr/local/intel/mkl/10.0.4.023/lib/em64t/libmkl_intel_lp64.so (0x0000002a95557000)
> >         libmkl_sequential.so => /usr/local/intel/mkl/10.0.4.023/lib/em64t/libmkl_sequential.so (0x0000002a9586d000)
> >         libmkl_core.so => /usr/local/intel/mkl/10.0.4.023/lib/em64t/libmkl_core.so (0x0000002a95a1a000)
> >         libpthread.so.0 => /lib64/tls/libpthread.so.0 (0x000000375f300000)
> >         librt.so.1 => /lib64/tls/librt.so.1 (0x000000375e900000)
> >         libm.so.6 => /lib64/tls/libm.so.6 (0x000000375e300000)
> >         libc.so.6 => /lib64/tls/libc.so.6 (0x000000375de00000)
> >         libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x000000375eb00000)
> >         libdl.so.2 => /lib64/libdl.so.2 (0x000000375e100000)
> >         /lib64/ld-linux-x86-64.so.2 (0x000000375dc00000)
> >
> >
> > Running the job:
> > /usr/local/bin/mpiexec -verbose -kill /scratch/nakhmanson/PWtest3/pw.test.x -npool 2 < /scratch/nakhmanson/PWtest3/C1c.relaxed.pw.inp > /scratch/nakhmanson/PWtest3/C1c.relaxed.pw.out
> >
> > mpiexec: resolve_exe: using absolute path "/scratch/nakhmanson/PWtest3/pw.test.x".
> > mpiexec: process_start_event: evt 2 task 0 on node09.cluster.
> > mpiexec: read_p4_master_port: waiting for port from master.
> > mpiexec: read_p4_master_port: got port 33007.
> > mpiexec: process_start_event: evt 4 task 1 on node09.cluster.
> > mpiexec: All 2 tasks (spawn 0) started.
> > mpiexec: wait_tasks: waiting for node09.cluster node09.cluster.
> > mpiexec: process_obit_event: evt 3 task 0 on node09.cluster stat 0.
> > mpiexec: process_obit_event: evt 5 task 1 on node09.cluster stat 0.
> >
> > Time at the end of the job is: Mon Sep 29 17:41:03 CDT 2008
> > Doing some housekeeping duties...
>
> Hope this helps,
>
> Serge
>
> -- 
> *********************************************************
>  Serge M. Nakhmanson               phone: (630) 252-5205
>  Assistant Scientist                 fax: (630) 252-4798
>  MSD-212, Rm. C-224
>  Argonne National Laboratory
>  9700 S. Cass Ave.
>  Argonne, IL 60439
> *********************************************************
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum
> 


