[Pw_forum] Pw_forum Digest, Vol 109, Issue 25

al ahmed hamzaallal at gmail.com
Wed Aug 31 13:14:40 CEST 2016


Download the CUDA

On Thu, Aug 25, 2016 at 11:00 AM, <pw_forum-request at pwscf.org> wrote:

> Send Pw_forum mailing list submissions to
>         pw_forum at pwscf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://pwscf.org/mailman/listinfo/pw_forum
> or, via email, send a message with subject or body 'help' to
>         pw_forum-request at pwscf.org
>
> You can reach the person managing the list at
>         pw_forum-owner at pwscf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pw_forum digest..."
>
>
> Today's Topics:
>
>    1. controlling output (Murray Daw)
>    2. Re: error messege igcx (Paolo Giannozzi)
>    3. Re: controlling output (Paolo Giannozzi)
>    4. Tutorial: Install QE-GPU binaries in latest ubuntu systems
>       (v. 16.04), Nvidia Cuda 7.5 , Intel MKL and Intel MPI software
>       with NVIDIA cards hardware (Josue Itsman Clavijo Penagos)
>    5. Re: Tutorial: Install QE-GPU binaries in latest ubuntu
>       systems (v. 16.04), Nvidia Cuda 7.5 , Intel MKL and Intel MPI
>       software with NVIDIA cards hardware (Rolly Ng)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 24 Aug 2016 13:39:18 +0000
> From: Murray Daw <daw at clemson.edu>
> Subject: [Pw_forum] controlling output
> To: "pw_forum at pwscf.org" <pw_forum at pwscf.org>
> Message-ID: <BA67C13A-155E-4B92-B1A4-50F11A8C370F at clemson.edu>
> Content-Type: text/plain; charset="utf-8"
>
> I am writing a driver that will call the PWSCF functions numerous times
> (~10,000).
> It is producing more output than I need.
> Is there any way to reduce the amount of output?
> I have verbosity set to 'low' already.
> I don't want to turn it off completely, just lower it.
>
> Thanks.
>
> Best,
> Murray Daw
>
> ____________________________________________________
> MURRAY S. DAW
> R. A. Bowen Professor of Physics
> Dept of Physics & Astronomy
> Clemson University
> 202A Kinard Labs
> Clemson, SC 29634-0978
> Phone: (864)656-6702
> FAX: (864)656-0805
> e-mail: daw at clemson.edu
> website http://myweb.clemson.edu/~daw
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 24 Aug 2016 15:44:37 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] error messege igcx
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
>         <CAPMgbCsSaA5yFZVKeJ7yZRh+m6kGV6UXS3AFMKn=1Pcw4jHkBw@
> mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> If you are using the option "input_dft", check that the value you provided
> is correct. If not, check the DFT label in the header of your
> pseudopotential files (for UPF files, search for 'Exchange_Correlation').
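>
> For example, a quick way to inspect the functional declared in each file,
> as a minimal sketch (the exact header wording varies between UPF v1 and
> UPF v2):
>
> $ grep -i 'exchange.correlation\|functional' *.UPF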
>
> Paolo
>
> On Wed, Aug 24, 2016 at 11:49 AM, mohammadreza hosseini <
> mhr.hosseini at modares.ac.ir> wrote:
>
> > Dear all
> >
> > I am studying the electronic and magnetic properties of SnO2 clusters. I
> > have performed relax calculations and now I am doing an SCF job using
> > espresso v.5.1. In the SCF calculation I get this error:
> >
> >
> >      Error in routine set_dft_from_name (1):
> >       conflicting values for igcx
> >
> > I have searched various websites, in particular the espresso forum.
> > Could you please help me understand what I should do?
> >
> >
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://pwscf.org/mailman/listinfo/pw_forum
> >
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://pwscf.org/pipermail/pw_forum/attachments/20160824/
> da3d53f9/attachment-0001.html
>
> ------------------------------
>
> Message: 3
> Date: Wed, 24 Aug 2016 15:45:39 +0200
> From: Paolo Giannozzi <p.giannozzi at gmail.com>
> Subject: Re: [Pw_forum] controlling output
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
>         <CAPMgbCuBQ9w=ZKCE3WauypSQ2bAXcjy91q-umVGBRxyfkVFXgA at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> See variable "printout" in electrons.f90. Paolo
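>
> A crude stopgap, if recompiling is not an option, is to filter the output
> stream instead; a minimal sketch, assuming you mainly need the converged
> total energies (pw.x prints them on lines beginning with "!", and scf.in
> is a placeholder input name here):
>
> $ pw.x -in scf.in | grep '^!' >> energies.dat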
>
> On Wed, Aug 24, 2016 at 3:39 PM, Murray Daw <daw at clemson.edu> wrote:
>
> > I am writing a driver that will call the PWSCF functions numerous times
> > (~10,000).
> > It is producing more output than I need.
> > Is there any way to reduce the amount of output?
> > I have verbosity set to 'low' already.
> > I don't want to turn it off completely, just lower it.
> >
> > Thanks.
> >
> > Best,
> > Murray Daw
> >
> > ____________________________________________________
> > MURRAY S. DAW
> > R. A. Bowen Professor of Physics
> > Dept of Physics & Astronomy
> > Clemson University
> > 202A Kinard Labs
> > Clemson, SC 29634-0978
> > Phone: (864)656-6702
> > FAX: (864)656-0805
> > e-mail: daw at clemson.edu
> > website http://myweb.clemson.edu/~daw
> >
> >
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://pwscf.org/mailman/listinfo/pw_forum
>
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://pwscf.org/pipermail/pw_forum/attachments/20160824/
> 6ed27313/attachment-0001.html
>
> ------------------------------
>
> Message: 4
> Date: Wed, 24 Aug 2016 15:21:10 -0500
> From: Josue Itsman Clavijo Penagos <jiclavijop at unal.edu.co>
> Subject: [Pw_forum] Tutorial: Install QE-GPU binaries in latest ubuntu
>         systems (v. 16.04), Nvidia Cuda 7.5 , Intel MKL and Intel MPI
> software
>         with NVIDIA cards hardware
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
>         <CAJgMqa+H+09SSrJcHU1wTcQWzn_Sk0Jj9Mm+pGR-+ZOYUqk+ww at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear all,
>
> I'm posting the following rather introductory, not professional-level
> tutorial for installing QE-GPU binaries on recent Ubuntu systems using
> Nvidia CUDA, Intel MKL and Intel MPI software and NVIDIA Kepler-class GPU
> cards, since I think it might be useful for any of you fellow scientists
> struggling to get your GPU-enabled Quantum ESPRESSO installations working.
>
> I do not by any means pretend to offer an in-depth description of each and
> every step, since I am not a technology expert; in fact, much of this
> refers to mailing-list discussions, URLs and documentation found inside
> the installation packages themselves.
>
> *The only purpose of this post* is to give some useful advice and, mainly,
> to gather and share with whoever may be interested all the available
> information I've found, so as to provide, to the present extent of my
> technology skills, a more-or-less complete and comprehensive tutorial
> containing everything needed to install and use QE-GPU successfully.
>
> A big *thank you* to the following scholars for all of your highly valuable
> advice and key assistance to get this done:
>
>
> *Ari Paavo Seitsonen*
> *Claudio Quarti*
> and all of you
> *PW_FORUM FELLOWS*
>
>
> All trademarks, copyrights and author ownerships over texts, codes and
> original information links *are respected and are the exclusive property of
> their rightfully legitimate owners*.
>
> I apologize for any typos or vocabulary/wording errors; I'm not a native
> English speaker.
>
>
> That said, in the attached file I share what I did to get a flawless
> compilation.
>
> *Best regards, *
>
> Josué Clavijo, Dr. Sc. in Chemistry
> Assistant Professor
> Universidad Nacional de Colombia
> Science College
> Chemistry Department
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://pwscf.org/pipermail/pw_forum/attachments/20160824/
> e461b163/attachment-0001.html
> -------------- next part --------------
>
> I'm posting the following rather introductory, not professional-level
> tutorial for installing QE-GPU binaries on Ubuntu systems using Nvidia
> CUDA, Intel MKL and Intel MPI software and NVIDIA GPU cards, since I think
> it will be useful for any of you fellow scientists struggling to get your
> GPU-enabled Quantum ESPRESSO installations working.
>
> I do not by any means pretend to offer an in-depth description of each and
> every step, since I am not a technology expert; in fact, much of this
> refers to URLs and documentation found inside the installation packages
> themselves.
>
> The only purpose of this post is to give some useful advice and, mainly,
> to gather all the available information I've found, so as to provide, to
> the present extent of my technology skills, a more-or-less complete and
> comprehensive tutorial containing everything needed to install and use
> QE-GPU successfully.
>
> A BIG THANK YOU to the following scholars for all your advice and key
> assistance in getting this done:
>
> PW_FORUM FELLOWS
> Ari Paavo Seitsonen
> Claudio Quarti
>
> All trademarks, copyrights and author ownerships over texts, codes and
> original information links are respected and remain the exclusive property
> of their rightful owners.
>
> I apologize for any typos or vocabulary/wording errors; I'm not a native
> English speaker.
>
>
> THAT SAID, HERE'S WHAT I DID TO GET A FLAWLESS COMPILATION OF THE
> QE-GPU BINARIES
>
>
> BASIC HARDWARE/SOFTWARE SYSTEM SETUP EXAMPLE:
>
> -> UBUNTU MATE 16.04 LTS (many people posting on the web recommend MATE
> or another Ubuntu "flavor" instead of the original Ubuntu distribution,
> because the Unity desktop interface very often suffers crashes when using
> Nvidia cards)
>
> (NOTE: Why use a graphical interface? Mainly to be able to use PWgui for
> the QE executables, the Virtual NanoLab interface, XCrySDen and other
> useful auxiliary crystallography tools such as VESTA, and also to be able
> to use remote-desktop solutions such as TeamViewer.)
>
> -> GPU Card 1: NVIDIA QUADRO K620
> -> GPU Card 2: NVIDIA TESLA K20C
> -> CUDA 7.5
> -> NVIDIA DRIVER 364
> -> QUANTUM ESPRESSO 5.4.0
> -> QE-GPU 5.4.0
>
>
> I TESTED pw-gpu.x and ph-gpu.x using barium titanate and methylammonium
> lead iodide perovskite unit cells (from *.cif crystallographic files,
> exported to QE format using the free Virtual NanoLab GUI and edited with
> the right paths for the pseudopotential and outdir folders and a chosen
> prefix) - not very extensive benchmarks, only proof-of-working tests:
>
> pw-gpu.x works in SCF with fully relativistic pseudopotentials and
> noncollinear spin-orbit settings, and in the relax and vc-relax modes
> including force and stress minimization. The nscf mode was not tested,
> but I see no reason it would crash.
>
> ph-gpu.x calculates Raman and IR spectra using a fully relaxed cell in
> gamma-only mode, following the tutorial examples in the Quantum ESPRESSO
> packages.
>
> Various automatic k-point grids and energy cutoffs were used. The
> ./pw_cutoff.sh test for the optimal energy cutoff of the pseudopotentials
> used also works: directions posted at http://larrucea.eu/checking-optimum-cutoff-qe/ .
>
> Finally, the nvidia-smi test shows the usage of the Tesla GPU for
> calculations.
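>
> For instance, a simple way to watch the GPU while a job runs (a minimal
> sketch; the refresh interval is arbitrary):
>
> $ watch -n 2 nvidia-smi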
>
>
> I'm open to any comments, suggestions and corrections; they are all very
> welcome.
>
>
> ************************************************************
> ************************************************************
> ***********************************
>
>
>
> *********  ---- MAJOR TUTORIAL UPDATE :  AUG 24th, 2016 ---- **********
>
> ############ PRELIMINARY REMARKS: #################
>
> 1 - IF YOU'RE BEHIND A PROXY, after a fresh Ubuntu installation, do this
> to get an internet connection (for apt-get), setting the following apt
> config via the terminal:
>
> $ sudo pluma /etc/apt/apt.conf
>
> Paste the following into that file, then save and close it:
>
> Acquire::http::proxy "http://username:password@proxy-name:8080/";
> Acquire::https::proxy "http://username:password@proxy-name:8080/";
>
> 2 - Set up the Internet browser proxy config, to download the QE and
> QE-GPU packages:
>
> Set automatic proxy settings with something like the URL:
> http://proxy-name:8080/proxy.pac (consult your web proxy administrator,
> if needed)
>
> 3 - Install COMPILERS AND LIBRARIES (an example install command is given
> after this list):
>
> GCC
> G++
> GFORTRAN
> Intel MKL
> Intel MPI
>
> -> The FFTW routines are read from the MKL libraries.
>
> OPEN MPI (follow the instructions in the *.tar.gz package downloaded from
> https://www.open-mpi.org); it is entirely OPTIONAL if you are going to use
> INTEL MPI.
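>
> For the GNU toolchain part, a minimal sketch of the install command
> (package names assumed from the standard Ubuntu repositories; Intel MKL
> and Intel MPI come from Intel's own installers instead):
>
> $ sudo apt-get install gcc g++ gfortran build-essential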
>
> ############ END OF PRELIMINARY REMARKS: #################
>
>
> ############ INSTALLING NVIDIA DRIVER AND CUDA 7.5 SECTION:
> #################
>
> LATEST WORKING CONFIGURATION FOR FUTURE QE-GPU  ./configure step:
>
> INSTALL THE NVIDIA DRIVER 364
> (As stated at http://askubuntu.com/questions/760934/graphics-
> issues-after-installing-ubuntu-16-04-with-nvidia-graphics)
>
> You need the PPA APT link for the Nvidia-364 driver. First, include the
> graphics-drivers PPA repository in the Ubuntu software origins. Go to
>
> https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
>
>
> and copy the PPA link corresponding to your Ubuntu distribution.
>
> Add this link to the software origins repository listing, either from the
> terminal by running
>
> $ sudo add-apt-repository ppa:graphics-drivers/ppa
> $ sudo apt-get update
>
> or via the software origins tab of the software updates window.
>
>
> Then install the driver by typing in the terminal
>
> $ sudo apt-get install nvidia-364
>
> or by choosing the Nvidia-364 driver in the additional drivers tab of the
> software updates window.
>
> It may happen that, with a previous Nvidia driver, you are already
> experiencing graphic glitches, freezing or blinking, or that you are not
> even able to log in anymore. To fix this, follow the directions below:
>
>     Log into your account in the TTY.
>         Run $ sudo apt-get purge nvidia-*
>         Run $ sudo add-apt-repository ppa:graphics-drivers/ppa and then
> sudo apt-get update.
>         Run $ sudo apt-get install nvidia-364.
>         Reboot and your graphics issue should be fixed.
>
>     If you are unable to enter a TTY (just a black screen and a blinking
> cursor):
>         Reboot into GRUB.
>         Highlight the Ubuntu option and press e.
>         Add nouveau.modeset=0 to the end of the line beginning with the
> word "linux".
>         Press F10 to boot.
>         Follow the instructions above to install the nvidia-364 driver.
>
> INSTALL CUDA-7.5:
>
> Type the code given below, as suggested in https://www.pugetsystems.com/
> labs/hpc/NVIDIA-CUDA-with-Ubuntu-16-04-beta-on-a-laptop-
> if-you-just-cannot-wait-775/ :
>
> sudo apt-get install ca-certificates-java default-jre default-jre-headless
> fonts-dejavu-extra freeglut3 freeglut3-dev java-common libatk-wrapper-java
> libatk-wrapper-java-jni  libdrm-dev libgl1-mesa-dev libglu1-mesa-dev
> libgnomevfs2-0 libgnomevfs2-common libice-dev libpthread-stubs0-dev
> libsctp1 libsm-dev libx11-dev libx11-doc libx11-xcb-dev libxau-dev
> libxcb-dri2-0-dev libxcb-dri3-dev libxcb-glx0-dev libxcb-present-dev
> libxcb-randr0-dev libxcb-render0-dev libxcb-shape0-dev libxcb-sync-dev
> libxcb-xfixes0-dev libxcb1-dev libxdamage-dev libxdmcp-dev libxext-dev
> libxfixes-dev libxi-dev libxmu-dev libxmu-headers libxshmfence-dev
> libxt-dev libxxf86vm-dev lksctp-tools mesa-common-dev x11proto-core-dev
> x11proto-damage-dev x11proto-dri2-dev x11proto-fixes-dev x11proto-gl-dev
> x11proto-input-dev x11proto-kb-dev x11proto-xext-dev
> x11proto-xf86vidmode-dev xorg-sgml-doctools xtrans-dev libgles2-mesa-dev
> nvidia-modprobe build-essential
>
> (FROM THE LINK: You can "sudo apt-get install" the above list and that
> should get most or all of the dependencies. There is a possibility that
> there will still be missing debs. For example, I added the last three
> entries to the list when I discovered that they were missing on my new
> 16.04 install after the others were installed. That's probably because
> they were already installed on the 15.04 system I ran "apt-get -s install
> cuda" on, i.e. they didn't come up as needed dependencies because they
> were already installed.)
>
>         USE THE CUDA runtime installer. (That's better than the *.deb file
> option, since you are able NOT to install the bundled NVIDIA driver (which
> would overwrite the 364 driver) and just install the CUDA Toolkit and
> Samples.)
>
> Download the CUDA .run file from the NVIDIA download site.  I used this,
> http://developer.download.nvidia.com/compute/cuda/7.5/
> Prod/local_installers/cuda_7.5.18_linux.run
>
> $ chmod 755 cuda_7.5.18_linux.run
>
> $ sudo ./cuda_7.5.18_linux.run --override
>
> NOTICE: The "--override" is needed so you don't get the fatal error
> saying "Toolkit: Installation Failed. Using unsupported Compiler", which
> the installer raises when it finds a GCC version newer than 4.9, with
> which CUDA 7.5 seems to be incompatible.
>
> Be sure to NOT install the NVIDIA driver that is in the .run file since
> you already have a more up to date version installed, as said before.
>
> YOU SHOULD GET FINALLY:
>
> Driver:   Not Selected
> Toolkit:  Installed in /usr/local/cuda-7.5
> Samples:  Installed in /usr/local/cuda-7.5
>
> 3 - To check whether CUDA is working and the Tesla card is addressed
> properly, open a terminal window and go to the deviceQuery folder:
>
>                 $ cd /home/quantum/NVIDIA_CUDA-7.5_Samples/1_Utilities/
> deviceQuery
>
>         # AND compile deviceQuery executable:
>
>                 $ make
>
>         # Then type
>
>                 $ ./deviceQuery
>
>
>         # If CUDA was properly installed and the Tesla card is recognized,
> you should get something like:
>
>
> $ ./deviceQuery Starting...
>
>  CUDA Device Query (Runtime API) version (CUDART static linking)
>
> Detected 2 CUDA Capable device(s)
>
> Device 0: "Tesla K20c"
>   CUDA Driver Version / Runtime Version          6.0 / 6.0
>   CUDA Capability Major/Minor version number:    3.5
>   Total amount of global memory:                 4800 MBytes (5032706048
> bytes)
>   (13) Multiprocessors, (192) CUDA Cores/MP:     2496 CUDA Cores
>   GPU Clock rate:                                706 MHz (0.71 GHz)
>   Memory Clock rate:                             2600 Mhz
>   Memory Bus Width:                              320-bit
>   L2 Cache Size:                                 1310720 bytes
>   Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536,
> 65536), 3D=(4096, 4096, 4096)
>   Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
>   Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048
> layers
>   Total amount of constant memory:               65536 bytes
>   Total amount of shared memory per block:       49152 bytes
>   Total number of registers available per block: 65536
>   Warp size:                                     32
>   Maximum number of threads per multiprocessor:  2048
>   Maximum number of threads per block:           1024
>   Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
>   Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
>   Maximum memory pitch:                          2147483647 bytes
>   Texture alignment:                             512 bytes
>   Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
>   Run time limit on kernels:                     No
>   Integrated GPU sharing Host Memory:            No
>   Support host page-locked memory mapping:       Yes
>   Alignment requirement for Surfaces:            Yes
>   Device has ECC support:                        Enabled
>   Device supports Unified Addressing (UVA):      Yes
>   Device PCI Bus ID / PCI location ID:           4 / 0
>   Compute Mode:
>      < Default (multiple host threads can use ::cudaSetDevice() with
> device simultaneously) >
>
> Device 1: "Quadro K620"
>   CUDA Driver Version / Runtime Version          6.0 / 6.0
>   CUDA Capability Major/Minor version number:    5.0
>   Total amount of global memory:                 2048 MBytes (2147155968
> bytes)
>   ( 3) Multiprocessors, (128) CUDA Cores/MP:     384 CUDA Cores
>   GPU Clock rate:                                1124 MHz (1.12 GHz)
>   Memory Clock rate:                             900 Mhz
>   Memory Bus Width:                              128-bit
>   L2 Cache Size:                                 2097152 bytes
>   Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536,
> 65536), 3D=(4096, 4096, 4096)
>   Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
>   Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048
> layers
>   Total amount of constant memory:               65536 bytes
>   Total amount of shared memory per block:       49152 bytes
>   Total number of registers available per block: 65536
>   Warp size:                                     32
>   Maximum number of threads per multiprocessor:  2048
>   Maximum number of threads per block:           1024
>   Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
>   Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
>   Maximum memory pitch:                          2147483647 bytes
>   Texture alignment:                             512 bytes
>   Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
>   Run time limit on kernels:                     Yes
>   Integrated GPU sharing Host Memory:            No
>   Support host page-locked memory mapping:       Yes
>   Alignment requirement for Surfaces:            Yes
>   Device has ECC support:                        Disabled
>   Device supports Unified Addressing (UVA):      Yes
>   Device PCI Bus ID / PCI location ID:           3 / 0
>   Compute Mode:
>      < Default (multiple host threads can use ::cudaSetDevice() with
> device simultaneously) >
> > Peer access from Tesla K20c (GPU0) -> Quadro K620 (GPU1) : No
> > Peer access from Quadro K620 (GPU1) -> Tesla K20c (GPU0) : No
>
> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.0, CUDA Runtime
> Version = 6.0, NumDevs = 2, Device0 = Tesla K20c, Device1 = Quadro K620
> Result = PASS
> ------------------------------------------------------------
> ------------------------------------------------------------
> -------------------------
>
> 4 - It does not seem necessary to install the basic versions of the fftw3,
> MPI (message passing interface) and MPICH libraries. However, if you do,
> search the Software Center for each one but install the -dev variants,
> because the Quantum ESPRESSO and QE-GPU builds link against the -dev
> libraries and NOT the plain runtime packages (e.g., libfftw3-mpi-dev and
> NOT libfftw3-mpi3!). See the example below.
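>
> A minimal sketch of such an install (package names assumed from the
> standard Ubuntu 16.04 repositories; only needed if you are not relying on
> MKL for the FFTs):
>
> $ sudo apt-get install libfftw3-dev libfftw3-mpi-dev libopenmpi-dev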
>
> 5 - The phiGEMM package comes bundled with the QE-GPU package, so it is
> NOT necessary to download the phiGEMM package separately.
>
> 6 - In ./configure, it seems that the openmp option blocks the use of the
> lapack and fftw3 libraries. Some experimentation with the ./configure
> parameters and flags is needed to find the optimal configuration and the
> best final builds. -> ANSWER: use openmp with the Intel MKL and MPI
> libraries.
>
>
> ############ END OF INSTALLING NVIDIA DRIVER AND CUDA 7.5 SECTION
> #################
>
>
> POST-INSTALLATION CHECKS for CUDA and Tesla card proper operation:
>
> ( Remember that "caja" replaces nautilus and "pluma" replaces gedit IN
> UBUNTU MATE )
>
> PLEASE take into account the following, AFTER INSTALLING CUDA-7.5:
>
> edit /usr/local/cuda/include/host_config.h and comment out line 115:
>
> $ sudo pluma /usr/local/cuda/include/host_config.h
>
> Line 115, once commented out, should read:
> //#error -- unsupported GNU version! gcc versions later than 4.9 are not
> supported!
>
> THAT PREVENTS FATAL ERRORS in the QE-GPU compilation steps, since CUDA
> 7.5 and GCC 5 or later are INCOMPATIBLE.
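>
> As a possible shortcut, this one-liner comments it out (just a sketch; it
> assumes the #error line still has its stock wording, so check the file
> afterwards):
>
> $ sudo sed -i 's|^#error -- unsupported GNU version|// &|' \
>     /usr/local/cuda/include/host_config.h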
>
>
> **** PRE-QE-GPU CONFIGURATION Requirements: **********
>
>
> 1. MAKE SURE YOU HAVE INSTALLED ALL THE REQUIRED LIBRARIES
>
> 2. DECLARE some environment variables by pasting the following as the
> bottom lines of your .bashrc (examples given; replace them with the actual
> paths for your system):
>
> $ export PATH=/home/quantum/Descargas/QE-5.4/espresso-5.4.0:$PATH
> $ export PATH=/home/quantum/Descargas/QE-5.4/PWgui-5.4.0:$PATH
> $ export PHI_DGEMM_SPLIT=0.950
> $ export PHI_ZGEMM_SPLIT=0.950
> $ export PATH=/usr/local/cuda-7.5/bin:$PATH
> $ export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH
> $ source /opt/intel/bin/compilervars.sh intel64
> $ export PATH=/opt/intel/bin:$PATH
> $ export LD_LIBRARY_PATH=/opt/intel/lib/intel64:$LD_LIBRARY_PATH
> $ export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2016.2.181/linux/mpi/intel64/lib:$LD_LIBRARY_PATH
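>
> After editing .bashrc, apply the changes to the current shell (or simply
> open a new terminal):
>
> $ source ~/.bashrc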
>
> 3. Rebuild library configuration (examples given, replace with the actual
> paths):
>
>         Open a nautilus (MATE: caja) window AS ROOT (nautilus: the
> graphical file explorer of Debian-based Linux distros with the Unity
> desktop; caja is the nautilus equivalent in the MATE desktop flavor).
>
>         Note: You may need to install gksu first: sudo apt-get install
> gksu
>
>                 $ sudo gksu nautilus (sudo gksu caja)
>
>         Go to the file /etc/ld.so.conf.d/x86_64-linux-gnu.conf
>
>         add the necessary library paths, one per line, after the existing
> lines (example):
>
>         # Multiarch support
>                 /lib/x86_64-linux-gnu
>                 /usr/lib/x86_64-linux-gnu
>                 /usr/local/cuda-7.5/lib64
>                 /opt/intel/composer_xe_2015.3.187/mkl/lib/intel64
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mkl/lib/intel64_lin
>                 /opt/intel/compilers_and_libraries/linux/mkl/lib/
> intel64_lin
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mpi/intel64/lib
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mpi/intel64/lib
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mkl/interfaces/fftw3xc
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mkl/interfaces/fftw3x_cdft
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mkl/interfaces/fftw3x_cdft/obj_intel64_lp64
>                 /opt/intel/compilers_and_libraries_2016.3.223/linux/
> mkl/interfaces/fftw3xf
>
>         save file and exit the nautilus window.
>
>         Ready to Rebuild library configuration....
>
>         In terminal:
>
>                 $ sudo ldconfig
>
>         Check if libraries are properly installed and listed:
>
>                 $ ldconfig -p |less #Scroll lines with the mouse wheel,
> when "END" appears, press q
>
>         MAKE SURE YOU SEE THE MKL, OPENMP, CUDA AND INTEL MPI libraries
> (libmkl..., etc.) and exit the terminal.
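>
>         A quicker spot check, as a sketch (library name patterns assumed
> from standard MKL, CUDA and Intel MPI installations):
>
>                 $ ldconfig -p | grep -E 'libmkl|libcudart|libmpi'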
>
> **** END OF PRE-QE-GPU CONFIGURATION Requirements **********
>
>
> **** AND AT LAST, THE QE-GPU CONFIGURATION ****
>
>
> DOWNLOAD THE QUANTUM ESPRESSO AND QE-GPU PACKAGES FROM THEIR RESPECTIVE
> URLs.
>
> ** NOTE: AN ERROR ABOUT TOO MANY DIRECTORY LEVELS OFTEN APPEARS. INSTALL
> ESPRESSO IN A TOP-LEVEL DIRECTORY TO AVOID IT (e.g. /home). **
>
> 0.1 Unpack the espresso-5.4.0 tar.gz package in /home/your-username or
> just in /home .
>
> 0.2 Move the packages
>
>         atomic-5.4.0.tar.gz
>         GWW-5.4.0.tar.gz
>         neb-5.4.0.tar.gz
>         PHonon-5.4.0.tar.gz
>         pwcond-5.4.0.tar.gz
>         tddfpt-5.4.0.tar.gz
>         xspectra-5.4.0.tar.gz
>
> to the "archive" folder in the espresso root directory.
>
> 1. Copy QE-GPU into the espresso directory
>
>         Move to the espresso root directory, uncompress the archive
> $ tar zxvf QE-GPU-<TAG-NAME>.tar.gz
>
>         create a symbolic link with the name GPU
> $ ln -s QE-GPU-<TAG-NAME> GPU
>
>         Replace <TAG-NAME> with the ACTUAL tag name/id (example: 5.4.0).
>
> 2. Run QE-GPU configure (in terminal, from GPU dir):
>
>         NOTICE: I did not use the --with-scalapack option because this is
> not a cluster installation.
>
> $ cd /opt/intel/compilers_and_libraries_2016.2.181/linux/mkl/bin
>
> $ source mklvars.sh intel64 lp64
>
>         Check that INTEL MPI provides the mpirun command by typing, in the
> same terminal:
>
> $ cd /opt/intel/compilers_and_libraries_2016.3.223/linux/mpi/intel64/bin
>
> $ source mpivars.sh release
>
> $ mpirun
>
>         The mpirun command above prints a lot of lines that should end
> with something like:
>
> Intel(R) MPI Library for Linux* OS, Version 5.1.3 Build 20160601 (build
> id: 15562)
> Copyright (C) 2003-2016, Intel Corporation. All rights reserved.
>
>
>         NOW GO BACK TO THE GPU FOLDER INSIDE espresso-x.x.x folder:
>
> $ cd /home/quantum/Descargas/QE-5.4/espresso-5.4.0/GPU
>
> $ ./configure --enable-parallel --enable-openmp --enable-cuda
> --without-scalapack --with-gpu-arch=sm_35
> --with-cuda-dir=/usr/local/cuda-7.5/bin --without-magma --with-phigemm
>
>
> 3. The ./configure command should create new files in the espresso root
> folder:
>
>                 make.sys
>
>                 Makefile.gpu
>
>         # Since you are in a terminal inside the
> /home/quantum/Descargas/QE-5.4/espresso-5.4.0/GPU folder, type
>
>                 cd ..
>
>         # to go back to /home/quantum/Descargas/QE-5.4/espresso-5.4.0
> (in the terminal).
>
> 4. ALERT: Before running make, edit make.sys:
>
>         - if you are using Intel MPI, please add "-DMPICH_SKIP_MPICXX" to
> the DFLAGS in make.sys
>         to ignore the MPI C++ bindings.
>
>         ADD to THE make.sys FILE THE FOLLOWING LD_LIBS flags:
>
>         -L/usr/lib64 -lstdc++
>
>         --> THE ABOVE LINE IS CRUCIAL TO AVOID ERRORS during the make
> compilation, errors such as:
>
>         -> ERROR: Too many symbolic link levels
>         -> stdc++ errors
>         -> A very buggy compilation
>         -> Compilation finishes, but the pw-gpu.x or ph-gpu.x executables
> do not work properly
>
>         ...AND DON'T FORGET to add the flag -D_FORCE_INLINES to the
> NVCCFLAGS line in make.sys:
>
>         ...
>         NVCCFLAGS        = -O3 -gencode arch=compute_35,code=sm_35 -D_FORCE_INLINES
>
>
>         This is required in make.sys for the ph-gpu.x compilation!
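>
>         Putting these edits together, the relevant make.sys lines would
> look roughly like this (only a sketch; the library paths and the rest of
> each line depend on what ./configure detected on your system):
>
>         DFLAGS    = ... -DMPICH_SKIP_MPICXX
>         NVCCFLAGS = -O3 -gencode arch=compute_35,code=sm_35 -D_FORCE_INLINES
>         LD_LIBS   = -L/usr/lib64 -lstdc++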
>
>
>  -> NOTICE: This may be too naive or not worth commenting on, but DO NOT
> attempt to compile ph-gpu.x BEFORE compiling pw-gpu.x. If you do so,
> you'll SURELY get a bunch of fatal errors saying something like
>
>         error: modules not found.
>
> If for any reason you do not use the all-gpu target, use THE VERY SAME
> logical compilation order:
>
> make -f Makefile.gpu pw-gpu.x
> make -f Makefile.gpu ph-gpu.x
> make -f Makefile.gpu neb-gpu.x (if needed)
>
>
> 5. FINAL: To build pw-gpu.x, ph-gpu.x and neb-gpu.x executables:
>
>                 make -f Makefile.gpu all-gpu
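>
>                 # A quick sanity check after the build (assuming, as in a
>                 # standard QE build, that the executables are symlinked
>                 # into the espresso bin/ directory):
>
>                 ls -l bin/ | grep gpu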
>
>
>
>
>                                 ****  MISSION COMPLETE. ****
>
>
>
> *** RAMAN: POST-PROCESSING THE *.DMAT FILES THAT ph.x / ph-gpu.x CREATES :
> *** (always remember to type the actual paths)
>
> create a <custom-name>.dm.in file containing (examples given):
>
> &input fildyn='/home/quantum/PWgui-5.4.0/dmat.Mapbi3pospress',
> asr='simple' /
>
> then, in a terminal, go to dynmat.x location:
>
> $ cd /home/quantum/Descargas/QE-5.4-NO-GPU/espresso-5.4.0/bin
>
> and type
>
> $ ./dynmat.x < /home/quantum/PWgui-5.4.0/mapbi3pospress.dm.in >
> /home/quantum/PWgui-5.4.0/mapbi3pospress-asr-simple.dm.out
>
> Note that in the output file name, "asr-simple" indicates the acoustic sum
> rule used;
> there are three basic options available:
>
> simple
> crystal
> zero-dim
>
> You should know which one is best for each material you study (ask your
> tutor/supervisor).
>
> The resulting file, mapbi3pospress-asr-simple.dm.out in the example,
> contains the frequencies and intensities for both the IR and Raman
> spectra, ready to plot.
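>
> Once you extract the frequency and intensity columns into a plain
> two-column text file (spectrum.dat is a hypothetical name here), a quick
> stick-spectrum plot can be made with gnuplot, for example:
>
> $ gnuplot -e "plot 'spectrum.dat' u 1:2 w impulses; pause -1"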
>
>
> ****  ---- END OF TUTORIAL - MAJOR UPDATE :  AUG 24TH, 2016 ---- ****
>
>
> THAT'S ALL, THANKS FOR READING AND COMMENTING.
>
>
> KIND REGARDS,
>
> Josué Clavijo
> Universidad Nacional de Colombia
> Science College
> Chemistry Department
>
> ************************************************************
> ************************************************************
> ***********************************
>
> LINKS:
>
> http://askubuntu.com/questions/760934/graphics-issues-after-installing-
> ubuntu-16-04-with-nvidia-graphics
>
> https://blog.levilentz.com/?p=312
>
> https://www.pugetsystems.com/labs/hpc/NVIDIA-CUDA-with-
> Ubuntu-16-04-beta-on-a-laptop-if-you-just-cannot-wait-775/
>
> -> PPA APT link for the Nvidia-364 driver repository:
>
> https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 25 Aug 2016 08:07:45 +0800
> From: Rolly Ng <rollyng at gmail.com>
> Subject: Re: [Pw_forum] Tutorial: Install QE-GPU binaries in latest
>         ubuntu systems (v. 16.04), Nvidia Cuda 7.5 , Intel MKL and Intel
> MPI
>         software with NVIDIA cards hardware
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID: <f9981ea0-3efb-9b5f-8434-bc037479d611 at gmail.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Dear Josué,
>
> Very nice and detailed steps. Thank you!
>
> I would like to add that it also works for QE ver 5.3.0 on OpenSUSE 13.2.
>
> Supported GPUs include:
>
> 1) Tesla C2050
>
> 2) Tesla C2070/75
>
> 3) Titan Z/Black/Original
>
> Regards,
>
> Rolly
>
>
> On 08/25/2016 04:21 AM, Josue Itsman Clavijo Penagos wrote:
> > Dear all,
> >
> > I'm posting the following rather introductory, not professional-level
> > tutorial for Installing QE-GPU binaries In recent ubuntu systems using
> > Nvidia Cuda, Intel MKL, Intel MPI software and NVIDIA GPU kepler-model
> > cards hardware, since i think this might be useful for any of you
> > fellow scientists struggling to get working their quantum espresso
> > GPU-enabled installations.
> >
> > I, by no means, do pretend to offer an in-depth description of every
> > and all given steps, since i'm no a technology expert; in fact, much
> > of this are referred to mail discussions, Urls and documentation
> > readings found out inside the same used installation packages.
> >
> > *The only purpose of this post* is to give some useful advice and,
> > mainly, to unify and share to whom may be interested all the available
> > information I've found in order to get, to the present extent of my
> > technology skills, a more-or-less complete and comprehensive tutorial
> > containing all what it's needed to get success about QE-GPU
> > installation and usage.
> >
> > A big *thank you* to the following scholars for all of your highly
> > valuable advice and key assistance to get this done:
> >
> >
> > *Ari Paavo Seitsonen*
> > *Claudio Quarti*
> > and all of you
> > *PW_FORUM FELLOWS*
> >
> > All trademarks, copyrights and author ownerships over texts, codes and
> > original information links *are respected and are the exclusive
> > property of their rightfully legitimate owners*.
> >
> > I apologize for any typos or vocabulary/redaction errors; I'm not a
> > native english speaker.
> >
> >
> > All said above, in the attached file I share what I did to get a
> > flawless compilation.
> >
> > *Best regards, *
> >
> > Josué Clavijo, Dr. Sc. in Chemistry
> > Assistant Professor
> > Universidad Nacional de Colombia
> > Science College
> > Chemistry Department
> >
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://pwscf.org/mailman/listinfo/pw_forum
>
> --
> PhD. Research Fellow,
> Dept. of Physics & Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax: +852 3442 0538
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://pwscf.org/pipermail/pw_forum/attachments/20160825/
> 6f3a4400/attachment-0001.html
>
> ------------------------------
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
> End of Pw_forum Digest, Vol 109, Issue 25
> *****************************************
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.quantum-espresso.org/pipermail/users/attachments/20160831/48070ae4/attachment.html>

