[Pw_forum] Pw_forum Digest, Vol 95, Issue 13
Umesh Roy
umesh24crp at gmail.com
Sun Jun 14 15:40:55 CEST 2015
Dear Sir,
Yes, I included spin-orbit coupling, and in that case the dynamical
matrices are written in XML format, as the program notes. The problem
was then solved by appending ".xml" to the dynamical-matrix file name
in the input.
My previous input for q2r.in was

   &input
      fildyn='au.dyn', flfrc='au888'
   /

which did not work.
But when I add ".xml" to the fildyn name, it works:

   &input
      fildyn='au.dyn.xml', flfrc='au888'
   /
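For completeness, here is a minimal sketch of the whole step (the zasr
value and the q2r.in/q2r.out file names are only illustrative, not from
my actual run):

   &input
      fildyn='au.dyn.xml', flfrc='au888', zasr='simple'
   /

   q2r.x < q2r.in > q2r.out

The interatomic force constants are then written to 'au888', which
matdyn.x can read through its own flfrc variable.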
Anyway, thank you for your reply.
Regards
---------------------------------------------------------------------
Umesh Chandra Roy
Research Scholar, School of Physical Sciences
Jawaharlal Nehru University, New Delhi-110067, India
Email: umesh24crp at gmail.com
Mobile: +919868022722
On Sun, Jun 14, 2015 at 3:30 PM, <pw_forum-request at pwscf.org> wrote:
> Send Pw_forum mailing list submissions to
> pw_forum at pwscf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://pwscf.org/mailman/listinfo/pw_forum
> or, via email, send a message with subject or body 'help' to
> pw_forum-request at pwscf.org
>
> You can reach the person managing the list at
> pw_forum-owner at pwscf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pw_forum digest..."
>
>
> Today's Topics:
>
> 1. [qe-gpu] (Anubhav Kumar)
> 2. Re: [qe-gpu] (Filippo Spiga)
> 3. Bug (or not) with epsil = .false. in PH (trunk version)
>        (Samuel Poncé)
> 4. xml format for dynamical matrix (Umesh Roy)
> 5. Re: xml format for dynamical matrix (Lorenzo Paulatto)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 13 Jun 2015 15:38:30 +0530
> From: "Anubhav Kumar" <kanubhav at iitk.ac.in>
> Subject: [Pw_forum] [qe-gpu]
> To: pw_forum at pwscf.org
> Message-ID:
> <a09d6ef04630522dbc9a233c339a7dcb.squirrel at webmail.iitk.ac.in>
> Content-Type: text/plain;charset=iso-8859-1
>
> Dear QE users
>
> I have configured qe-gpu 14.10.0 with espresso-5.1.2. Parallel compilation
> was successful, but when I run ./pw-gpu.x it gives the following output:
>
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
>
> *******************************************************************
>
> GPU-accelerated Quantum ESPRESSO (svn rev. unknown)
> (parallel: Y , MAGMA : N )
>
> *******************************************************************
>
>
> Program PWSCF v.5.1.2 starts on 13Jun2015 at 15:23:59
>
> This program is part of the open-source Quantum ESPRESSO suite
> for quantum simulation of materials; please cite
> "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
> URL http://www.quantum-espresso.org",
> in publications or presentations arising from this work. More details
> at
> http://www.quantum-espresso.org/quote
>
> Parallel version (MPI & OpenMP), running on 24 processor cores
> Number of MPI processes: 1
> Threads/MPI process: 24
> Waiting for input...
>
>
> However, when I run the same command again, it gives:
>
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
>
> Program received signal SIGSEGV: Segmentation fault - invalid memory
> reference.
>
> Backtrace for this error:
> #0 0x7FB5001B57D7
> #1 0x7FB5001B5DDE
> #2 0x7FB4FF4C4D3F
> #3 0x7FB4F3391D40
> #4 0x7FB4F33666C3
> #5 0x7FB4F3364C80
> #6 0x7FB4F33759EF
> #7 0x7FB4F345CA1F
> #8 0x7FB4F345CD2F
> #9 0x7FB500B7DBCC
> #10 0x7FB500B7094F
> #11 0x7FB500B7CC56
> #12 0x7FB500B81410
> #13 0x7FB500B7507B
> #14 0x7FB500B6179D
> #15 0x7FB500B940A0
> #16 0x7FB5009BA047
> #17 0x8A4EA3 in phiGemmInit
> #18 0x76F55E in initcudaenv_
> #19 0x66AE90 in __mp_MOD_mp_start at mp.f90:184
> #20 0x66E192 in __mp_world_MOD_mp_world_start at mp_world.f90:58
> #21 0x66DCC0 in __mp_global_MOD_mp_startup at mp_global.f90:65
> #22 0x4082A0 in pwscf at pwscf.f90:23
> #23 0x7FB4FF4AFEC4
> Segmentation fault
>
> Kindly help me solve this problem. My GPU details are:
>
> +------------------------------------------------------+
> | NVIDIA-SMI 346.46     Driver Version: 346.46         |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  Tesla C2050         Off  | 0000:02:00.0      On |                    0 |
> | 30%   62C   P12    N/A /  N/A |     87MiB / 2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  Tesla K20c          Off  | 0000:83:00.0     Off |                    0 |
> | 42%   55C    P0    46W / 225W |   4578MiB / 4799MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |
> | 34%   46C    P8    17W / 225W |     14MiB / 4799MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    1     27680     C  ./pw-gpu.x                                    4563MiB |
> +-----------------------------------------------------------------------------+
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 13 Jun 2015 11:51:25 +0100
> From: Filippo Spiga <spiga.filippo at gmail.com>
> Subject: Re: [Pw_forum] [qe-gpu]
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID: <560CDBF8-1CF6-4FD6-AD77-FECF1969B19B at gmail.com>
> Content-Type: text/plain; charset=utf-8
>
> Dear Anubhav,
>
> Run in parallel with 2 MPI processes, and make sure CUDA_VISIBLE_DEVICES is
> set such that:
>
> MPI rank 0 -> GPU id 1 (K20)
> MPI rank 1 -> GPU id 2 (K20)
>
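> A minimal sketch of what I mean, assuming bash and that the CUDA device
> ids follow the nvidia-smi ordering (the file names are only illustrative):
>
>   # hide the C2050 (id 0) so only the two K20c cards are visible
>   export CUDA_VISIBLE_DEVICES=1,2
>   mpirun -np 2 ./pw-gpu.x -inp pw.in > pw.out
>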
> Those K20 GPUs are actively cooled cards; how many sockets does this server
> (or workstation?) have?
>
> F
>
> > On Jun 13, 2015, at 11:08 AM, Anubhav Kumar <kanubhav at iitk.ac.in> wrote:
> >
> > [original message quoted in full; trimmed -- see Message 1 above]
>
> --
> Mr. Filippo SPIGA, M.Sc.
> http://fspiga.github.io ~ skype: filippo.spiga
>
> “Nobody will drive us out of Cantor's paradise.” ~ David Hilbert
>
> ------------------------------
>
> Message: 3
> Date: Sat, 13 Jun 2015 16:38:36 +0100
> From: Samuel Poncé <samuel.pon at gmail.com>
> Subject: [Pw_forum] Bug (or not) with epsil = .false. in PH (trunk
> version)
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
> <CAESzT+7K5m6_bMcOxDeDUZu8utANY+mydgZVft+kQq6xg37W7g at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear all,
>
> I found out that it is impossible to make a phonon calculation in a
> semiconductor without calculating the Born effective charges, even if we
> set epsil = .false.
>
> This is due to the routine prepare_q.f90, which is called inside
> do_phonon.f90. In that routine there is:
>
>    IF ( lgamma ) THEN
>       !
>       IF ( .NOT. lgauss ) THEN
>          !
>          ! ... in the case of an insulator at q=0 one has to calculate
>          ! ... the dielectric constant and the Born eff. charges
>          ! ... the other flags depend on input
>          !
>          epsil = .TRUE.
>          zeu = .TRUE.
>          zue = .TRUE.
>
>
> This means that if we compute q=Gamma and do not use Gaussian smearing
> (i.e. for a semiconductor or insulator), then epsil is automatically set
> to .TRUE.
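>
> For concreteness, a minimal sketch of an input that triggers this (the
> title, prefix and file names are only illustrative):
>
>    phonons of an insulator at Gamma
>    &inputph
>       prefix = 'si'        ! any insulator / semiconductor
>       fildyn = 'si.dyn'
>       epsil  = .false.     ! silently overridden to .TRUE. by prepare_q
>    /
>    0.0 0.0 0.0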
>
> I know that physically one should have such LO/TO splitting, but the user
> should be able to choose.
>
> Maybe this forcing could be mentioned in the input-variable documentation?
> Or should the default simply be .TRUE. instead of .false., without
> enforcing that rule?
>
> What do you think?
>
> Best,
>
> Samuel Ponce,
> Department of Materials, University of Oxford
>
> ------------------------------
>
> Message: 4
> Date: Sun, 14 Jun 2015 12:06:37 +0530
> From: Umesh Roy <umesh24crp at gmail.com>
> Subject: [Pw_forum] xml format for dynamical matrix
> To: pw_forum at pwscf.org
> Message-ID:
> <CAHj-Tq3jXy3Of2ecE8_nwEL-pnaxzjxD2NihDkOZV-SZYc6eaA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear All,
>          I want to calculate phonons on a q grid nq1=8, nq2=8, nq3=8 for
> gold. When I run the phonon program, the dynamical matrices are written in
> XML format, so I am not able to get the interatomic force constants (IFC)
> from them. Why are the dynamical matrices written in XML format? How can I
> get the IFC from them? Please help.
>
> Thank you in advance.
>
> ---------------------------------------------------------------------
> Umesh Chandra Roy
> Research Scholar, School of Physical Sciences
> Jawaharlal Nehru University, New Delhi-110067, India
> Email: umesh24crp at gmail.com
> Mobile: +919868022722
>
> ------------------------------
>
> Message: 5
> Date: Sun, 14 Jun 2015 08:47:12 +0200
> From: Lorenzo Paulatto <lorenzo.paulatto at impmc.upmc.fr>
> Subject: Re: [Pw_forum] xml format for dynamical matrix
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID: <557D2370.3070805 at impmc.upmc.fr>
> Content-Type: text/plain; charset=windows-1252
>
> On 14/06/2015 08:36, Umesh Roy wrote:
> > Dear All,
> >          I want to calculate phonons on a q grid nq1=8, nq2=8, nq3=8
> > for gold. When I run the phonon program, the dynamical matrices are
> > written in XML format, so I am not able to get the interatomic force
> > constants (IFC) from them. Why are the dynamical matrices written in
> > XML format? How can I get the IFC from them? Please help.
>
> If I remember correctly, they are written in XML format if you use
> spin-orbit coupling. But q2r can read them; it does not prevent you from
> generating the force constants.
>
> As always when asking for help, you should provide all the information at
> your disposal, in order to get a meaningful answer. In particular:
> 1. what you did (i.e. input files, command lines)
> 2. what you got (i.e. output files, matdyn files)
> 3. what you expected to get
> 4. why you think 2 and 3 are different
>
> kind regards
>
>
>
> --
> Dr. Lorenzo Paulatto
> IdR @ IMPMC -- CNRS & Université Paris 6
> +33 (0)1 44 275 084 / skype: paulatz
> http://www.impmc.upmc.fr/~paulatto/
> 23-24/4é16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cédex 05
>
>
>
> ------------------------------
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
> End of Pw_forum Digest, Vol 95, Issue 13
> ****************************************
>