[Pw_forum] PHonon Raman Spectra error

Raymond Gasper raymond.j.gasper at gmail.com
Tue Sep 29 17:38:03 CEST 2015


- Compiled using mpif90 version 1.2.7, running with the associated mpirun
(if that makes sense; if that's not sufficient, I can dig further).
- LSF scheduler; I'm submitting via a job script that looks as follows:
------------------------------------------------------------------------
#!/bin/bash
#PBS -V
#PBS -j oe
#PBS -q low
#PBS -N VVVtest
#PBS -l nodes=master:ppn=32

cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
NPROCS=`wc -l < $PBS_NODEFILE`   # $PBS_NODEFILE has one line per allocated core
echo "This job has allocated ${NPROCS} nodes"
./run_example
-------------------------------------------------------------------------
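A side note on that script: $PBS_NODEFILE contains one line per allocated
core, so NPROCS there is really the number of MPI slots rather than nodes
(with nodes=master:ppn=32 it comes out as 32 even on a single node). A
minimal sketch that reports both, assuming the same PBS environment (the
NNODES variable is my own):

-------------------------------------------------------------------------
NPROCS=$(wc -l < "$PBS_NODEFILE")          # total MPI slots (one line per core)
NNODES=$(sort -u "$PBS_NODEFILE" | wc -l)  # distinct hosts in the allocation
echo "This job has allocated ${NPROCS} slots on ${NNODES} node(s)"
-------------------------------------------------------------------------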


The output file alas.ph.out is:
------------------------------------------------------------------------
running /home/ray/espresso-5.1.2/bin/ph.x on 32 LINUX ch_gen2 processors

     Program PHONON v.5.1.2 starts on 22Sep2015 at 12: 7:37

     This program is part of the open-source Quantum ESPRESSO suite
     for quantum simulation of materials; please cite
         "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
          URL http://www.quantum-espresso.org",
     in publications or presentations arising from this work. More details at
     http://www.quantum-espresso.org/quote

     Parallel version (MPI), running on    32 processors
     path-images division:  nimage    =       4
     K-points division:     npool     =       4
     R & G space division:  proc/nbgrp/npool/nimage =       2

     Reading data from directory:
     /home/ray/tmp/alas.save

   Info: using nr1, nr2, nr3 values from input

   Info: using nr1s, nr2s, nr3s values from input

     IMPORTANT: XC functional enforced from input :
     Exchange-correlation      =  SLA  PZ   NOGX NOGC ( 1  1  0  0 0 0)
     Any further DFT definition will be discarded
     Please, verify this is what you really want
     Parallelization info
     --------------------
     sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
     Min          75      75     30                  620      620     153
     Max          76      76     31                  623      623     154
     Sum         151     151     61                 1243     1243     307


     Error in routine phq_readin (1):
     pw.x run with a different number of processors. Use wf_collect=.true.

     Error in routine phq_readin (1):
     stopping ...
---------------------------------------------------------------------------------------------

alas.ph.rec.out looks identical. It appears I'm somehow using the wrong
parallelization. I'm still unfamiliar with the Quantum ESPRESSO input files,
so I apologize if this is a silly question!
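
The mismatch seems to come from the image parallelization: ph.x is reading
the alas.save directory with a different processor layout than pw.x used to
write it, since -nimage 4 splits the 32 processors into images of 8 (the
"R & G space division ... = 2" line above is 32 processors / 4 images / 4
pools). As the error message itself suggests, one workaround is to set
wf_collect=.true. in the &control namelist of the pw.x input, so the
wavefunctions are written in a format independent of the processor layout.
A sketch of how the example's SCF input might be regenerated (paths taken
from the output above; the rest of the input is left unchanged):

-------------------------------------------------------------------------
# Hypothetical fragment: rewrite the SCF input with wf_collect enabled, so
# pw.x saves wavefunctions in a portable format that ph.x can read back
# under a different R & G space division.
cat > alas.scf.in << EOF
 &control
    calculation = 'scf'
    prefix      = 'alas'
    outdir      = '/home/ray/tmp'
    pseudo_dir  = '/home/ray/espresso-5.1.2/pseudo'
    wf_collect  = .true.
 /
 ...
EOF
-------------------------------------------------------------------------

Alternatively, running both codes without -nimage (so ph.x uses the same
32-processor R & G division as pw.x) should avoid the mismatch without
touching the input.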

Ray Gasper
Computational Nanomaterials Laboratory
ELab 204
Chemical Engineering
University of Massachusetts Amherst
402-990-4900

On Tue, Sep 29, 2015 at 11:11 AM, nicola varini <nicola.varini at epfl.ch>
wrote:

> Dear Ray, a couple of questions:
> -which mpi and compiler version are you using?
> -which scheduler?
> -how do you submit your calculation?
> -Can you please send the output file that has been generated?
>
> Regards,
>
> Nicola
>
>
>
> On 29/09/15 17:01, Raymond Gasper wrote:
>
> Hi Pw_forum, I have a problem I haven't been able to solve:
>
> I'm trying to get the PHonon package to work for computing Raman
> spectra, and can't get the example to work. I've tried checking the archive
> and googling but cannot find a solution. I'm using QE version 5.1.2, and
> consistently get this or a similar error, and only on example 5:
> --------------------------------------------
> This job has allocated 32 nodes
>
> /home/ray/espresso-5.1.2/PHonon/examples/example05 : starting
>
> This example shows how to use pw.x and ph.x to calculate
> the Raman tensor for AlAs.
>
>   executables directory: /home/ray/espresso-5.1.2/bin
>   pseudo directory:      /home/ray/espresso-5.1.2/pseudo
>   temporary directory:   /home/ray/tmp
>   checking that needed directories and files exist... done
>
>   running pw.x as:  mpirun -v -np 32 /home/ray/espresso-5.1.2/bin/pw.x
> -nimage 4 -nk 4
>   running ph.x as:  mpirun -v -np 32 /home/ray/espresso-5.1.2/bin/ph.x
> -nimage 4 -nk 4
>
>   cleaning /home/ray/tmp... done
>   running the scf calculation... done
>   running the response calculation...Exit code -3 signaled from master
> Killing remote processes...[17] [MPI Abort by user] Aborting Program!
> [16] [MPI Abort by user] Aborting Program!
> Abort signaled by rank 17: MPI_Abort() code: 1, rank 17, MPI Abort by user
> Aborting program !
> [28] [MPI Abort by user] Aborting Program!
> MPI process terminated unexpectedly
> forrtl: error (69): process interrupted (SIGINT)
> --------------------------------------------------------------------------
>
> I've tried tweaking my environment variables and have gotten slightly
> different errors, though all originate from mpirun. With my current
> environment variables, all pw.x examples run correctly.
>
> This seems a very fundamental error, so I think I am missing something
> quite basic. Thanks for your time,
>
> Ray Gasper
> Computational Nanomaterials Laboratory
> ELab 204
> Chemical Engineering
> University of Massachusetts Amherst
> 402-990-4900
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
>
> --
> Nicola Varini, PhD
>
> Scientific IT and Application Support (SCITAS)
> Theory and simulation of materials (THEOS)
> CE 0 813 (Bâtiment CE)
> Station 1
> CH-1015 Lausanne
> +41 21 69 31332 http://scitas.epfl.ch
>
> Nicola Varini
>
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>