[Pw_forum] I/O problem on MPI?

Charles Chen polynmr at physics.unc.edu
Fri Mar 28 19:09:43 CET 2008

Dear PWSCF users

This problem seems to originate from MPICH and LSF. Our SGI Altix does not
allow interactive jobs, so I submit jobs through bsub with the -in option.
I have successfully finished a test run of si.scf.in, which is extracted
from the GIPAW example. Now I am continuing with the NMR part: I extracted
the &inputgipaw section and saved it as si.nmr.in. But I get an error
message like this when I submit the job:

  Number of processors in use:       4
      R & G space division:  proc/pool =    4

      from gipaw_readin : error #         1
      reading inputgipaw namelist


      stopping ..

It looks as though GIPAW is started, but it has a problem reading the input file.

The MPI error message looks like this:

MPI: On host cypress, Program
/netscr/polynmr/espresso4.0cvs2/GIPAW/gipaw.x, Rank 0, Process 4934 called
MPI_Abort(<communicator>, 0)

MPI: --------stack traceback-------
line: 2 Unable to parse input as legal command or C expression.
The "backtrace" command has failed because there is no running program.
MPI: Intel(R) Debugger for applications running on IA-64, Version 10.1-32
, Build 20070829

MPI: -----stack traceback ends-----
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job

After more testing, I found that this problem always appears when the code
tries to read input from another file. For example, in the first example the
scf run completes successfully, but the subsequent bands calculation fails
with the same error.
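For reference, my submission looks roughly like the sketch below (queue name, core count, and the mpirun invocation are guesses for our site; the paths match the ones above). One thing I am wondering is whether passing the input file on the command line, rather than through stdin, would let every rank see it:

```shell
#!/bin/sh
# Hypothetical LSF submission script -- queue name and launcher flags are
# placeholders for our local setup, not a tested recipe.
#
# Variant 1: feed the input file to bsub as the job's stdin (-i flag).
bsub -q normal -n 4 -o si.nmr.out -i si.nmr.in \
    /netscr/polynmr/espresso4.0cvs2/GIPAW/gipaw.x

# Variant 2: name the input file explicitly with the executable's -in
# flag, so reading it does not depend on stdin reaching all MPI ranks.
bsub -q normal -n 4 -o si.nmr.out \
    mpirun -np 4 /netscr/polynmr/espresso4.0cvs2/GIPAW/gipaw.x -in si.nmr.in
```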


Charles Chen

More information about the users mailing list