[Pw_forum] graphene workfunction

Gabriele Sclauzero sclauzer at sissa.it
Tue Apr 13 14:41:31 CEST 2010



Chad Junkermeier wrote:
>  After looking at the examples and what has been written in the forums, 
> my best guess at the general form of the input file for pp.x (before 
> running average.x) is given here.
> 
> 
> &inputpp
>    outdir   = '/workspace/junky/WFC_2/tmp',
>    prefix   = 'WFC_2',
>    plot_num = 11,
>    filplot  = 'WFC_2.2.pot',
> /
> &plot
>    iflag = 3,
>    output_format = 3,
>    nx = 10,
>    ny = 10,
>    nz = 5,
> /
> 
> My problem is that this is taking a really long time to run; much, much 
> longer than the corresponding single-point SCF calculation run on the 
> same number of processors.  At this point, pp.x has been running for 
> 43 hours.  I can't believe that this is normal.  Here are the last 
> couple of lines from the pp.x output file:
> 
> 
>      Writing data to file  WFC_2.2.pot
>      Reading data from file  WFC_2.2.pot
> 
> 
> What is a normal length of time for this to run (I am running on 48 
> processors)?  Is my input for pp.x really screwed up?  Is pp.x just 
> sitting there eating up computer time and not computing anything?


Has the WFC_2.2.pot file been produced?
If so, the problem may be that the second part of the pp.x run (the one corresponding to 
the &plot namelist) should be performed on a single processor; in the past it used to get 
stuck when run in parallel (maybe this has been fixed in the meantime...).

I suggest first running pp.x in parallel with the &plot namelist left blank, and then 
running pp.x again in serial, specifying only the plot part.
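
Something along these lines (an untested sketch reusing the outdir/prefix/filplot values 
from your input; the file names, the fileout name and the mpirun command line are just 
placeholders, and nfile/filepp are the &plot variables as I remember them from the pp.x 
input description):

Step 1, in parallel (writes WFC_2.2.pot, no plotting):

   mpirun -np 48 pp.x < pp_step1.in > pp_step1.out

   &inputpp
      outdir   = '/workspace/junky/WFC_2/tmp',
      prefix   = 'WFC_2',
      plot_num = 11,
      filplot  = 'WFC_2.2.pot',
   /
   &plot
   /

Step 2, in serial (reads back the file written in step 1 and produces the 3D grid):

   pp.x < pp_step2.in > pp_step2.out

   &inputpp
   /
   &plot
      nfile         = 1,
      filepp(1)     = 'WFC_2.2.pot',
      iflag         = 3,
      output_format = 3,
      fileout       = 'WFC_2.2.plot',
      nx = 10,
      ny = 10,
      nz = 5,
   /

If I remember correctly, with an empty &inputpp namelist pp.x skips the calculation and 
just reads the data from filepp; if your version complains about that, you can simply 
rerun the full input in serial.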


HTH

GS


> 
> Thank you for your help.
> 

-- 


o ------------------------------------------------ o
| Gabriele Sclauzero, PhD Student                  |
| c/o:   SISSA & CNR-INFM Democritos,              |
|        via Beirut 2-4, 34014 Trieste (Italy)     |
| email: sclauzer at sissa.it                         |
| phone: +39 040 3787 511                          |
| skype: gurlonotturno                             |
o ------------------------------------------------ o


