[Pw_forum] Problem in running nscf calculation at higher number of k points

Gabriele Sclauzero sclauzer at sissa.it
Fri Nov 13 08:37:40 CET 2009

Dimpy Sharma wrote:
> Hi Quantum espresso users,
> Hi Quantum ESPRESSO users,
> I need to calculate the projected density of states and band structure for 
> my system, and that's why I have been running an nscf calculation with 
> a higher number of k-points, but the problem is that my calculation 
> crashes without giving me any error message. The nscf calculation runs 
> successfully up to 4 k-points, but if I increase it to 5 or more 
> it crashes. 

Have you exceeded the walltime limit?

> I run my calculation with very high memory, on 80
> nodes. Can anybody please give me a suggestion?

If you are using 80 processors, either you have a very big system and don't need a 
high number of k-points (though I wouldn't call 4-5 a high number), or you are using a 
lot of pools to distribute the k-points.
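For reference, the number of k-point pools is chosen at launch time with the -npool flag; a minimal sketch of such a launch line (the process count, pool count, and file names here are illustrative, not taken from your job):

```shell
# Hypothetical launch: 80 MPI processes split into 8 k-point pools.
# Each pool must receive at least one k-point, so npool should not
# exceed the number of k-points in the nscf input.
NPROC=80
NPOOL=8
echo "procs per pool: $((NPROC / NPOOL))"
# mpirun -np $NPROC pw.x -npool $NPOOL -in nscf.in > nscf.out
```

Note that pw.x will stop (or waste processors) if npool does not suit the k-point count, so with only 4-5 k-points a large pool count is a plausible source of trouble.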

Before posting again, you should check whether you meet the same problem on a smaller system, 
or even in a serial run.
Also check the logs (stderr and stdout) from the nodes where the job was running (these 
are usually returned to the user at the end of a parallel job): if the program crashed, 
there should be some (possibly obscure) error message.
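Scanning those scheduler logs can be done in one line; a small self-contained demonstration (the log file name and its contents are made up for illustration):

```shell
# Create a fake stderr log to demonstrate the search
# (in practice, use the .e*/.err files your scheduler returns).
printf 'step 1 ok\nForrtl severe error: allocation failed\n' > job.err

# Case-insensitive search for typical crash keywords
grep -i "error" job.err
```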
If you really want help, please specify all the details of the calculation: where you run 
the job, which system, and so on.


> Thanks a million!
> Dimpy
> D Sharma
> Theory Modelling and design
> Ireland
> ------------------------------------------------------------------------
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum


o ------------------------------------------------ o
| Gabriele Sclauzero, PhD Student                  |
| c/o:   SISSA & CNR-INFM Democritos,              |
|        via Beirut 2-4, 34014 Trieste (Italy)     |
| email: sclauzer at sissa.it                         |
| phone: +39 040 3787 511                          |
| skype: gurlonotturno                             |
o ------------------------------------------------ o
