[Pw_forum] Segmentation fault

Gabriele Sclauzero sclauzer at sissa.it
Mon Aug 10 12:19:51 CEST 2009


Hi,

leila salimi wrote:
> Hi everybody,
> I use espresso-4.0.5 on an IBM pSeries 575, a clustered SMP (Symmetric Multiprocessing) system.
> If I use calculation='scf' there is no problem and the result matches our test data.
> If we use calculation='relax' we get the right value for the energy after the first loop; the application should then start the second loop, but it stops with the following error:
> 
> ======
>      energy   new            =    -121.3312114461 Ry
> 
>      new trust radius        =        .5000000000 bohr
> ERROR: 0031-250  task 0: Segmentation fault
> ERROR: 0031-250  task 4: Terminated
> ERROR: 0031-250  task 7: Terminated
> ERROR: 0031-250  task 6: Terminated
> ERROR: 0031-250  task 2: Terminated
> ERROR: 0031-250  task 1: Terminated
> ERROR: 0031-250  task 5: Terminated
> ERROR: 0031-250  task 3: Terminated
> 
> We use the following command line:
> 
> poe ~/espresso/bin/pw.x -procs 8 -npool 2 < input.relax  > InAs-relax.out 2>&1
> 
> 
> Do you have any idea about this problem?

Is the error reproducible? If so, you may have filled the physical memory of your node. I 
think that at the beginning of the second scf cycle the program needs to allocate more 
memory to mix old/new variables (wfcs, charge density...). Try to monitor memory usage 
and, if this is the problem, use more computing nodes.
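
For example, something along these lines (just a sketch, assuming you can log into the 
compute node and that the standard AIX tools vmstat and svmon are available there; adapt 
to your local setup):

   # watch free memory on the node every 5 seconds while pw.x is running
   vmstat 5

   # or inspect the memory used by a single pw.x process
   # (replace <pid> with the actual process id, e.g. from "ps -ef | grep pw.x")
   svmon -P <pid>

If memory is indeed the bottleneck, spread the same 8 MPI tasks over more nodes (fewer 
tasks per node), so that each task gets a larger share of the physical memory; the exact 
poe/LoadLeveler settings for that depend on your local configuration.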

regards

GS

> 
> Best Regards,
> Leila Salimi
> Isfahan University of Technology, Isfahan, Iran.
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum

-- 


o ------------------------------------------------ o
| Gabriele Sclauzero, PhD Student                  |
| c/o:   SISSA & CNR-INFM Democritos,              |
|        via Beirut 2-4, 34014 Trieste (Italy)     |
| email: sclauzer at sissa.it                      |
| phone: +39 040 3787 511                          |
| skype: gurlonotturno                             |
o ------------------------------------------------ o


