[Pw_forum] 2Gb file limit in PWSCF
francesco.antoniella at aquila.infn.it
Mon Jan 19 23:43:25 CET 2004
On Mon, 2004-01-19 at 19:36, Konstantin Kudin wrote:
> I tried to run a job on a 32-bit Linux, and once the
> *.wfc file became larger than 2Gb, the job crashed due
> to the file size restrictions. Not good :-)
> How doable would it be to have each k point written
> out in a separate *.wfc file? The parallel code does
> exactly that when it splits things across nodes, so
> possibly most things are in place.
I think you can fool the system by running MPI on a single node anyway.
If you have a "standard" Red Hat distro with LAM-MPI installed,
you can lamboot the MPI daemon with a lamnodes file like this,
so the LAM daemon will start 4 processes on the same node. But beware of
the memory: if you want to split the files you must use -pools, and
that multiplies the raw memory occupation.
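A sketch of the single-node LAM setup described above; the boot-schema file name, input/output names, and the 4-way pool split are my assumptions, not taken from the post:

```shell
# Hypothetical LAM boot schema: 4 CPU slots on the local machine.
cat > lamhosts <<'EOF'
localhost cpu=4
EOF

# Start the LAM daemons on the slots listed in lamhosts.
lamboot -v lamhosts

# Run pw.x with 4 processes split into 4 pools, so each pool writes its
# own, smaller wfc file. The pool flag spelling (-npool vs -npools)
# varies across PWSCF versions; check your release.
mpirun -np 4 pw.x -npool 4 < scf.in > scf.out

# Shut the LAM daemons down when done.
lamhalt
```

With 4 pools each process holds its own copy of the non-distributed data, which is the memory multiplication warned about above.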
P.S.: What are you doing with this huge wfc file? A thousand k points on