[Pw_forum] EXX files I/O

Tyler Grassman tgrassma at ucsd.edu
Mon Aug 20 04:10:29 CEST 2007


Thank you very much for the response.  I was only able to get everything to
work right by putting iunexx and exx_nwordwfc into Modules/io_files.f90 and
adding/removing the appropriate USE and INTEGER declarations in exx.f90 and
openfil.f90 (as well as removing "iunexx = find_free_unit()" and the diropn
call from exx.f90).  Results-wise, it doesn't do much for the CPU time (as
expected), but it brings the wall times down to nearly the CPU times, with
the biggest effect at smaller processor counts (where the .exx# files are
quite large).  I'm sure the effect will be even bigger for larger systems.
So, thanks again!
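
In case it's useful to anyone else, here is roughly what my version of the
change looks like.  This is only a sketch against the source tree I'm using;
the exact USE lines needed (e.g. to get nrxxs and exst in scope in
openfil.f90) may differ in yours.

In Modules/io_files.f90, next to the other unit variables:

     INTEGER :: iunexx        ! unit for the prefix.exx# files
     INTEGER :: exx_nwordwfc  ! record length of those files

In PW/openfil.f90, alongside the other file openings:

     USE io_files, ONLY : iunexx, exx_nwordwfc

#ifdef EXX
     ! open prefix.exx# through the same diropn that handles the wfc##
     ! files, so the exx files follow them into wfcdir
     exx_nwordwfc = 2*nrxxs
     iunexx = find_free_unit()
     call diropn( iunexx, 'exx', exx_nwordwfc, exst )
#endif

In exx.f90, the local INTEGER declarations, the find_free_unit() assignment,
and the diropn call come out, replaced by the same USE line.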

Regards,

Tyler Grassman
Materials Science and Engineering
University of California, San Diego

>Thank you for noticing that... it will be changed.
>A simple temporary fix is to move the opening of the exx files into
>openfil.f90:
>
>#ifdef EXX
>     ! two words (real, imaginary) per smooth-grid point
>     exx_nwordwfc=2*nrxxs
>     iunexx = find_free_unit()
>     call diropn(iunexx,'exx', exx_nwordwfc, exst)
>#endif
>
>In this way they will be opened in the same directory as the wfc## files.
>Thank you,
>
>Stefano de Gironcoli,
>SISSA and DEMOCRITOS
>
>On Sat, 18 Aug 2007, Tyler Grassman wrote:
>
>> Hello.  I've been working with the EXX implementation in PWscf for a
>> little while now (just testing on bulk Si for the time being, getting
>> to know the code, as I'm new to PWscf).  I'm working on figuring out
>> the optimal parallelization and disk I/O parameters.  My cluster uses
>> an NFS file system to share the /home directory (I know, everyone says
>> not to do that with PWscf, but I don't see that I have much other
>> choice).  However, the compute nodes have large scratch disks, so
>> setting the wfcdir tag to use the local node scratch disks helps
>> reduce the NFS disk I/O.  I find, though, that the prefix.exx# files
>> are still being written to the outdir, and they tend to be pretty big
>> (each file is tens to hundreds of MB, depending on the number of procs
>> the job is run on, even for this small bulk system).  It looks like
>> these files get rewritten on every hybrid refinement loop (i.e. "NOW
>> GO BACK TO REFINE HYBRID CALCULATION").  I'm guessing these files will
>> get considerably larger as the system gets larger, too, in which case
>> this I/O will really end up costing a lot of wall time (and I plan on
>> trying some considerably larger systems in the near future), yes?  I
>> looked in the input.f90 source file and didn't see anything about it,
>> but I was wondering whether there is any way (or a plan to implement
>> such a thing, if possible) to treat these files the same way (some
>> equivalent to wfcdir)?  Or if not, does anyone have any suggestions?
>>
>> Thanks,
>>
>> Tyler Grassman
>> Materials Science and Engineering
>> University of California, San Diego
>>
