Hi Paolo,

I noticed yesterday that wf_collect is set to true by default. You have probably already discussed this a lot on your side, but I have a few questions.

Are we confident that wf_collect will not add significant time to large simulations? Is the performance good on both Lustre and GPFS file systems? I don't have much experience with the recently added HDF5 feature. Does the WF collection use parallel collective I/O, or does it collect the WFs on the master rank and write from there, in the old-fashioned way (see the sketch in the P.P.S. below for the pattern I mean)? Is the performance good? Has the bandwidth been measured?

On the machines I use, GPFS has 8 aggregators by default, and parallel collective I/O (PIO) performs better than creating individual files. Lustre does the opposite: it stripes to only 1 OST by default, so PIO writes sequentially and creating individual files becomes faster. Of course you can tune both of them (e.g. with lfs setstripe on Lustre), it is just very tricky.

Does QE still create one file per MPI rank from the beginning? Creating 4k empty files is a bit slow, and they are a pain to 'ls'. When I do DFT+U, the number of files basically doubles or triples (I don't remember exactly).

PS: In the past, QE was not able to read its own collected WFs when the record (written with IOTK) was very large (>100 GB), so not collecting the WFs was the preferred way for me. This should not be a problem with HDF5, since there is one dataset per band and each is much smaller.

Thanks,
Ye
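P.P.S. To make the collective-I/O question concrete, here is a minimal parallel-HDF5 sketch of the pattern I have in mind: every rank writes its slab of a single shared dataset through the MPI-IO driver in one collective call. The file name, dataset name, and sizes are invented for illustration; I am not claiming this is how QE's HDF5 output is actually implemented.

/* Minimal parallel-HDF5 sketch of collective I/O: all ranks write
   slices of ONE shared dataset in ONE file, and MPI-IO aggregates
   the writes.  Names and sizes ("evc.h5", "evc", SLICE) are made up;
   this is just the pattern in question, not QE's real code path.
   Build with: mpicc sketch.c -lhdf5 (HDF5 built --enable-parallel). */
#include <mpi.h>
#include <hdf5.h>

#define SLICE 1024  /* doubles per rank, hypothetical */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double buf[SLICE];
    for (int i = 0; i < SLICE; i++) buf[i] = (double)rank;

    /* Open one shared file with the MPI-IO file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("evc.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One dataset spanning all ranks. */
    hsize_t dims[1] = { (hsize_t)nprocs * SLICE };
    hid_t filespace = H5Screate_simple(1, dims, NULL);
    hid_t dset = H5Dcreate2(file, "evc", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Each rank selects its own contiguous slab... */
    hsize_t start[1] = { (hsize_t)rank * SLICE };
    hsize_t count[1] = { SLICE };
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(1, count, NULL);

    /* ...and all ranks write together in one collective call. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

Swapping H5FD_MPIO_COLLECTIVE for H5FD_MPIO_INDEPENDENT in the same program is an easy way to compare the two modes on a given file system.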
===================
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory