[QE-developers] bug in XSpectra

Meng, Fanchen fmeng1 at bnl.gov
Thu Jan 13 02:22:46 CET 2022


Dear QE developers,

I am a postdoctoral researcher working at Brookhaven National Laboratory under the supervision of Dr. Deyu Lu. My current research project involves benchmark calculations of Ti K-edge XANES using the XSpectra code (QE 6.5 and 6.6), compiled on our local cluster with the intel/psxe suite, MVAPICH2, and the Intel MKL library.

I believe I have found a bug in XSpectra that can cause inconsistent results when different k-point parallelization settings are used for xspectra.x. Below is an example based on a supercell of rutile TiO2. The calculation was done in two sequential steps: first, an SCF calculation using only one k point at Gamma (1 1 1 0 0 0); second, an xspectra calculation using a 3x3x3 unshifted mesh. As shown in the figure below, the polarization-averaged spectra differ depending on the choice of k-point parallelization in the second step (nk 1, nk 2, nk 12) with QE 6.6. Note that in QE 6.5 the code crashes when k-point parallelization is used for xspectra.x, because no k-point parallelization was used in the SCF calculation. The effect is best demonstrated using a neutral potential for the absorber (i.e., an independent-particle calculation under the initial-state rule), and is less severe when the calculation is run with the core-hole PAW potential. The input files used for the calculations are attached to this post so that our results can be reproduced.
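For reference, the k-point settings of the two steps look like this (paraphrased from the attached inputs, with everything else omitted; if I recall the XSpectra input format correctly, the mesh is given on the last line of the xspectra input, after the namelists):

    ! step 1, pw.x SCF input: Gamma point only
    K_POINTS automatic
      1 1 1 0 0 0

    ! step 2, xspectra.x input: 3x3x3 unshifted mesh on the last line
    3 3 3 0 0 0

The three QE 6.6 runs differ only in the pool flag passed to xspectra.x, e.g. mpirun -np 12 xspectra.x -nk 12 -in <input file>.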
[Figure: see attached image001.png]

Figure 1. Ti K-edge XANES of rutile TiO2 calculated using the neutral potential with different k-point parallelization schemes.
Clearly, the spectra should not depend on the choice of nk. After examining the XSpectra code, I traced the issue to the Fermi energy. After XSpectra reads the Fermi energy from the SCF calculation, it is reset to the HOMO for a semiconductor via the subroutine get_homo_lumo in PW/src/print_ks_energies.f90. This subroutine, however, only stores Ef = HOMO on the ionode, while the Fermi energies on the other cores are left unchanged: the Fermi energy is never broadcast across the pools. Therefore, running XSpectra with more than one pool in the k-point parallelization causes an inconsistency, because the pools that do not contain the ionode use a different Fermi energy from the pool that does.
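In pseudocode, the net effect is roughly the following (a paraphrase of the behavior described above, not a verbatim excerpt from the sources):

    CALL get_homo_lumo(ehomo, elumo)
    ! ef is updated only where the HOMO is available, i.e. on the ionode;
    ! the ranks in all other pools keep the Fermi energy read from file
    IF ( ionode ) ef = ehomo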

To fix this issue, I have modified the xspectra.f90 source code and inserted the line CALL mp_bcast(ef, ionode_id, world_comm) right after the Fermi energy is redetermined in the subroutine calculate_and_write_homo_lumo_to_stdout(ehomo, elumo). I have also attached the modified xspectra.f90 file, compatible with QE 7.0, to this post. If this issue and the bug fix are confirmed, shall I submit a merge request to commit this change to the QE master branch?
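Concretely, the patched section looks like this (a sketch; the USE statements name the standard QE modules providing ef, ionode_id, mp_bcast, and world_comm):

    USE ener,      ONLY : ef
    USE io_global, ONLY : ionode_id
    USE mp,        ONLY : mp_bcast
    USE mp_world,  ONLY : world_comm
    ...
    CALL calculate_and_write_homo_lumo_to_stdout(ehomo, elumo)
    ! broadcast the redetermined Fermi energy from the ionode,
    ! so that all pools work with the same ef
    CALL mp_bcast(ef, ionode_id, world_comm)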

Another concern is in the file plot_xanes_cross_sections.f90. When gathering the results from the different pools, the subroutine mp_sum(Intensity_coord, inter_pool_comm) is called. I am wondering whether we need an mp_barrier before this call to synchronize all the pools before performing the mp_sum.
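In code, the suggestion would amount to the following (a sketch; whether the barrier is actually needed here is exactly my question):

    USE mp,       ONLY : mp_barrier, mp_sum
    USE mp_pools, ONLY : inter_pool_comm
    ...
    ! synchronize all pools before reducing the spectrum across pools(?)
    CALL mp_barrier(inter_pool_comm)
    CALL mp_sum(Intensity_coord, inter_pool_comm)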

Thanks,
Fanchen

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 60018 bytes
Desc: image001.png
URL: <http://lists.quantum-espresso.org/pipermail/developers/attachments/20220113/9aff551d/attachment-0001.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: xspectra.f90
Type: application/octet-stream
Size: 26011 bytes
Desc: xspectra.f90
URL: <http://lists.quantum-espresso.org/pipermail/developers/attachments/20220113/9aff551d/attachment-0001.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: input_files.zip
Type: application/x-zip-compressed
Size: 1306556 bytes
Desc: input_files.zip
URL: <http://lists.quantum-espresso.org/pipermail/developers/attachments/20220113/9aff551d/attachment-0001.bin>

