<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br clear="all"><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><span style="font-size:12.8px">Dear Prof. Paolo,</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">I am seeing the same error, on the same cluster that Mr. A. Kar is using, with QE 6.3 and "composer_xe_2015".</span></div><div><span style="font-size:12.8px">I am invoking "mpirun -np 32 $QE/pw.x -npool 2 .... *in ... *out" for a vc-relax run.<br></span></div><div><span style="font-size:12.8px">For me </span>it happens in the final scf, after the message "End final coordinates".</div><div>It ran fine up to scf cycle 53.</div><div>It happens only in some cases. <br></div><div><br></div><div>The following looks strange to me (grep scf *out):<br></div><div><br></div><div> A final scf calculation at the relaxed structure.<br> estimated scf accuracy < 0.35117344 Ry<br> estimated scf accuracy < 32.50696577 Ry<br> estimated scf accuracy < 0.15737732 Ry<br> estimated scf accuracy < 0.08941433 Ry<br> estimated scf accuracy < 0.04130004 Ry<br> estimated scf accuracy < 0.03367160 Ry<br> estimated scf accuracy < 0.01518230 Ry<br> estimated scf accuracy < 0.01241863 Ry<br> estimated scf accuracy < 0.00237884 Ry<br> estimated scf accuracy < 0.02059544 Ry<br> estimated scf accuracy < 0.01326459 Ry<br> estimated scf accuracy < 0.02823102 Ry<br> estimated scf accuracy < 0.14757734 Ry<br> estimated scf accuracy < 0.05952341 Ry<br> estimated scf accuracy < 0.05929330 Ry<br> estimated scf accuracy < 0.00738492 Ry<br> estimated scf accuracy < 0.00555608 Ry<br> estimated scf accuracy < 0.00535798 Ry<br> estimated scf accuracy < 0.00688829 Ry<br> estimated scf accuracy < 0.00169491 Ry<br> estimated scf accuracy < 0.00067948 Ry<br> estimated scf accuracy < 0.06297539 Ry<br> estimated scf accuracy < 0.05981938 Ry<br> estimated scf accuracy < 
0.05814955 Ry<br> estimated scf accuracy < 0.06202781 Ry<br> estimated scf accuracy < 0.06178400 Ry<br> estimated scf accuracy < 0.05837077 Ry<br> estimated scf accuracy < 0.07705482 Ry<br> estimated scf accuracy < 0.03734422 Ry<br> estimated scf accuracy < 0.01355381 Ry<br> estimated scf accuracy < 0.02777065 Ry<br> estimated scf accuracy < 0.02632385 Ry<br> estimated scf accuracy < 0.02607268 Ry<br> estimated scf accuracy < 0.01001318 Ry<br> estimated scf accuracy < 0.00563499 Ry<br> estimated scf accuracy < 0.00005798 Ry<br> estimated scf accuracy < 0.00066655 Ry<br> estimated scf accuracy < 0.00000103 Ry<br> estimated scf accuracy < 0.00000082 Ry<br> estimated scf accuracy < 0.00000002 Ry<br> estimated scf accuracy < 0.00000001 Ry<br> estimated scf accuracy < 6.0E-11 Ry<br><span style="color:rgb(0,0,255)"> estimated scf accuracy < 0.00007832 Ry<br> estimated scf accuracy < 0.00007830 Ry<br> estimated scf accuracy < 0.00007712 Ry<br> estimated scf accuracy < 0.00007558 Ry<br> estimated scf accuracy < 0.00007271 Ry<br> estimated scf accuracy < 0.00006239 Ry<br> estimated scf accuracy < 0.00004313 Ry<br> estimated scf accuracy < 0.00002601 Ry</span><br><br></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"></span></div><div><span style="font-size:12.8px"> I still have doubts about my PPs. 
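For reference, the estimates above can be extracted and numbered with a one-liner like the following (the output file name "vc-relax.out" is only illustrative), which makes the oscillation easier to see:

```shell
# Sketch: number the "estimated scf accuracy" lines from a pw.x output
# file so oscillating (non-decreasing) estimates stand out.
# "vc-relax.out" is a placeholder for the actual *out file name.
grep "estimated scf accuracy" vc-relax.out | awk '{print NR, $(NF-1)}'
```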
If this is due to something else, please let me know what additional information I can supply to reproduce the error message.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Kind regards</span></div><div><span style="font-size:12.8px">Bhamu<br></span></div><div dir="ltr"><span style="font-size:12.8px"><br></span></div>CSIR-NCL, Pune</div><div>India<br></div></div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Oct 22, 2018 at 11:19 AM Paolo Giannozzi <<a href="mailto:p.giannozzi@gmail.com">p.giannozzi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">QE version?<br></div><br><div class="gmail_quote"><div dir="ltr">On Sat, Oct 20, 2018 at 9:01 AM arini kar <<a href="mailto:arini.kar@gmail.com" target="_blank">arini.kar@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Dear Quantum Espresso users,</div><div><br></div><div>I have been trying to relax a 2x2x1 supercell of hematite doped with Ge and an oxygen vacancy. 
However, after a few electronic iterations, I received the following error:</div><div><br></div><div>Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>PMPI_Bcast(2112)........: MPI_Bcast(buf=0x11806c00, count=7500, MPI_DOUBLE_PRECISION, root=0, comm=0x84000005) failed<br>MPIR_Bcast_impl(1670)...: <br>I_MPIR_Bcast_intra(1887): Failure during collective<br>MPIR_Bcast_intra(1524)..: Failure during collective<br>Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>PMPI_Bcast(2112)........: MPI_Bcast(buf=0x56fced0, count=7500, MPI_DOUBLE_PRECISION, root=0, comm=0x84000005) failed<br>MPIR_Bcast_impl(1670)...: <br>I_MPIR_Bcast_intra(1887): Failure during collective<br>MPIR_Bcast_intra(1524)..: Failure during collective<br>Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>PMPI_Bcast(2112)........: MPI_Bcast(buf=0x1080d330, count=7500, MPI_DOUBLE_PRECISION, root=0, comm=0x84000005) failed<br>MPIR_Bcast_impl(1670)...: <br>I_MPIR_Bcast_intra(1887): Failure during collective<br>MPIR_Bcast_intra(1524)..: Failure during collective<br>Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>PMPI_Bcast(2112)........: MPI_Bcast(buf=0x10c025c0, count=7500, MPI_DOUBLE_PRECISION, root=0, comm=0x84000005) failed<br>MPIR_Bcast_impl(1670)...: <br>I_MPIR_Bcast_intra(1887): Failure during collective<br>MPIR_Bcast_intra(1524)..: Failure during collective<br>Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>PMPI_Bcast(2112)........: MPI_Bcast(buf=0x112e96c0, count=7500, MPI_DOUBLE_PRECISION, root=0, comm=0x84000005) failed<br>MPIR_Bcast_impl(1670)...: <br>I_MPIR_Bcast_intra(1887): Failure during collective<br>MPIR_Bcast_intra(1524)..: Failure during collective<br>[16:ycn213.en.yuva.param] unexpected disconnect completion event from [7:ycn217.en.yuva.param]<br>Assertion failed in file ../../dapl_conn_rc.c at line 1128: 0<br>internal ABORT - process 16</div><div><br></div><div>The input file for the same is attached below. 
Since I am new to Quantum ESPRESSO, I have not been able to find a solution to this problem. I would be grateful for any help with possible corrections.</div><div><br></div><div>Regards</div><div>Arini Kar</div><div>M.Sc.-Ph.D.</div><div>IIT Bombay</div><div>India<br></div><div><br></div></div></div>
_______________________________________________<br>
users mailing list<br>
<a href="mailto:users@lists.quantum-espresso.org" target="_blank">users@lists.quantum-espresso.org</a><br>
<a href="https://lists.quantum-espresso.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.quantum-espresso.org/mailman/listinfo/users</a></blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail-m_-9092074357802596639gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,<br>Univ. Udine, via delle Scienze 208, 33100 Udine, Italy<br>Phone +39-0432-558216, fax +39-0432-558222<br><br></div></div></div></div></div>
</blockquote></div></div></div></div></div></div>