<div dir="ltr"><div><pre>Good day Paolo,<br><br>Thanks for responding. dos.x allows parallelization. To prove my point and as a workaround to this problem which has bothering me for quite some time now, I ran it on the same cluster, but using 5.4.0:<br> (Although I also tried doing the single-thread-single-processor run as inspired by <a href="http://qe-forge.org/pipermail/pw_forum/2015-September/107940.html">http://qe-forge.org/pipermail/pw_forum/2015-September/107940.html</a>).<br><br><br> Program DOS v.5.4.0 starts on 12Nov2017 at 14:22:57 <br><br> This program is part of the open-source Quantum ESPRESSO suite<br> for quantum simulation of materials; please cite<br> "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);<br> URL <a href="http://www.quantum-espresso.org">http://www.quantum-espresso.org</a>", <br> in publications or presentations arising from this work. More details at<br> <a href="http://www.quantum-espresso.org/quote">http://www.quantum-espresso.org/quote</a><br><br> Parallel version (MPI & OpenMP), running on 12 processor cores<br> Number of MPI processes: 4<br> Threads/MPI process: 3<br> R & G space division: proc/nbgrp/npool/nimage = 4<br><br> Info: using nr1, nr2, nr3 values from input<br><br> Info: using nr1, nr2, nr3 values from input<br><br> IMPORTANT: XC functional enforced from input :<br> Exchange-correlation = PBE ( 1 4 3 4 0 0)<br> Any further DFT definition will be discarded<br> Please, verify this is what you really want<br><br> file C.pbe-n-kjpaw_psl.0.1.UPF: wavefunction(s) 2P renormalized<br> <br> Parallelization info<br> --------------------<br> sticks: dense smooth PW G-vecs: dense smooth PW<br> Min 268 108 37 5896 1489 284<br> Max 269 109 38 5899 1509 285<br> Sum 1075 433 151 23589 5985 1139<br> <br><br> Check: negative/imaginary core charge= -0.000004 0.000000<br><br> Gaussian broadening (default values): ngauss,degauss= 0 0.003675<br><br> <br> DOS : 1.05s CPU 0.82s WALL<br><br> <br> This run was terminated on: 14:22:57 12Nov2017 <br><br>=------------------------------------------------------------------------------=<br> JOB DONE.<br>=------------------------------------------------------------------------------=<br><br></pre><pre>But this workaround still does not solve the root cause of the problem. I would like to use the newest version of QE as much as possible, but this problem really gets on my nerves. Until the time it works properly, I might have to stick to older releases.<br></pre><br clear="all"><pre>Date: Fri, 10 Nov 2017 16:14:25 +0100
Date: Fri, 10 Nov 2017 16:14:25 +0100
From: Paolo Giannozzi <p.giannozzi@gmail.com>
Subject: Re: [Pw_forum] dos.x issues
To: PWSCF Forum <pw_forum@pwscf.org>
Message-ID: <CAPMgbCsUWmXuJUybE5m21dCEZ6BwBvqdu5Lz=ea2TN38EZ6xvA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
I would first of all try to run it on a single processor. I don't think
dos.x is parallelized.
Paolo
On Fri, Nov 10, 2017 at 1:47 PM, Wilbert James Futalan <
<a href="mailto:wilbert.james.futalan@gmail.com">wilbert.james.futalan@gmail.com</a>> wrote:
> Hi, I'm having some trouble running dos.x on our cluster. I tried running the very same pw.x (scf and nscf) and dos.x jobs on my own personal computer and they both seem to work just fine. Running on the cluster, however, gives this error:
>
> Parallel version (MPI & OpenMP), running on 12 processor cores
> Number of MPI processes: 4
> Threads/MPI process: 3
> R & G space division: proc/nbgrp/npool/nimage = 4
>
> Info: using nr1, nr2, nr3 values from input
>
> Info: using nr1, nr2, nr3 values from input
> rank 0 in job 1 piv01_49932 caused collective abort of all ranks
> exit status of rank 0: return code 174
>
> What do you think could be wrong? I'm using QE 6.1 on the cluster but 5.4 on my computer.
>
> Thanks.
>
>
>
> --
> Wilbert James C. Futalan
> Research Fellow I
> Laboratory of Electrochemical Engineering
> Department of Chemical Engineering
> University of the Philippines - Diliman
>
> _______________________________________________
> Pw_forum mailing list
> <a href="mailto:Pw_forum@pwscf.org">Pw_forum@pwscf.org</a>
> <a href="http://pwscf.org/mailman/listinfo/pw_forum">http://pwscf.org/mailman/listinfo/pw_forum</a>
>
--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

--
Engr. Wilbert James C. Futalan
Research Fellow I
Laboratory of Electrochemical Engineering
Department of Chemical Engineering
University of the Philippines - Diliman