[Pw_forum] dos.x issues

Wilbert James Futalan wilbert.james.futalan at gmail.com
Mon Nov 13 03:36:01 CET 2017


>
> input that can be run in a reasonable time and an exact description of
> compilation options (serial, parallel MPI, parallel MPI+OpenMP) and kind of
> execution (number of processors etc.)


Good day Paolo,

Okay, so here are the input files for the tests I did on parallel
MPI+OpenMP. Please let me know if you need anything else.

Parallel version (MPI & OpenMP), running on      12 processor cores
     Number of MPI processes:                 4
     Threads/MPI process:                     3
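For reference, a 4 MPI x 3 OpenMP run of this kind is typically launched along these lines (the launcher name and binary paths below are assumptions, not taken from the report; adjust to your environment):

```shell
# Hypothetical launch commands for the runs described above:
# 4 MPI processes x 3 OpenMP threads per process = 12 cores.
export OMP_NUM_THREADS=3
mpirun -np 4 pw.x  -in scf.in  > scf.out    # self-consistent run
mpirun -np 4 pw.x  -in nscf.in > nscf.out   # non-SCF run on the dense k-grid
mpirun -np 4 dos.x -in dos.in  > dos.out    # density of states
```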


*scf.in*

&CONTROL
  calculation='scf',
  outdir='silicon',
  prefix='calc',
  pseudo_dir='.',
  verbosity='low',
  tstress=.false.,
  tprnfor=.false.,
/
&SYSTEM
  ibrav=2,
  celldm(1)=10.2623466921d0,
  nat=2,
  ntyp=1,
  ecutwfc=30.0d0,
  ecutrho=300.0d0,
  nbnd=10,
  input_dft='PBE',
/
&ELECTRONS
  diagonalization='david',
  conv_thr=1d-08,
  mixing_mode='plain',
  mixing_beta=0.700d0,
/

ATOMIC_SPECIES
  Si 28.085500d0 Si.pbe-n-rrkjus_psl.0.1.UPF

ATOMIC_POSITIONS {alat}
  Si   0.0000000000d0   0.0000000000d0   0.0000000000d0
  Si   0.2500000000d0   0.2500000000d0   0.2500000000d0

K_POINTS {automatic}
  9 9 9 0 0 0

*nscf.in*

&CONTROL
  calculation='nscf',
  outdir='silicon',
  prefix='calc',
  pseudo_dir='.',
  verbosity='low',
  tstress=.false.,
  tprnfor=.false.,
/
&SYSTEM
  ibrav=2,
  celldm(1)=10.2623466921d0,
  nat=2,
  ntyp=1,
  ecutwfc=30.0d0,
  ecutrho=300.0d0,
  nbnd=10,
  input_dft='PBE',
  occupations='tetrahedra',
/
&ELECTRONS
  diagonalization='david',
  conv_thr=1d-08,
  mixing_mode='plain',
  mixing_beta=0.700d0,
/

ATOMIC_SPECIES
  Si 28.085500d0 Si.pbe-n-rrkjus_psl.0.1.UPF

ATOMIC_POSITIONS {alat}
  Si   0.0000000000d0   0.0000000000d0   0.0000000000d0
  Si   0.2500000000d0   0.2500000000d0   0.2500000000d0

K_POINTS {automatic}
  32 32 32 0 0 0

*dos.in*

&DOS
  outdir='silicon',
  prefix='calc',
  Emin=-10,
  Emax=20,
  DeltaE=0.05,
  fildos='silicon.dos',
/

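Once dos.x finishes, silicon.dos should be a plain-text file (assuming the usual dos.x layout: a '#' header line, then columns E in eV, DOS, integrated DOS), so the curve can be pulled out for plotting with something like:

```shell
# Sketch: extract the (E, DOS) columns from silicon.dos for plotting.
# Assumes dos.x's usual plain-text layout: '#' header, then E, DOS, IntDOS.
grep -v '^#' silicon.dos | awk 'NF >= 3 {print $1, $2}' > dos.dat
```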
Best regards,
James


2017-11-13 6:24 GMT+09:00 Paolo Giannozzi <p.giannozzi at gmail.com>:

>
> On Sun, Nov 12, 2017 at 6:41 AM, Wilbert James Futalan <
> wilbert.james.futalan at gmail.com> wrote:
>
>> this problem really gets on my nerves
>
> reports of problems without needed information get on MY nerves. Please
> provide an input that can be run in a reasonable time and an exact
> description of compilation options (serial, parallel MPI, parallel
> MPI+OpenMP) and kind of execution (number of processors etc.) that lead to
> such problem
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
>


-- 
Engr. Wilbert James C. Futalan
Research Fellow I
Laboratory of Electrochemical Engineering
Department of Chemical Engineering
University of the Philippines - Diliman

