<div dir="ltr">Sorry for the confusion. Unfortunately AVOGADRO has no option to build the input for QE making a first attempt difficult. I am ORCA user and I suspect a bug there with my molecules ("bond" breaking while OPT). Therefore I wanted to try OPT+FREQ on another code.<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Sep 3, 2020 at 5:55 PM Giuseppe Mattioli <<a href="mailto:giuseppe.mattioli@ism.cnr.it">giuseppe.mattioli@ism.cnr.it</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
It looks like you're using the wrong code... :-D

Your input does not resemble a QE input at all.
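What you posted is a Q-Chem-style $rem/$molecule input. pw.x instead reads Fortran namelists (&CONTROL, &SYSTEM, &ELECTRONS, &IONS, ...) followed by cards, and the error you got simply means it could not find &CONTROL at the top of the file. As a rough sketch only (the pseudopotential file names, box size, cutoffs and thresholds below are placeholders to be adapted, and the coordinate list is elided as in your post), a molecular 'relax' input looks more or less like this:

&CONTROL
   calculation = 'relax'
   prefix      = '9-opt'
   outdir      = './tmp/'
   pseudo_dir  = './pseudo/'
/
&SYSTEM
   ibrav      = 1
   celldm(1)  = 30.0        ! cubic box (Bohr), large enough to isolate the molecule
   nat        = ...         ! total number of atoms in your $molecule block
   ntyp       = ...         ! number of distinct atomic species
   tot_charge = 1.0         ! the +1 charge from your $molecule block
   ecutwfc    = 40.0        ! placeholder wavefunction cutoff (Ry)
   ecutrho    = 320.0       ! placeholder charge-density cutoff (Ry)
/
&ELECTRONS
   conv_thr = 1.0d-8
/
&IONS
/
ATOMIC_SPECIES
 C  12.011  C.UPF
 H   1.008  H.UPF
ATOMIC_POSITIONS angstrom
 C  1.90540  0.47370  -0.07550
 ....
 H  2.25410 -1.41240  -0.86170
K_POINTS gamma

The mpirun line in your job script can stay as it is; pw.x does not care about the .qcin extension, only that the file starts with the &CONTROL namelist.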
HTH
Giuseppe

Quoting Francesco Pietra <chiendarret@gmail.com>:
> New to QE, I am trying a geometry optimization at a low level.
>
> INPUT
> $rem
> JOBTYPE Opt
> EXCHANGE M062X
> BASIS 3-21G
> GUI=2
> $end
>
> $comment
> Title
> $end
>
> $molecule
> 1 1
> C 1.90540 0.47370 -0.07550
> ....
> ............
> H 2.25410 -1.41240 -0.86170
> $end
>
> JOB (SLURM, trying to run on all 36 cores of a single node)
> #!/bin/bash
> #SBATCH --time=00:30:00 # Walltime in hh:mm:ss
> #SBATCH --nodes=1 # Number of nodes
> #SBATCH --ntasks-per-node=36 # Number of MPI ranks per node
> #SBATCH --cpus-per-task=1 # Number of OpenMP threads per MPI process/rank
> #SBATCH --mem=118000 # Per-node memory request (MB)
> #SBATCH --account=xxxx
> #SBATCH --job-name=9-opt
> #SBATCH --output 9-opt.out
> #SBATCH --error 9-opt.err
> #SBATCH --partition=gll_usr_prod
>
> module purge
> module load profile/phys
> module load autoload qe/6.3
>
> export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> export MKL_NUM_THREADS=${OMP_NUM_THREADS}
>
> mpirun pw.x -npool 1 -input 9-opt.qcin > 9-opt.log
>
>
> ERROR
> ......... Parallel version (MPI & OpenMP), running on 36 processor cores
> Number of MPI processes: 36
> Threads/MPI process: 1
>
> MPI processes distributed on 1 nodes
> R & G space division: proc/nbgrp/npool/nimage = 36
> Reading input from 9-opt.qcin
>
> Error in routine read_namelists (2):
> could not find namelist &control
>
>
> Thanks for advice

GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel +39 06 90672342 - Fax +39 06 90672316
E-mail: <giuseppe.mattioli@ism.cnr.it>

_______________________________________________
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users