<br><br>
<div class="gmail_quote">On Tue, Apr 5, 2011 at 5:35 PM, <span dir="ltr"><<a href="mailto:pw_forum-request@pwscf.org">pw_forum-request@pwscf.org</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Today's Topics:<br><br> 1. Problems with electrostatic corrections (Makov-Payne or<br>
density counter charge) for aperiodic systems in 4.2 version<br> (Eduardo Ariel Menendez Proupin)<br> 2. Re: Pw_forum Digest, Vol 46, Issue 12 (Stefano de Gironcoli)<br> 3. Re: Problems with electrostatic corrections (Makov-Payne or<br>
density counter charge) for aperiodic systems in 4.2 version<br> (Paolo Giannozzi)<br> 4. Re: Problems with electrostatic corrections (Makov-Payne or<br> density counter charge) for aperiodic systems in 4.2 version<br>
(Oliviero Andreussi)<br> 5. Nonlinear scaling with pool parallelization (Markus Meinert)<br> 6. Re: Nonlinear scaling with pool parallelization (Paolo Giannozzi)<br> 7. Nonlinear scaling with pool parallelization (Markus Meinert)<br>
8. Re: Nonlinear scaling with pool parallelization (Paolo Giannozzi)<br> 9. Generating ultra soft pseudopotential (Tram Bui)<br> 10. Re: Generating ultra soft pseudopotential (Duy Le)<br><br><br>----------------------------------------------------------------------<br>
<br>Message: 1<br>Date: Tue, 5 Apr 2011 10:44:10 -0300<br>From: Eduardo Ariel Menendez Proupin <<a href="mailto:eariel99@gmail.com">eariel99@gmail.com</a>><br>Subject: [Pw_forum] Problems with electrostatic corrections<br>
(Makov-Payne or density counter charge) for aperiodic systems in 4.2<br> version<br>To: <a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a><br>Message-ID: <<a href="mailto:BANLkTinTEAmVsonPUjmefjCuQ3xfi0hZ3w@mail.gmail.com">BANLkTinTEAmVsonPUjmefjCuQ3xfi0hZ3w@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br><br>2) Why is the DCC correction disabled in version 4.2? Indeed, when setting<br>assume_isolated = 'dcc' and the &EE input parameters to their default values, I<br>obtain the error message "DCC correction is disabled". Looking in the<br>PW/input.f90 subroutine, one finds that the DCC correction is disabled by an<br>immediate call to the errore subroutine.<br><br>I guess some bug was discovered and has not been fixed yet.<br>
I take the occasion to ask if the DCC correction has been or will be<br>available for slab calculations.<br>--<br><br><br>Eduardo Menendez<br>Departamento de Fisica<br>Facultad de Ciencias<br>Universidad de Chile<br>Phone: (56)(2)9787439<br>
URL: <a href="http://fisica.ciencias.uchile.cl/~emenendez" target="_blank">http://fisica.ciencias.uchile.cl/~emenendez</a><br>
<br>------------------------------<br><br>Message: 2<br>Date: Tue, 05 Apr 2011 15:46:12 +0200<br>From: Stefano de Gironcoli <<a href="mailto:degironc@sissa.it">degironc@sissa.it</a>><br>Subject: Re: [Pw_forum] Pw_forum Digest, Vol 46, Issue 12<br>
To: <a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a><br>Message-ID: <<a href="mailto:20110405154612.xtjk8ix28ksw4cww@webmail.sissa.it">20110405154612.xtjk8ix28ksw4cww@webmail.sissa.it</a>><br>Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes";<br>
format="flowed"<br><br>Dear bahaareh tavakoli nejad,<br><br> please make an effort to<br> 1) provide a meaningful subject line<br> 2) not reply including a full Digest email full of junk<br> 3) provide your affiliation<br>
<br> In order to maximize your chance of getting an answer, you should<br>make it understandable and as specific as possible.<br><br> HTH<br><br> stefano<br><br>Quoting bahaareh tavakoli nejad <<a href="mailto:bahaartv@gmail.com">bahaartv@gmail.com</a>>:<br>
.... AFTER TONS OF JUNK<br>><br>> Dear Gabriele,<br>> OK! At this stage my question is about pseudopotentials in general, not about<br>> a specific software.<br>> I guessed that maybe someone has information about the various parameters in<br>> pseudopotentials and knows what the applications of their different kinds are, or<br>> could recommend me a reference. My question wasn't only about the Teter<br>> pseudopotential but about pseudopotentials in general; is that irrational?<br>
><br>
<br><br><br><br>------------------------------<br><br>Message: 3<br>Date: Tue, 05 Apr 2011 16:35:15 +0200<br>From: Paolo Giannozzi <<a href="mailto:giannozz@democritos.it">giannozz@democritos.it</a>><br>Subject: Re: [Pw_forum] Problems with electrostatic corrections<br>
(Makov-Payne or density counter charge) for aperiodic systems in 4.2<br> version<br>To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>Message-ID: <<a href="mailto:1302014115.6852.3.camel@fe12lx.fisica.uniud.it">1302014115.6852.3.camel@fe12lx.fisica.uniud.it</a>><br>
Content-Type: text/plain<br><br>On Tue, 2011-04-05 at 10:44 -0300, Eduardo Ariel Menendez Proupin wrote:<br><br>> I guess some bug was discovered ans is not fixed yet.<br><br>"bit-rotting" : if you leave a piece of code unattended, it will cease<br>
to work for no apparent reason.<br><br>P.<br>--<br>Paolo Giannozzi, IOM-Democritos and University of Udine, Italy<br><br><br><br><br>------------------------------<br><br>Message: 4<br>Date: Tue, 05 Apr 2011 15:37:59 +0100<br>
From: Oliviero Andreussi <<a href="mailto:oliviero@MIT.EDU">oliviero@MIT.EDU</a>><br>Subject: Re: [Pw_forum] Problems with electrostatic corrections<br> (Makov-Payne or density counter charge) for aperiodic systems in 4.2<br>
version<br>To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>Message-ID: <<a href="mailto:4D9B2947.6020609@mit.edu">4D9B2947.6020609@mit.edu</a>><br>Content-Type: text/plain; charset=ISO-8859-1; format=flowed<br>
<br><br><br>1) Roughly speaking, eq. 15 of the original Makov-Payne paper expresses<br>the energy of the periodic system (E) in terms of the energy of the isolated<br>system (E0) plus the energy of interaction between the periodic replicas<br>(E11 and E12). Leaving aside the fact that the E12 term has the wrong sign<br>in the original equation (as is stressed in a comment in the code),<br>to obtain the energy of the isolated system you want to subtract E11 and<br>E12 from your periodic total energy, as is done in the code, i.e.<br>E0 = E - E11 - E12.<br><br>2) The DCC correction was neither buggy nor bit-rotting and, as far as I know,<br>it was also working for 2D cases. It was removed from the code in order to clean it up<br>and make pw's compilation easier: this is because DCC required an external<br>multigrid solver and a somewhat cumbersome implementation. One of the<br>developers of the original DCC code (Ismaila Dabo) is now working on a<br>different version of the DCC that does not require multigrid solvers.<br><br>Best,<br><br>Oliviero Andreussi<br><br>Postdoctoral Associate MIT-DMSE,<br>77 Massachusetts Ave, Office 13-4084,<br>Cambridge, MA, 02139 USA<br><br>
Visiting Researcher, Oxford University,<br>Departments of Materials, Rex Richards Building,<br>Park Road, Oxford, Oxon, OX1 3PH, UK<br><br>-------- Original Message --------<br>> Subject: [Pw_forum] Problems with electrostatic corrections<br>
> (Makov-Payne or density counter charge) for aperiodic systems in 4.2 version<br>> Date: Tue, 5 Apr 2011 13:33:22 +0100<br>> From: ilyes hamdi<<a href="mailto:iiysidi@gmail.com">iiysidi@gmail.com</a>><br>
> Reply-To: PWSCF Forum<<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>> To: <a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a><<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>
><br>><br>><br>> Dear espresso users,<br>><br>> 1) I'm a bit confused with the implementation of the Makov-Payne<br>> electrostatic corrections in PW/makov-payne.f90 in Version 4.2:<br>> - In their original paper, Makov and Payne define the first order<br>
> correction E11 = - \alpha * q^2/(2\epsilon* L). This quantity should be<br>> negative for cubic systems; however,<br>> the write statement in the PW/makov-payne.f90 subroutine defines the<br>> correction to be -E11 (leading to a positive value).<br>> - The second-order correction E12 is proportional to q Q/L^3 and is also<br>> defined as -E12 in the PW/makov-payne.f90 subroutine.<br>> - The total energy should be Etot = E0 + E11 + E12 = E0 - \alpha *<br>> q^2/(2\epsilon* L) + 2 \pi * q * Q / ( 3 \epsilon L^3); however, it is<br>> implemented as E0-E11-E12.<br>><br>> 2) Why is the DCC correction disabled in version 4.2? Indeed, when<br>> setting assume_isolated = 'dcc' and the &EE input parameters to their<br>> default values, I obtain the error message "DCC correction is disabled".<br>> Looking in the PW/input.f90 subroutine, one finds that the DCC<br>> correction is disabled by an immediate call to the errore subroutine.<br>><br>> Could anyone please verify whether I'm correct, or whether I missed something?<br>><br>> Best regards<br>><br>><br>><br><br><br><br>------------------------------<br><br>Message: 5<br>Date: Tue, 05 Apr 2011 18:06:22 +0200<br>
From: Markus Meinert <<a href="mailto:meinert@physik.uni-bielefeld.de">meinert@physik.uni-bielefeld.de</a>><br>Subject: [Pw_forum] Nonlinear scaling with pool parallelization<br>To: <a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a><br>
Message-ID:<br> <<a href="mailto:13591_1302019580_ZZh0n7G2X1ZlY.00_4D9B3DFE.9060400@physik.uni-bielefeld.de">13591_1302019580_ZZh0n7G2X1ZlY.00_4D9B3DFE.9060400@physik.uni-bielefeld.de</a>><br><br>Content-Type: text/plain; charset=ISO-8859-15; format=flowed<br>
<br>Dear QE users and developers,<br><br>For some larger calculations I have just obtained a small Beowulf-type<br>cluster, consisting of three machines with hexacore i7 CPUs, connected<br>via Gigabit Ethernet. It runs 64-bit Ubuntu, QE 4.2.1 is compiled with<br>
GCC 4.4, and I have compiled OpenMPI 1.4.3. The code is linked against<br>the pre-compiled libatlas-corei7sse3 from Ubuntu.<br><br>I want to perform calculations of quite large supercells for interface<br>and surface studies of magnetic materials. Now, I'm testing the<br>
parallelization schemes. My understanding is that pool parallelization<br>should scale approximately linearly with nodes. Indeed, the calculation<br>of a bulk material scales nearly linearly with the number of nodes when<br>
I assign each node an individual pool. In contrast, if I do not assign<br>pools, the calculations slow down extremely because of the communication<br>overhead.<br><br>Now we come to the strange part: the slab calculation. I did some quick<br>
timing tests which I would like to share with you. The times given in<br>seconds are just the numbers the code prints while it runs (I checked<br>them, however, with htop).<br><br>WITH pools:<br><pre>np   npool   setup   first iteration
 6     1     108s        250s
12     2      78s        180s
18     3      69s        152s</pre><br>WITHOUT pools:<br><pre>np   setup   first iteration
 6   108s        250s
12    75s        186s
18    59s        152s</pre><br>
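To put the two tables above on the same footing, the first-iteration times can be converted into speedup factors relative to the 6-process run. A minimal Python sketch (an illustration using only the numbers quoted above, not pw.x output):<br><pre>
# Speedup of the first SCF iteration relative to the single-node (np = 6) run,
# using the times quoted in the two tables above.
times_with_pools = {6: 250, 12: 180, 18: 152}     # np -> seconds (npool = np/6)
times_without_pools = {6: 250, 12: 186, 18: 152}  # np -> seconds (no pools)

for label, times in (("with pools", times_with_pools),
                     ("without pools", times_without_pools)):
    base = times[6]
    for nproc in sorted(times):
        print(f"{label:13s} np={nproc:2d}  speedup = {base / times[nproc]:.2f}")

# with pools: np=12 gives 1.39x, np=18 gives 1.64x -- well below the ideal 2x and 3x.
</pre>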
<br>Without pools I have heavy load on the ethernet, but the calculations<br>are about as fast as the ones with pools. With pools, there's almost no<br>communication, apart from a few bursts. More importantly, the scaling of<br>
the calculation with pools is far from linear. With three machines, I<br>get less than a factor of two in speed. The gain when going from two to<br>three machines is just of the order of 25%.<br><br>My program call is:<br>
<pre>mpirun -np 18 -hostfile ~/.mpi_hostfile ~/espresso/espresso-4.2.1/bin/pw.x -npool 3 -in pw.in | tee pw.out</pre>pw.x picks up these options and reports:<br><pre> Parallel version (MPI), running on 18 processors
 K-points division: npool = 3
 R & G space division: proc/pool = 6</pre>Can you explain this behavior? Is there anything I can tune to get<br>better scaling? Is there a known bottleneck for a setup like this? Can<br>this be associated with the choice of k-point meshes? For the bulk I have a<br>shifted 8x8x8 mesh; for the slab I have an 8 8 1 1 1 0 setting.<br><br>If you would like the input files to reproduce the problem,<br>please tell me.<br>
<br>With kind regards,<br>Markus Meinert<br><br><br>--<br>Dipl.-Phys. Markus Meinert<br><br>Thin Films and Physics of Nanostructures<br>Department of Physics<br>Bielefeld University<br>Universitätsstraße 25<br>33615 Bielefeld<br>
<br>Room D2-118<br>e-mail: <a href="mailto:meinert@physik.uni-bielefeld.de">meinert@physik.uni-bielefeld.de</a><br>Phone: +49 521 106 2661<br><br><br>------------------------------<br><br>Message: 6<br>Date: Tue, 05 Apr 2011 18:54:34 +0200<br>
From: Paolo Giannozzi <<a href="mailto:giannozz@democritos.it">giannozz@democritos.it</a>><br>Subject: Re: [Pw_forum] Nonlinear scaling with pool parallelization<br>To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>
Message-ID: <<a href="mailto:1302022474.6852.36.camel@fe12lx.fisica.uniud.it">1302022474.6852.36.camel@fe12lx.fisica.uniud.it</a>><br>Content-Type: text/plain<br><br>On Tue, 2011-04-05 at 18:06 +0200, Markus Meinert wrote:<br>
<br>> For bulk I have a shifted 8x8x8 mesh, for the slab I have a 8 8 1 1 1 0 setting.<br><br>how many k-points? P.<br>--<br> Paolo Giannozzi, Dept. Chemistry&Physics&Environment,<br> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy<br>
Phone +39-0432-558216, fax +39-0432-558222<br><br><br><br>------------------------------<br><br>Message: 7<br>Date: Tue, 05 Apr 2011 19:54:35 +0200<br>From: Markus Meinert <<a href="mailto:meinert@physik.uni-bielefeld.de">meinert@physik.uni-bielefeld.de</a>><br>
Subject: [Pw_forum] Nonlinear scaling with pool parallelization<br>To: <a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a><br>Message-ID:<br> <<a href="mailto:17556_1302026072_ZZh0n0H4l0ks_.00_4D9B575B.4060208@physik.uni-bielefeld.de">17556_1302026072_ZZh0n0H4l0ks_.00_4D9B575B.4060208@physik.uni-bielefeld.de</a>><br>
<br>Content-Type: text/plain; charset=ISO-8859-15; format=flowed<br><br>Sorry, I should go into a little more detail here. I used an<br>_unshifted_ k-mesh. When I use an 8x8x8 mesh, yielding 58 points, I get a<br>speedup of about 2 on three machines. With a 12x12x12 mesh with 144<br>points the speedup becomes better (a factor of 2.6 for CPU time and 2.3 for<br>wall time). I tend to think that is because of communication: in the former<br>case, an iteration takes less than a second. With more k points, the<br>speedup converges slowly towards 3.<br><br>The slab has 20 k points. But since a single iteration takes about 100<br>seconds, I do not see where the time is being spent, when the k points<br>are independent.<br><br>Regards,<br>
Markus<br><br>--<br>Dipl.-Phys. Markus Meinert<br><br>Thin Films and Physics of Nanostructures<br>Department of Physics<br>Bielefeld University<br>Universitätsstraße 25<br>33615 Bielefeld<br><br>Room D2-118<br>e-mail: <a href="mailto:meinert@physik.uni-bielefeld.de">meinert@physik.uni-bielefeld.de</a><br>
Phone: +49 521 106 2661<br><br><br>------------------------------<br><br>Message: 8<br>Date: Tue, 5 Apr 2011 21:54:02 +0200<br>From: Paolo Giannozzi <<a href="mailto:giannozz@democritos.it">giannozz@democritos.it</a>><br>
Subject: Re: [Pw_forum] Nonlinear scaling with pool parallelization<br>To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>Message-ID: <<a href="mailto:65F016CA-33E7-47CA-A04B-3FCBCFCB143B@democritos.it">65F016CA-33E7-47CA-A04B-3FCBCFCB143B@democritos.it</a>><br>
Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed<br><br>On Apr 5, 2011, at 19:54, Markus Meinert wrote:<br><br>> I used an _unshifted_ k-mesh<br><br>It doesn't matter if it is shifted or unshifted: only the number of<br>k-points matters for k-point parallelization.<br><br>> The slab has 20 k points.<br><br>20 k-points on 3 processors = 7+7+6: load balancing is not ideal.<br>This is likely to be a minor factor, though.<br><br>> But, since a single iteration takes about 100 seconds, I do not<br>
> see where the time is being spent, when the k points are independent.<br><br>You do not see it because you do not know where to look. Not that it<br>is explained anywhere... have a look at the final report:<br>* the time spent in "c_bands" and the routines it calls is proportional to the<br>number of k-points, so it will scale linearly with the number of "k-point pools"<br>* the time spent in "sum_band" is only in part proportional to the number<br>of k-points and will only partially scale<br>* the time spent in "v_of_rho", "newd", "mix_rho" is independent of the<br>number of k-points and will not scale at all<br>* k-point parallelization does not reduce memory<br>* the rest is usually irrelevant<br>
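A rough back-of-the-envelope illustration of this breakdown (a Python sketch for illustration only, not QE code; the 70% "pool-scalable" fraction below is an assumed number, not one measured from this run):<br><pre>
# Amdahl-style estimate: only the fraction of the per-iteration time spent in
# c_bands (and, partly, sum_band) is divided among the k-point pools.
def pool_speedup(scalable_fraction, npool):
    """Overall speedup when only `scalable_fraction` of the time scales with npool."""
    return 1.0 / ((1.0 - scalable_fraction) + scalable_fraction / npool)

for npool in (1, 2, 3):
    print(npool, round(pool_speedup(0.7, npool), 2))

# With ~70% of the time in pool-scalable routines, 3 pools give less than a 2x
# speedup -- consistent with the "less than a factor of two" reported above.
</pre>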
Also note that<br>* FFT parallelization distributes most memory<br>* FFT parallelization speeds up (with varying efficiency) almost all routines,<br>with the exception of "cdiaghg" or "rdiaghg"<br>* linear-algebra parallelization (which you are not using) will (though not<br>always) speed up "cdiaghg" or "rdiaghg" and distribute more memory<br>Alles klar?<br><br>P.<br>---<br>Paolo Giannozzi, Dept of Chemistry&Physics&Environment,<br>
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy<br>Phone +39-0432-558216, fax +39-0432-558222<br><br><br><br><br><br><br>------------------------------<br><br>Message: 9<br>Date: Tue, 5 Apr 2011 15:42:56 -0600<br>From: Tram Bui <<a href="mailto:trambui@u.boisestate.edu">trambui@u.boisestate.edu</a>><br>
Subject: [Pw_forum] Generating ultra soft pseudopotential<br>To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>Message-ID: <BANLkTi=<a href="mailto:pmuBY4W3_84vsE_m1okCtMSg0rQ@mail.gmail.com">pmuBY4W3_84vsE_m1okCtMSg0rQ@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br><br>Dear Folks,<br> I'm trying to build an input file for generating the Cs<br>pseudopotential. When I run ld1.x on my input file, I get an error<br>message saying that I'm using the wrong core. Could you help me with additional<br>information on where I can find help on how to choose the right core<br>for some atoms, such as, in my case, Cs?<br><br>Thank you very much,<br><br>Tram Bui<br><br>M.S. Materials Science & Engineering<br><a href="mailto:trambui@u.boisestate.edu">trambui@u.boisestate.edu</a><br>
<br>------------------------------<br><br>Message: 10<br>Date: Tue, 5 Apr 2011 20:35:08 -0400<br>From: Duy Le <<a href="mailto:ttduyle@gmail.com">ttduyle@gmail.com</a>><br>Subject: Re: [Pw_forum] Generating ultra soft pseudopotential<br>
To: PWSCF Forum <<a href="mailto:pw_forum@pwscf.org">pw_forum@pwscf.org</a>><br>Message-ID: <BANLkTinwZhgYzbc+H_wsrxCPOO_SsJ=<a href="mailto:1tg@mail.gmail.com">1tg@mail.gmail.com</a>><br>Content-Type: text/plain; charset=ISO-8859-1<br>
<br>Some available PPs for Cs can be found at<br><a href="http://charter.cnf.cornell.edu/psplist.php?element=Cs" target="_blank">http://charter.cnf.cornell.edu/psplist.php?element=Cs</a><br>If you cannot find what you need there, you can follow the<br>instructions (videos and slides) from July 25 at<br><a href="http://media.quantum-espresso.org/santa_barbara_2009_07/index.php" target="_blank">http://media.quantum-espresso.org/santa_barbara_2009_07/index.php</a><br>--------------------------------------------------<br>
Duy Le<br>PhD Student<br>Department of Physics<br>University of Central Florida.<br><br>"Men don't need hand to do things"<br><br><br><br>On Tue, Apr 5, 2011 at 5:42 PM, Tram Bui <<a href="mailto:trambui@u.boisestate.edu">trambui@u.boisestate.edu</a>> wrote:<br>
> Dear Folks,<br>> I'm trying to build an input file for generating the Cs<br>> pseudopotential. When I run ld1.x on my input file, I get an error<br>> message saying that I'm using the wrong core. Could you help me with additional<br>> information on where I can find help on how to choose the right core<br>> for some atoms, such as, in my case, Cs?<br>><br>> Thank you very much,<br>> Tram Bui<br>><br>> M.S. Materials Science & Engineering<br>
> <a href="mailto:trambui@u.boisestate.edu">trambui@u.boisestate.edu</a><br>><br><br><br>------------------------------<br><br>End of Pw_forum Digest, Vol 46, Issue 15<br>****************************************<br></blockquote></div>
<div>I'm trying to calculate the density of states and other quantities for SrS, but I have problems with the meaning of the different parameters in pseudopotentials. For example, at the top of a Teter pseudopotential we have:</div>
<div> </div>
<div> </div>
<div><pre>(Ar+3d10) + 4s2 4p6 5s0 4d0; rcs=rcp=rcd=1.7, no chem-hard, exnc(11); ecut 25/34
   38.00000  10.00000    950923                 z,zion,pspdat
    4    3    2    2      2001    0             pspcod,pspxc,lmax,lloc,mmax,r2well
    0    0    0    2    1.69654886360351576     l,e99.0,e99.9,nproj,rcpsp
     .000    .000    .000    .000               rms,ekb1,ekb2,epsatm
    1    0    0    2    1.69654886360351576     l,e99.0,e99.9,nproj,rcpsp
     .000    .000    .000    .000               rms,ekb1,ekb2,epsatm
    2    0    0    0    1.69654886360351576     l,e99.0,e99.9,nproj,rcpsp
     .000    .000    .000    .000               rms,ekb1,ekb2,epsatm
     .000    .000    .000                       rchrg,fchrg,qchrg
    0 = l</pre></div>
<div>Does the first line of this pseudopotential mean that only 8 valence electrons are used in the calculations (for example, the calculation of the density of states, band structure, ...), and that the 4d and 5s orbitals are empty and do not participate in the calculation?</div>
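<div>For what it's worth, the numbers in the second and third header lines above can be paired with the field labels that the file itself prints at the end of each line. A small Python sketch (purely illustrative; it uses only the two quoted lines and makes no further assumptions about the Teter format):</div>
<div><pre>
# Pair each numeric field with the comma-separated label list at the end of the line.
lines = [
    "38.00000 10.00000 950923 z,zion,pspdat",
    "4 3 2 2 2001 0 pspcod,pspxc,lmax,lloc,mmax,r2well",
]
for line in lines:
    *values, labels = line.split()
    for name, value in zip(labels.split(","), values):
        print(f"{name:8s} = {value}")

# For this file the field labelled zion reads 10.00000.
</pre></div>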
<div> </div>
<div>Thanks</div>