Thanks a lot Dr. Kohlmeyer, I will ask our sysadmin and let you know if I can successfully compile the parallel version.

Best regards,

Stargmoon

Axel Kohlmeyer <akohlmey@cmm.chem.upenn.edu> wrote:

On 9/18/06, stargmoon <stargmoon@yahoo.com> wrote:
> Dear Dr. Kohlmeyer,
>
> Thanks for your reply.
>
> I am just one user of our cluster, that is, I am not the system
> administrator. I do not know the details of how the software was installed

so please _ask_ your sysadmin, this is the person 'in the know'
about the details. as i wrote before, all machines/clusters are
slightly different, so there is no way to predict what might be a problem.

> there. But MPICH was there, and it can be loaded with "module add". And I was
> told that MPICH was installed by ourselves, not bundled with the machines.

so then your sysadmin _has_ to know...

> The command line I used to do "configure" is: ./configure
> MPI_LIBS="-L/opt/mpi/tcp/mpich-pgi/lib -lmpich -lfmpich"
>
> Could you please give me more hints on what I should do to figure out this
> problem?

as i wrote before, please try to compile and run _another_ MPI program,
best one of the MPI tutorial examples, as they are rather trivial.
once you get that working, let us know what command line you needed
for the successful compile that produced a usable executable, and
we can look into getting QE compiled.
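for example, a minimal 'hello world' along the lines of the sketch below
is enough for that test (this is a generic sketch, not something from the
QE distribution; the file name and the launch command are just placeholders
and depend on how MPICH is set up on your cluster):

  ! hello.f90 -- trivial MPI test program (generic sketch)
  program hello
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, nprocs
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    print *, 'hello from rank ', rank, ' of ', nprocs
    call MPI_Finalize(ierr)
  end program hello

  mpif90 -o hello hello.f90
  mpirun -np 4 ./hello

(the mpirun line assumes a plain MPICH install; on your cluster the job
may have to go through the queueing system instead.) with MPICH, running
'mpif90 -show' should also print the underlying compiler and library line
the wrapper uses, which is exactly the kind of information that helps here.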

>
> By the way, have you ever used pathscale to compile espresso? It seems to
> be possible from the update information for espresso 3.1.1. We have pathscale,
> and mpich for pathscale works for VASP on our cluster.

i managed to do it a long time ago (including a few manual hacks),
but i haven't had access to a machine with pathscale for quite a while.
with the addition of the iotk library, probably a few more tweaks are/were
needed. i would not expect a large difference between pathscale and PGI
on AMD64 machines. most of the speed comes from the design of the
cpu itself, and numerical codes like QE are usually faster if you _lower_
the optimization (and especially avoid IPA/IPO and heuristic vectorization).
QE already takes good advantage of SIMD instructions through the
use of optimized BLAS/LAPACK libraries (e.g. ACML on AMD64).
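just to illustrate what 'lower the optimization' means in practice (the
flags below are from memory and depend on the compiler version, and the
ACML path is only a guess for your machine):

  conservative fortran flags, usually a safe choice for QE:
      -O2
  aggressive settings better avoided (IPA/IPO, heuristic vectorization):
      -O3 -Mipa=fast,inline    (PGI)
      -Ofast -ipa              (pathscale)
  optimized BLAS/LAPACK instead of the internal sources, e.g. ACML:
      -L/opt/acml/pgi64/lib -lacml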

cheers,
  axel.

>
> Best,
>
> Stargmoon
>
> Axel Kohlmeyer <akohlmey@cmm.chem.upenn.edu> wrote:
> On 9/17/06, stargmoon wrote:
> > Dear pwscf community,
> >
> > I tried to compile Espresso-3.1.1 recently on our PC cluster (AMD64).
> > However, after I run ./configure, I am told that "Parallel environment not
> > detected". I checked the config.log; since there is no problem in searching
> > for the MPI compilers (mpif90, mpif77 and mpicc), I think it must be the MPI
>
> there are two stages to the search: a) whether the executables exist,
> and b) whether they can produce working binaries.
>
> > library problem. Therefore, I tried to set "MPI_LIBS" (there is only
> > libmpich.a in there) in the ./configure command line, but it did not work
>
> i would have expected a libfmpich.a, too.
>
> > either. Could anybody please tell me what kind of MPI libraries I have to
> > point "configure" to in order to get parallel compilation?
>
> this is impossible to tell without knowing any details about your system.
>
> what parallel software are you using (it looks like MPICH), and did you
> install it yourself or was it bundled with the system? do mpif77 and
> mpif90 point to a sufficient fortran compiler?
>
> can you compile and run any of the (trivial) MPI example programs
> that usually ship with MPI packages, and if yes, please describe the
> command line you use for that. based on that information, we may
> be able to help you.
>
> especially on linux machines, there are almost always a few kinks
> to be worked out in the installation.
>
> regards,
>   axel.
>
> >
> > Thanks in advance!
> >
> > stargmoon
> >

--
=======================================================================
Axel Kohlmeyer   akohlmey@cmm.chem.upenn.edu   http://www.cmm.upenn.edu
Center for Molecular Modeling -- University of Pennsylvania
Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
tel: 1-215-898-1582, fax: 1-215-573-6233, office-tel: 1-215-898-5425
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.
_______________________________________________
Pw_forum mailing list
Pw_forum@pwscf.org
http://www.democritos.it/mailman/listinfo/pw_forum