<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>Thanks for your reply. I had started an earlier thread under the same heading and you had replied; that is the only reason I addressed you by name. I should have continued that thread, but it was a few weeks old.<br><br><div>&gt; From: paolo.giannozzi@uniud.it<br>&gt; To: pw_forum@pwscf.org<br>&gt; Date: Wed, 12 Jun 2013 18:30:23 +0200<br>&gt; Subject: Re: [Pw_forum] Parallelization<br>&gt; <br>&gt; On Tue, 2013-06-11 at 19:57 +0000, vijaya subramanian wrote:<br>&gt; <br>&gt; &gt; Hi Paolo<br>&gt; <br>&gt; You know, there are 1605 subscribed users on the pw_forum mailing list.<br>&gt; Even if some of them are inactive, that is a lot of people. <br>&gt; Why do you address me in particular?<br>&gt; <br>&gt; Your unit cells are quite large, your cutoff is not small, and you<br>&gt; use spin-orbit, a feature that increases the memory footprint and <br>&gt; is less optimized than "plain-vanilla" calculations. In order to <br>&gt; run such large jobs, one needs to know quite a bit about the<br>&gt; inner workings of parallelization: which arrays are distributed,<br>&gt; which are not ... The following arrays, for instance:<br>&gt; <br>&gt; &gt; Each &lt;psi_i|beta_j&gt; matrix 350.63 Mb ( 5440, 2, 2112)<br>&gt; <br>&gt; are not distributed. These are the kinds of arrays that cause bottlenecks.<br>&gt; If you have N MPI processes per node, you have N such arrays filling<br>&gt; the same physical memory. Reducing the number of MPI processes per node<br>&gt; and using OpenMP instead might be a good strategy.<br>&gt; <br>&gt; P.<br>&gt; -- <br>&gt; Paolo Giannozzi, Dept. Chemistry&amp;Physics&amp;Environment, <br>&gt; Univ. Udine, via delle Scienze 208, 33100 Udine, Italy<br>&gt; Phone +39-0432-558216, fax +39-0432-558222 <br>&gt; <br>&gt; _______________________________________________<br>&gt; Pw_forum mailing list<br>&gt; Pw_forum@pwscf.org<br>&gt; http://pwscf.org/mailman/listinfo/pw_forum<br></div> </div></body>
</html>