[Pw_forum] openmpi 1.3.3

Giovanni Cantele Giovanni.Cantele at na.infn.it
Mon Oct 5 11:50:25 CEST 2009


Carlo Nervi wrote:
> Dear forum users,
> I tried to compile QE (snapshot 25-09-09 version) under Linux with ifort 
> 11.0 and MKL libraries 10.1.1.019, using OpenMPI 1.3.3.
> As far as I have read, OpenMPI 1.3 has improved performance over 
> previous versions...
> I (apparently) successfully compiled OpenMPI with Intel ifort (the 
> hello.World program ran on all 8 cores of my Linux PC).
>
> If I run pw.x in serial mode (OMP_NUM_THREADS=1) I get the correct 
> results, but if I try to run pw.x with OMP_NUM_THREADS=8, it either 
> runs forever (no convergence), reporting erroneous results, or it 
> crashes.
> If I run it with "mpirun -np 8 pw.x -in scf.in > scf.out &" (using 
> OMP_NUM_THREADS=1), I always get the following message:
> "MPI_ABORT was invoked on rank 5 in communicator MPI_COMM_WORLD"
>
> I found the following url:
> http://software.intel.com/en-us/forums/intel-math-kernel-library/topic/68207/
> where they explain that
> "...Since Open MPI considers MPI_COMM_WORLD to be a pointer it turns out 
> to be 64-bit long. Whereas Cluster FFT was designed in times where 
> sizeof(MPI_Comm) used to be 32-bit. In order to work correctly with Open 
> MPI you just need to wrap the communicator as follows:
> DftiCreateDescriptorDM(MPI_Comm_c2f(MPI_COMM_WORLD),&desc,DFTI_DOUBLE,DFTI_COMPLEX,1,len);"
>
> I don't have enough experience to understand whether this is really 
> relevant to the present topic and could be useful to the whole QE 
> community that would like to compile QE with ifort and MKL, or whether 
> it is totally irrelevant and useless...
>
> Has anybody tried to use OpenMPI 1.3.3 successfully?
> Maybe with ifort and MKL?
> I would greatly appreciate any comments. Thanks!
> 	Carlo
>   

Hi,
I have successfully run (but always with OMP_NUM_THREADS=1) QE versions 
3.2.3 through 4.1 under:
ifort (IFORT) 10.1 20080112
MKL 10.0.1.014
mpirun (Open MPI) 1.3.2

OpenMPI (not sure whether this is relevant!) was configured using:
./configure CC=icc F77=ifort F90=ifort FC=ifort CXX=icpc OBJC=icc \
    FCFLAGS=-i-dynamic CFLAGS=-i-dynamic CXXFLAGS=-i-dynamic \
    --with-tm=/usr/local --prefix=/opt/openmpi/1.3.2/ifort
(--with-tm is irrelevant here; it is needed only if you want PBS/Torque 
support.)
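
As a quick sanity check (this uses the standard Open MPI wrapper 
option), you can verify that the wrappers really call ifort:

    mpif90 --showme

which should print the underlying ifort command line together with the 
Open MPI include and link flags.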

I don't know whether switching from 1.3.2 to 1.3.3 might cause 
problems. However, I had previously also installed openmpi-1.2.5, and 
it worked in that case as well.

Regarding the errors with OMP_NUM_THREADS=8: did you follow this 
suggestion (found in a previous thread on this forum)?

    DFLAGS=...-D__OPENMP -D__FFTW + ifort -openmp + mkl
is tested and safe; other combinations may still run into trouble
due to conflicts between "internal" OpenMP and autothreading
libraries.
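
For reference, in make.sys that combination would look roughly like the 
following (a sketch only; the exact MKL link line depends on your MKL 
version and installation paths):

    DFLAGS    = -D__INTEL -D__FFTW -D__MPI -D__PARA -D__OPENMP
    FFLAGS    = -O2 -assume byterecl -openmp
    BLAS_LIBS = -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core \
                -liomp5 -lpthread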

I have no experience with using OMP_NUM_THREADS>1 myself, though.
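
As for the MPI_Comm_c2f workaround quoted from the Intel forum above: in 
C it would amount to something like the sketch below (untested here; the 
function name, descriptor handle, and transform length are illustrative):

    #include <mpi.h>
    #include <mkl_cdft.h>

    /* Sketch: create a 1-D distributed-memory FFT descriptor over
       MPI_COMM_WORLD. */
    void create_cluster_fft_descriptor(void)
    {
        DFTI_DESCRIPTOR_DM_HANDLE desc;
        MKL_LONG len = 1024;   /* illustrative transform length */

        /* Open MPI's MPI_COMM_WORLD is an opaque pointer (64 bits on
           x86_64), while the cluster FFT interface expects an integer
           Fortran handle, hence the MPI_Comm_c2f() conversion. */
        MKL_LONG status = DftiCreateDescriptorDM(
            MPI_Comm_c2f(MPI_COMM_WORLD),
            &desc, DFTI_DOUBLE, DFTI_COMPLEX, 1, len);

        if (status == DFTI_NO_ERROR)
            DftiFreeDescriptorDM(&desc);
    }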

Hope this helps,

    Giovanni

-- 



Dr. Giovanni Cantele
Coherentia CNR-INFM and Dipartimento di Scienze Fisiche
Universita' di Napoli "Federico II"
Complesso Universitario di Monte S. Angelo - Ed. 6
Via Cintia, I-80126, Napoli, Italy
Phone: +39 081 676910
Fax:   +39 081 676346
E-mail: giovanni.cantele at cnr.it
        giovanni.cantele at na.infn.it
Web: http://people.na.infn.it/~cantele
Research Group: http://www.nanomat.unina.it
Skype contact: giocan74


