[Pw_forum] time for calculation of el-ph interactions of graphene supercell
L.F.Huang
lfhuang at theory.issp.ac.cn
Mon Sep 8 03:40:59 CEST 2008
Dear Prof. Kohlmeyer:
Thank you for your kind help!
I have read your previous demonstration to vega, and I am now
running some tests of my own. I am sorry for my limited computer
knowledge! The computer I am using is thousands of miles away from
me, so I can only gather information about it from its web page,
which I list below, trying to make it as complete as possible:
Peak Performance: 10.2 Tflops
Computing Nodes: 512 4-way nodes, each with 4 CPUs
Storage Nodes: 16 4-way nodes, each with 4 CPUs
Access Nodes: 4 4-way nodes, each with 4 CPUs
CPU: AMD Opteron 850, 2.4 GHz, 2128 CPUs in total
System Memory: 4256 GB
System Storage: 20 TB
Architecture: cluster, Myrinet 2000
Operating System: Turbo Linux 8.0
Languages: C, C++, Fortran 77/90/95, Java
Compilers: gcc 3.2.2-4, Java 1.2, tcl/tk 8.0, Perl 5.0
Math Libraries: NAG ACML 1.5, ATLAS 3.6
Parallel Environment: BCL4, Myricom GM, MPI 1.2, PVM 3.3.6, JobManager
Tools: gdb 5.3, OpenLDAP 2.2, Python 2.3, X Window
System Administration: LSF, Dawning Cluster Operating System (DCOS),
including DCMS (Dawning Cluster Management System),
DCIS (Dawning Cluster Installation System),
DCMM (Dawning Cluster Monitor and Management System),
Mterm (multi-terminal software), and the Dawning
Large-Scale KVM System (SKVM)
One more question:
Prof. Kohlmeyer, you mentioned that not all components of Q-E
fully support k-point parallelization. Are there any other such
components besides the postprocessing ones? In particular, do
elphon.f90 and elph.f90 support it?
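To make my tests concrete: with 13 nodes of 4 CPUs each and 52
reduced k points, I plan to try a k-point-parallel run along these
lines (the input file name is only a placeholder):

  mpirun -np 52 pw.x -npool 13 < graphene.scf.in > graphene.scf.out

That is, 13 pools of 4 processes each, so every pool handles 4 k
points and parallelizes over g-vectors within the pool (if I
understand correctly, the total number of processes must be
divisible by npool).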
Thanks!
Yours Sincerely
L.F.Huang
> On Sat, 6 Sep 2008, L.F.Huang wrote:
>
> LFH> PostScript:
> LFH> Amount of memory: 2GB per node
> LFH> kind of interconnect: myrinet 2000
> LFH> No. of reduced k points: 52
> LFH> No. of nodes: 13 (with npool by default 1)
>
> parallelization over k-points is always the
> first thing to try. not all components in Q-E
> support it fully, but where it is supported,
> it is superior to the g-space parallelization,
> which in itself has scaling limitations. please
> see how, in my previous demonstration to vega,
> the execution time goes up significantly when
> using too many nodes and the wrong parallelization
> scheme. i would also check whether your machine
> is communicating correctly. i've seen myrinet machines
> just stall occasionally, due to firmware crashes.
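> for example, on a GM-based myrinet node you can usually
> query the board and firmware status with the GM diagnostic
> tool (assuming the GM utilities are installed):
>
>   gm_board_info
>
> a node whose firmware has crashed will typically report
> errors or hang there, which would show up as stalled jobs.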
>
> LFH> Parallel environment: MPI 1.2
>
> i assume it is MPICH-GM. "MPI 1.2" does not make that
> much sense as a version (there is an MPI 1.2 standard,
> but a library's own version number is usually independent
> of the MPI standard version it implements. e.g., i'm using
> OpenMPI 1.2.7 and it implements all of MPI 2.0 ... ).
>
> LFH> Math library: NAG ACML 1.5, ATLAS 3.6
>
> ACML is from AMD, not NAG, and it would conflict
> with ATLAS, as both implement BLAS/LAPACK.
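> a quick way to see which one a dynamically linked pw.x
> actually picked up is (assuming ldd is available):
>
>   ldd pw.x | grep -i -e acml -e atlas -e blas -e lapack
>
> for a statically linked binary you would have to check the
> link line in make.sys instead.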
>
> LFH> No. of cpus per node: 4
>
> which type? looks like opteron dual-core?
>
> cheers,
> axel.
>
> --
> =======================================================================
> Axel Kohlmeyer akohlmey at cmm.chem.upenn.edu http://www.cmm.upenn.edu
> Center for Molecular Modeling -- University of Pennsylvania
> Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
> tel: 1-215-898-1582, fax: 1-215-573-6233, office-tel: 1-215-898-5425
> =======================================================================
> If you make something idiot-proof, the universe creates a better idiot.