[Pw_forum] XSpectra can not run
Matteo Calandra
matteo.calandra at impmc.jussieu.fr
Tue Apr 12 09:21:06 CEST 2011
> hi, Paolo,
> Thank you and Matteo very much! Well, I got a similar reading
> error when I tried to run an XSpectra calculation with my own input file.
> Could you please take a look at the post
> http://www.democritos.it/pipermail/pw_forum/2011-April/020020.html
> for me (I apologize if you've already seen it)? Thank you very much!
> best wishes
> Yu Zhang
>
Dear Yu Zhang,
I think the problem is that some of your pseudopotentials do not
contain the GIPAW reconstruction.
In principle this is needed only for the absorbing atom, but for
the moment the code requires that all the pseudopotentials have the
GIPAW reconstruction.
I have to modify this as soon as I have some time.
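In the meantime, a quick way to check which of your UPF files contain the
GIPAW data is simply to search their text for the string GIPAW. This is only
a rough suggestion, not an official QE tool; it assumes that a UPF
pseudopotential with GIPAW reconstruction carries a section whose tag
contains "GIPAW" (e.g. <PP_GIPAW>):

    # rough check, run as:  python check_gipaw.py  *.UPF
    # assumption: UPF files with GIPAW reconstruction contain the string
    # "GIPAW" somewhere in their text (e.g. a <PP_GIPAW> section)
    import sys

    for path in sys.argv[1:]:
        with open(path, errors="ignore") as f:
            found = any("GIPAW" in line.upper() for line in f)
        print(path, "has GIPAW reconstruction" if found
                    else "NO GIPAW reconstruction")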
The second problem is that XSpectra does not work with the
Gamma-only version of the code. So you should run both the
scf and the XSpectra calculations not with
K_POINTS gamma
but with
K_POINTS automatic
1 1 1 0 0 0
It should then work.
M.
> On 04/08/2011 12:18, Paolo Giannozzi wrote:
>> On Apr 8, 2011, at 20:49 , Yu Zhang wrote:
>>
>>> This really solves my problem! Thank you very much.
>> you should thank Matteo Calandra, not me
>>
>>> Could you please explain briefly what the cause of the problem is?
>> the problem is that some recent changes to the initialization of FFT
>> grids, G-vectors, and other basic stuff had not been propagated
>> to that particular routine. I had forgotten that XSpectra uses a
>> different routine from all the other QE codes to read the data file.
>> Just an oversight.
>>
>> Paolo
>> ---
>> Paolo Giannozzi, Dept of Chemistry&Physics&Environment,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 11 Apr 2011 17:15:33 -0500
> From: "Nichols A. Romero" <naromero at gmail.com>
> Subject: Re: [Pw_forum] I/O performance on BG/P systems
> To: PWSCF Forum <pw_forum at pwscf.org>
> Cc: Gabriele Sclauzero <sclauzer at sissa.it>
>
> Gabriele,
>
> A couple of other technical details that I remember:
> 1. The BG/P at ANL has 1 I/O node per 64 nodes, i.e. 8 I/O nodes per mid-plane.
> 2. The bottleneck of writing multiple files per MPI task does not become
> serious until about 8+ racks.
>
> On Mon, Apr 11, 2011 at 3:50 PM, Gabriele Sclauzero <sclauzer at sissa.it> wrote:
>> Dear Paolo and Nichols,
>>     as a follow-up, I had a brief meeting with the sysadmin of our local
>> BG/P. It looks like the timings I was reporting actually correspond to the
>> maximum I/O throughput of that specific rack, which depends on the number of
>> I/O nodes present on the rack itself (in that case, 4 I/O nodes per
>> midplane, each of them capable of 350 MB/s, corresponding to 1.7 GB/s for
>> that midplane).
>> In the example I was reporting:
>>     davcio       :   1083.30s CPU   1083.30s WALL (      38 calls)
>> I've been running on just 128 nodes (512 cores in VN mode), therefore I had
>> only one I/O node (1 midplane = 4 x 128 nodes, for the non-BG/P-ers). Now,
>> the total size of the .wfc files was around 9200 MB, which cannot be written
>> in less than 9200/350 = 26.3 sec, according to the figures that the
>> sysadmins gave me.
>> In my case the timings give: 1083.30s/38 = 28.5s, which is close to the
>> theoretical maximum.
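[Spelling out the arithmetic quoted above, purely as a sanity check with the
figures given in the message, nothing more:

    # figures from the message: 9200 MB of .wfc files, 350 MB/s per I/O
    # node, 38 davcio calls totalling 1083.30 s of wall time
    total_mb, mb_per_s = 9200.0, 350.0
    davcio_s, calls = 1083.30, 38

    print(total_mb / mb_per_s)   # ~26.3 s, best possible time per write
    print(davcio_s / calls)      # ~28.5 s, measured average time per call
]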
>> I will perform more testing and I will take into consideration the
>> suggestion of Nichols about the number of files per node. On our machine we
>> have one rack with 16 I/O nodes per midplane; I will try to see if the I/O
>> performance scales accordingly.
>> As a side note, I ran into a problem with the timing procedure. I found very
>> different davcio timings (i.e. 3 orders of magnitude!) for two jobs where
>> the size of the wavefunctions differed only by a factor of 2 (the jobs had
>> been executed on the same rack and with the same number of processors and
>> the same parallelization scheme).
>> The sysadmins replied that the I/O bandwidth measured in the fastest case is
>> not attainable on BG/P, and should be attributed to an inaccurate measurement
>> of cputime/walltime.
>> I'm going to investigate this anyway.
>> I'm not aware of anyone working on MPI I/O porting.
>> Thanks so far for your suggestions,
>>
>>
>> Gabriele
>>
>>
>> On 11 Apr 2011, at 20:02, Nichols A. Romero wrote:
>>
>> Sorry for not replying earlier, but I missed this e-mail due to the
>> APS March Meeting.
>>
>> The GPFS file system on BG/P does a poor job of handling writes to more than
>> one file per node. My guess is that Gabriele was running QE in either dual
>> or VN mode (2 and 4 MPI tasks per node, respectively). So on BG/P, you
>> basically want to write one file per node (which GPFS is designed to handle)
>> or one big file using MPI-I/O.
>>
>> At ANL, we are thinking about rewriting some of the I/O using parallel I/O
>> libraries (e.g. HDF5, Parallel NetCDF). The simplest approach, though highly
>> unportable, is to use MPI I/O directly.
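[For concreteness, a minimal sketch of the "one big file via MPI I/O" idea
mentioned above. It is illustrative only (Python with mpi4py; QE itself is
Fortran, and this is not its actual I/O code; the array and file names are
made up): each MPI task writes its own block at a rank-dependent offset of a
single shared file with one collective call.

    # illustrative sketch: one shared file, one collective write per task
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # stand-in for this task's block of wavefunction data
    local_wfc = np.full(1024, rank, dtype=np.float64)

    fh = MPI.File.Open(comm, "wfc_all.dat",
                       MPI.MODE_WRONLY | MPI.MODE_CREATE)
    fh.Write_at_all(rank * local_wfc.nbytes, local_wfc)  # collective write
    fh.Close()

With this pattern GPFS sees a single file regardless of how many MPI tasks
run on each node, which matches the access pattern described above.]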
>>
>> Has anyone on this list worked on parallel I/O with QE? Or have any strong
>> opinions on this issue?
>>
>>
>> On Wed, Mar 30, 2011 at 11:57 AM, Paolo Giannozzi
>> <giannozz at democritos.it> wrote:
>>
>> On Mar 30, 2011, at 11:20, Gabriele Sclauzero wrote:
>>
>>> Do you think that having an additional optional level of I/O
>>> (let's say that it might be called "medium")
>>
>> I propose 'rare', 'medium', 'well done'
>>
>>> would be too confusing for users?
>>
>> some users get confused no matter what
>>
>>> I could try to implement and test it.
>>
>> ok: just follow the "io_level" variable. Try first to understand
>> what the actual behavior is (the documentation is not so
>> clear on this point) and then think what it should be, if you
>> have some clear ideas
>>
>> P.
>> ---
>> Paolo Giannozzi, Dept of Chemistry&Physics&Environment,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>> --
>> Nichols A. Romero, Ph.D.
>> Argonne Leadership Computing Facility
>> Argonne, IL 60490
>> (630) 447-9793
>>
>>
>> Gabriele Sclauzero, EPFL SB ITP CSEA
>>    PH H2 462, Station 3, CH-1015 Lausanne
>>
--
* * * *
Matteo Calandra, Chargé de Recherche (CR1)
Institut de Minéralogie et de Physique des Milieux Condensés de Paris
Université Pierre et Marie Curie, tour 23, 3eme etage, case 115
4 Place Jussieu, 75252 Paris Cedex 05 France
Tel: +33-1-44 27 52 16 Fax: +33-1-44 27 37 85
http://www.impmc.jussieu.fr/~calandra