[Pw_forum] RE: Re: problem in MPI running of QE (16 processors)
Alexander G. Kvashnin
agkvashnin at gmail.com
Tue Mar 8 10:06:29 CET 2011
Previously I also used 16 nodes when calculating with ABINIT, and there was
no problem running it.
I asked my administrator about it; he said that everything is alright with
the policy.
On 8 March 2011 07:48, Huiqun Zhou <hqzhou at nju.edu.cn> wrote:
> Alexander,
>
> According to your reply to my message, you actually requested 64 CPU cores
> (16 nodes, 4 cores per node). This should be no problem unless your cluster's
> usage policy prohibits it. At one time we had such a policy on our cluster: a
> job could occupy at most 32 CPU cores, otherwise it had to be put into the
> sequential queue.
>
> Maybe you should ask your administrator whether there is such a policy ...
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
>
> ----- Original Message -----
> *From:* Alexander G. Kvashnin <agkvashnin at gmail.com>
> *To:* PWSCF Forum <pw_forum at pwscf.org>
> *Sent:* Tuesday, March 08, 2011 12:24 AM
> *Subject:* Re: [Pw_forum] RE: Re: problem in MPI running of QE (16
> processors)
>
> Dear all
>
> I tried to use full paths, but it did not give positive results. It wrote the
> error message
>
> application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0
>
>
> On 7 March 2011 10:30, Alexander Kvashnin <agkvashnin at gmail.com> wrote:
>
>> Thanks, I tried to use "<" instead of "-in"; it also didn't work.
>> OK, I will try to use full paths for the input and output, and will report
>> the result.
>>
>> ----- Original Message -----
>> From: Omololu Akin-Ojo <prayerz.omo at gmail.com>
>> Sent: 7 March 2011, 9:56
>> To: PWSCF Forum <pw_forum at pwscf.org>
>> Subject: Re: [Pw_forum] RE: Re: problem in MPI running of QE (16 processors)
>>
>> Try to see if specifying the full paths helps.
>> E.g., try something like:
>>
>> mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp >
>> /scratch/MyDir/graph.out
>>
>> (where /home/MyDir/bin/pw.x is the full path to your pw.x and
>> /scratch/MyDir/graph.inp is the full path to your input ....)
>>
>> (I see you use "-in" instead of "<" to indicate the input. I don't
>> know too much, but _perhaps_ you could also _try_ using "<" instead of
>> "-in").
>>
>> o.
>>
>> On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin <agkvashnin at gmail.com>
>> wrote:
>> > Yes, I wrote
>> >
>> > #PBS -l nodes=16:ppn=4
>> >
>> > And the MIPT-60 user guide says that mpiexec must choose the number of
>> > processors automatically; that's why I didn't write anything else.
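>> >
>> > For reference, a minimal sketch of such a PBS script (the job name, the
>> > walltime and the paths are hypothetical; only the nodes/ppn line is the
>> > one quoted above):
>> >
>> > #!/bin/bash
>> > #PBS -N pw_graphene
>> > #PBS -l nodes=16:ppn=4
>> > #PBS -l walltime=24:00:00
>> >
>> > # Run from the directory the job was submitted from.
>> > cd $PBS_O_WORKDIR
>> >
>> > # A PBS-aware mpiexec takes the process count from the allocated nodes,
>> > # so no -np flag is given here, as the MIPT-60 user guide describes.
>> > mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out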
>> >
>> >
>> > ________________________________
>> > From: Huiqun Zhou <hqzhou at nju.edu.cn>
>> > Sent: 7 March 2011, 7:52
>> > To: PWSCF Forum <pw_forum at pwscf.org>
>> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>> >
>> > How did you specify the number of nodes and the number of processes per
>> > node in your job script?
>> >
>> > #PBS -l nodes=?:ppn=?
>> >
>> > zhou huiqun
>> > @earth sciences, nanjing university, china
>> >
>> >
>> > ----- Original Message -----
>> > From: Alexander G. Kvashnin
>> > To: PWSCF Forum
>> > Sent: Saturday, March 05, 2011 2:53 AM
>> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>> > I created a PBS task on the MIPT-60 supercomputer, where I wrote
>> >
>> > mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>> > all other
>>
>> [Not all of the original message text is included]
>>
>
>
>
> --
> Sincerely yours
> Alexander G. Kvashnin
>
> --------------------------------------------------------------------------------------------------------------------------------
> Student
> Moscow Institute of Physics and Technology http://mipt.ru/
> 141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia
>
> Junior research scientist
> Technological Institute for Superhard
> and Novel Carbon Materials
> http://www.ntcstm.troitsk.ru/
> 142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
> ================================================================
>
> ------------------------------
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum
>
>
--
Sincerely yours
Alexander G. Kvashnin
--------------------------------------------------------------------------------------------------------------------------------
Student
Moscow Institute of Physics and Technology http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia
Junior research scientist
Technological Institute for Superhard
and Novel Carbon Materials
http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
================================================================