Re: [h5md-user] Variable-size particle groups

From: Olaf Lenz
Subject: Re: [h5md-user] Variable-size particle groups
Date: Tue, 29 May 2012 17:03:24 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120428 Thunderbird/12.0.1

On 05/29/2012 12:48 PM, Peter Colberg wrote:
>> In general, standardizing such a subgroup might also be useful in
>>  simulations with fixed particle numbers, e.g. for parallel IO, 
>> where the particles are not always stored on the same CPU.
> Am I correct that one would *always* want to write the particle 
> identities with parallel IO? Since particles move between 
> processors, a fixed order (e.g. the initial order) would require
> each processor to perform a scattered write to the “value” dataset,
> which is probably quite slow with HDF5. By storing particles
> according to their current order in memory, each processor could
> write to a linear region in the dataset.

That is exactly what I had in mind. A scattered write is not truly
parallel IO, so storing the particle ids would be of interest.
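
A minimal sketch (plain Python, no HDF5 or MPI, with illustrative data
and names of my own choosing) of why storing the ids is enough: if each
task writes its particles in current memory order together with their
ids, a reader can restore the canonical order afterwards with a single
gather.

```python
# Hypothetical per-task output: particles in whatever order each task
# currently holds them, plus their global ids (illustrative data).
task_values = [[10.0, 30.0], [20.0, 40.0]]  # per-task "position" values
task_ids    = [[1, 3],       [0, 2]]        # matching particle ids

# Each task writes one contiguous slab; concatenating the slabs mimics
# the resulting "value" and "id" datasets in the file.
value = [v for task in task_values for v in task]
ids   = [i for task in task_ids    for i in task]

# A reader restores the canonical (id) order with one gather.
canonical = [None] * len(value)
for i, v in zip(ids, value):
    canonical[i] = v

print(canonical)  # positions reordered by particle id
```

The point is that the writers never communicate or reorder anything;
the cost of restoring a fixed order is paid once, by the reader.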

Note, however, that this is still not the whole solution, as the number
of particles per task may vary from timestep to timestep. Parallel IO
is only efficient when each task knows exactly where in the file to
write, which requires the write size of each task to be known
beforehand. It would therefore be necessary to leave "holes" in the
trajectory data, for example as undefined positions.
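
A hedged sketch of the "holes" idea (plain Python, names and fill value
of my own choosing): give every task a fixed slot of some capacity,
e.g. the maximum expected particle count, so that offsets are known
without any communication, and pad unused entries with a fill value.

```python
import math

FILL = math.nan  # placeholder for "undefined position" holes

def write_with_holes(per_task_particles, capacity):
    """Each task writes into a fixed-size slot starting at
    rank * capacity, so its offset is known beforehand; unused
    entries remain as holes (fill values)."""
    n_tasks = len(per_task_particles)
    dataset = [FILL] * (n_tasks * capacity)
    for rank, particles in enumerate(per_task_particles):
        assert len(particles) <= capacity
        offset = rank * capacity  # known without communication
        dataset[offset:offset + len(particles)] = particles
    return dataset

# Particle counts per task vary between timesteps.
step = [[1.0, 2.0, 3.0], [4.0]]
data = write_with_holes(step, capacity=3)
print(data)  # [1.0, 2.0, 3.0, 4.0, nan, nan]
```

The trade-off is wasted space in the file in exchange for writes whose
layout never depends on the other tasks.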

This probably also explains why the "range" dataset causes trouble for
parallel IO: in that case, the chunk to be written or read by each
task is not known beforehand.
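
Without holes, each task's offset would have to be derived from the
counts of all lower-ranked tasks at every timestep, i.e. an exclusive
prefix sum (what MPI_Exscan computes), which is an extra collective
communication per write. A sketch of that offset computation,
serialized here for illustration:

```python
def exclusive_prefix_sum(counts):
    """Offsets each task would obtain from an exclusive scan over
    the per-task particle counts (sketched serially; in practice
    this is a collective such as MPI_Exscan)."""
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)  # start position for this task
        total += c             # running sum of preceding counts
    return offsets, total

print(exclusive_prefix_sum([3, 1, 2]))  # ([0, 3, 4], 6)
```

Each write then also implies resizing or pre-allocating the dataset to
the total count, which is exactly the information that is missing when
ranges vary per step.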


-- 
Dr. rer. nat. Olaf Lenz
Institut für Computerphysik, Pfaffenwaldring 27, D-70569 Stuttgart
Phone: +49-711-685-63607

