multipatch-aware hyperslabbing (was: Re: [Developers] Thorn Cartoon2D)
Jonathan Thornburg
jthorn at aei.mpg.de
Thu May 5 06:32:05 CDT 2005
Someone (maybe Ian if I've unwrapped the nested quoting correctly) asked:
> Has there been any advance with the slabbing interface to allow the
> other symmetry thorns (ReflectionSymmetry et al) to be moved into a
> Cactus* arrangement?
Erik replied:
> No, nothing. I still think that the interface of TAT/Slab is good as it
> is, up to minor changes (such as e.g. adding an options table). The
> implementation would probably need to be cactified, which mostly means
> re-formatting the source code and stuff.
>
> There are probably a few things that could be done to improve
> performance, but since the symmetry thorns work for us as they are, I
> would consider that secondary at the moment.
My vision for "the hyperslab transfer API I'd like to have to make
GZPatchSystem multiprocessor-capable" is similar to TAT/Slab, but
generalizes it to add the following (a rough sketch of what I mean
appears after the list):
* The ability to explicitly specify (probably in an options table)
source and destination patches for interpatch slab transfers.
* The ability for the source and/or the destination to (independently)
  be either a slab of a grid function/array, or a slab of a local
  C/Fortran/etc. array.  TAT/Slab can sort of do this, but I have had
  great difficulty figuring out *how* from the existing documentation...
* The ability to move an arbitrary slab in a single API call, regardless
  of how the slab is distributed across processors in the grid-array case.
  [The current TAT/Slab API requires that each processor's
   chunk of the slab itself be a contiguous slab.]
* The ability to specify multiple (possibly interpatch) slab transfers
  (in general each of a different size), and multiple grid arrays, which
  would all (potentially) be moved in parallel.
  [This is a performance optimization, but I think it
   may prove important in practice: for a BSSN evolution
   with no symmetries, GZPatchSystem will typically want
   to do a set of 24 interpatch slab transfers in parallel
   (bitant symmetry cuts this to 16 interpatch slab transfers,
   not all of the same size), with each transfer moving slabs
   from 17 grid functions into 17 corresponding local C arrays.
   The thought of all this taking 24*17 (or 16*17) separate MPI
   latencies is a bit painful...]
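To make the wish-list a bit more concrete, here is a very rough sketch
(in C) of the sort of call I have in mind.  I want to emphasize that
this is purely illustrative -- none of these names or argument lists
exist in TAT/Slab or anywhere else; it's just the general shape of the
thing:

   /* --- purely hypothetical sketch: none of these names exist (yet) --- */
   #include "cctk.h"

   /* One slab transfer.  In this sketch the source is a slab of a Cactus
      grid function (identified by its variable index) and the destination
      is a slab of a processor-local C array; a real interface would let
      either end be either kind of object. */
   struct slab_xfer
     {
     int src_patch, dst_patch;       /* patch numbers for interpatch moves */
     int src_varindex;               /* grid function to read from */
     CCTK_REAL *dst_array;           /* local C array to write into */
     int dim;                        /* number of dimensions */
     int src_origin[3], extent[3];   /* slab origin and size (source) */
     int dst_origin[3];              /* slab origin (destination) */
     };

   /* Move all n_xfers slabs "at once": the implementation is then free
      to bundle everything travelling between a given pair of processors
      into a single MPI message, rather than sending one message per
      (transfer, grid function) pair. */
   int Multipatch_SlabTransferMany(const cGH *GH,
                                   int n_xfers,
                                   const struct slab_xfer xfer[],
                                   int options_table_handle);

The point of handing the whole list of transfers to a single call is
precisely the last item above: the hyperslabber can then aggregate all
the data moving between any given pair of processors, instead of paying
24*17 (or 16*17) separate MPI latencies.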
Once Steve White finishes getting global interpolation to grok multipatch,
I hope he will be able to start working on such a hyperslabber.
ciao,
--
-- Jonathan Thornburg <jthorn at aei.mpg.de>
Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut),
Golm, Germany, "Old Europe" http://www.aei.mpg.de/~jthorn/home.html
"Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral."
-- quote by Freire / poster by Oxfam