multipatch-aware hyperslabbing (was: Re: [Developers] Thorn Cartoon2D)

Erik Schnetter schnetter at aei.mpg.de
Thu May 5 13:00:23 CDT 2005


On Thursday 05 May 2005 13:32, Jonathan Thornburg wrote:
> My vision for "the hyperslab transfer API I'd like to have to make
> GZPatchSystem multiprocessor-capable" is similar to TAT/Slab, but
> generalizes this to add:
> * The ability to explicitly specify (probably in an options table)
>    source and destination patches for interpatch slab transfers.

One does not specify grid function indices, but rather pointers.  Thus 
this is already possible.
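
For concreteness, here is a sketch of such a call.  Caveat: the
argument list of Slab_Transfer below is reconstructed from memory, not
copied from the header; check TAT/Slab's Slab.h for the real prototype.
The point is only that source and destination are plain pointers, so
they can belong to two different patches:

  #include "cctk.h"

  /* Assumed prototype -- see Slab.h for the real one */
  struct xferinfo;            /* slab descriptor, declared in Slab.h */
  int Slab_Transfer (const cGH *cctkGH, int dim,
                     const struct xferinfo *info, int options,
                     int srctype, const void *srcptr,
                     int dsttype, void *dstptr);

  /* Copy a slab from one patch's storage into another's.  No patch
     indices appear anywhere; the pointers carry that information. */
  void interpatch_copy (const cGH *cctkGH, const struct xferinfo *info,
                        const CCTK_REAL *src_patch, CCTK_REAL *dst_patch)
  {
    Slab_Transfer (cctkGH, 3, info, -1 /* no options table */,
                   CCTK_VARIABLE_REAL, src_patch,
                   CCTK_VARIABLE_REAL, dst_patch);
  }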

> * The ability to specify the source and/or the destination to
>    (independently) be either a slab of a grid function/array, or a
>    slab of a local C/Fortran/etc array.  TAT/Slab can sort of do
> this, but I have had great difficulty figuring out *how* from the
> existing documentation...

There is the usual LaTeX documentation, and there are short instructions 
for experts in the header file.  The thorn TAT/SlabTest contains 
examples.  Obviously these are not good enough; I would be glad to 
discuss improvements to the documentation.

One always specifies pointers.  You can either pass a pointer to a grid 
function, or a grid array, or a local array.
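
Where the pointer comes from is up to the caller.  For example (the
grid function name and sizes are just placeholders):

  #include <stdlib.h>
  #include "cctk.h"

  void pointer_sources (const cGH *cctkGH)
  {
    /* pointer to a grid function, time level 0; CCTK_VarDataPtr is
       the standard flesh call */
    CCTK_REAL *gf = CCTK_VarDataPtr (cctkGH, 0, "ADMBase::gxx");

    /* pointer to a plain local C array */
    CCTK_REAL *local = malloc (17 * 19 * 21 * sizeof *local);

    /* either of gf or local can be passed as the slab source or
       destination pointer */

    free (local);
  }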

> * The ability to move an arbitrary slab in a single API call,
> regardless of how this is distributed across processors in the grid
> array case. [The current TAT/Slab API requires that each
>  	processor's chunk of the slab itself be a contiguous
>  	slab.]

No, they do not have to be contiguous.  A grid array has a logical 
index space that is distributed across processors, and the slab has to 
be contiguous, up to a stride, in that logical index space.  (This is 
why it is called a "slab".)  The memory locations on the individual 
processors do not have to be contiguous.
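
In other words, thinking of the usual off/len/str triplet (the field
names here are from memory, not quoted from the header):

  /* A slab extent along one dimension, in logical index space */
  struct bbox {
    int off;                  /* first logical index */
    int len;                  /* number of selected points */
    int str;                  /* stride in logical index space */
  };

  /* {off=4, len=5, str=2} selects logical indices 4,6,8,10,12.
     Whether these points live on one processor or on several, and
     whether each processor's share is adjacent in memory, does not
     matter to the API. */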

> * The ability to specify multiple (possibly interpatch) slab
> transfers (in general each of a different size), and multiple grid
> arrays, which would all (potentially) be moved in parallel.
>  	[This is a performance optimization, but I think it
>  	may prove important in practice:  For a BSSN evolution
>  	with no symmetries, GZPatchSystem will typically want
>  	to do a set of 24 interpatch slab transfers in parallel,
>  	(bitant symmetry cuts this to 16 interpatch slab transfers,
>  	not all of the same size) with each transfer moving slabs
>  	from 17 grid functions into 17 corresponding local C arrays.
>  	The thought of this all taking 24*17 (16*17) separate MPI
>  	latencies is a bit painful...]

This would indeed be a useful optimisation.  Instead of transferring a 
single slab per call, one would transfer a whole list of slabs at once.
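
Such an interface does not exist yet; a sketch of what it could look
like (all names here are hypothetical):

  /* One entry per slab transfer */
  struct slab_xfer {
    const struct xferinfo *info;   /* slab descriptor */
    int srctype, dsttype;          /* CCTK_VARIABLE_* type codes */
    const void *srcptr;
    void *dstptr;
  };

  /* Perform all transfers in one communication phase, paying the MPI
     latency once instead of once per slab */
  int Slab_MultiTransfer (const cGH *cctkGH, int dim,
                          int ntransfers,
                          const struct slab_xfer *xfers);

Your 24 transfers of 17 grid functions each would then become a single
call with 24*17 = 408 entries, but only one latency.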

Regarding painful thoughts, I prefer the following order: (1) make it 
work reliably using simple methods, (2) profile, (3) find the 
bottleneck if there is any, (4) improve efficiency.  The painful 
thoughts enter only in stage 3; everything before that is only 
speculation.  See "http://c2.com/cgi/wiki?PrematureOptimization".

> Once he finishes getting global interpolation to grok multipatch, I
> hope Steve White will be able to start working on such a
> hyperslabber.

I hope that he and you and I together have a closer look at the 
existing one first.

-erik

-- 
Erik Schnetter <schnetter at aei.mpg.de>   http://www.aei.mpg.de/~eschnett/

My email is as private as my paper mail.  I therefore support encrypting
and signing email messages.  Get my PGP key from www.keyserver.net.