Processor Decomposition

By default PUGH will distribute the computational grid evenly across all processors (as in Figure I1.2a). This may not be efficient if there is a different computational load on different processors, or, for example, for a simulation distributed across processors with different per-processor performance.
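The default requires no decomposition parameters at all. As a minimal, purely illustrative parameter-file sketch (the grid size of 30 matches the example further below, and "automatic" is assumed here to be the default value of PUGH::processor_topology):

  # rely on PUGH's automatic decomposition (the default)
  PUGH::global_nsize       = 30            # 30^3 grid points
  PUGH::processor_topology = "automatic"   # assumed default, shown explicitly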
The computational grid can be manually partitioned in each direction in a regular way, as in Figure I1.2b.
The computational grid can also be manually distributed using PUGH's string parameters PUGH::partition and PUGH::partition_3d_x (and the corresponding parameters for the other directions). The decomposition is easiest to explain with a simple example: to distribute a 30-cubed grid across 4 processors (decomposed as 2 x 1 x 2), you would use the following topology and partition parameter settings:
  # the overall grid size
  PUGH::global_nsize = 30

  # processor topology
  PUGH::processor_topology      = "manual"
  PUGH::processor_topology_3d_x = 2
  PUGH::processor_topology_3d_y = 1
  PUGH::processor_topology_3d_z = 2    # redundant

  # grid partitioning
  PUGH::partition      = "manual"
  PUGH::partition_3d_x = "20 10"
Each partition parameter lists the number of grid points for every processor in that direction, with the numbers delimited by any non-digit characters. Note that an empty string for a direction (which is the default value for the partition parameters) will apply the automatic distribution in that direction. That is why it is not necessary to set the partition parameters for the y and z directions in the example above.
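To illustrate both points, the sketch below partitions only the x direction manually, using a comma as the non-digit delimiter, and leaves the other directions at their default empty strings so they are distributed automatically (the parameter names PUGH::partition_3d_y and PUGH::partition_3d_z are assumed by analogy with PUGH::partition_3d_x):

  # grid partitioning: manual in x, automatic in y and z
  PUGH::partition      = "manual"
  PUGH::partition_3d_x = "20,10"   # any non-digit delimiter separates the counts
  # PUGH::partition_3d_y and PUGH::partition_3d_z keep their default ""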
Because the previous automatic distribution gave problems in some cases (e.g. a box that is very long in one direction but short in the others), there is now an improved algorithm that tries to do a better job of decomposing the grid evenly across the processors. However, it can fail in certain situations, in which case it gracefully falls back to the previous decomposition scheme.