pyop2 user documentation

pyop2 Package

PyOP2 is a library for parallel computations on unstructured meshes that delivers performance portability across a range of platforms:

  • multi-core CPU (sequential, OpenMP, OpenCL and MPI)
  • GPU (CUDA and OpenCL)

Initialization and finalization

pyop2.init(**kwargs)

Initialise PyOP2: select the backend and potentially other configuration options.

Parameters:
  • backend – Set the hardware-specific backend. Current choices are "sequential", "openmp", "opencl", "cuda".
  • debug – The level of debugging output.
  • comm – The MPI communicator to use for parallel communication, defaults to MPI_COMM_WORLD
  • log_level – The log level. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL

For debugging purposes, init accepts all keyword arguments accepted by the PyOP2 Configuration object, see Configuration.__init__() for details of further accepted options.

Note

Calling init again with a different backend raises an exception. Changing the backend is not possible. Calling init again with the same backend or not specifying a backend will update the configuration. Calling init after exit has been called is an error and will raise an exception.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

pyop2.exit()

Exit OP2 and clean up

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Data structures

class pyop2.Set(size=None, name=None, halo=None)

OP2 set.

Parameters:
  • size (integer or list of four integers.) – The size of the set.
  • dim (integer or tuple of integers) – The shape of the data associated with each element of this Set.
  • name (string) – The name of the set (optional).
  • halo – An existing halo to use (optional).

When the set is employed as an iteration space in a pyop2.op2.par_loop(), the extent of any local iteration space within each set entry is indicated in brackets. See the example in pyop2.op2.par_loop() for more details.

The size of the set can either be an integer, or a list of four integers. The latter case is used for running in parallel where we distinguish between:

  • CORE (owned and not touching halo)
  • OWNED (owned, touching halo)
  • EXECUTE HALO (not owned, but executed over redundantly)
  • NON EXECUTE HALO (not owned, read when executing in the execute halo)

If a single integer is passed, we assume that we’re running in serial and there is no distinction.

The division of set elements is:

[0, CORE)
[CORE, OWNED)
[OWNED, EXECUTE HALO)
[EXECUTE HALO, NON EXECUTE HALO).

Halo send/receive data is stored on sets in a Halo.
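
The division above can be sketched in plain Python (an illustration of the numbering scheme only, not PyOP2 API; the four integers are taken to be cumulative sizes):

```python
def set_partitions(sizes):
    """Given the four cumulative sizes [CORE, OWNED, EXECUTE HALO,
    NON EXECUTE HALO], return the half-open index range of each region."""
    core, owned, exec_halo, non_exec_halo = sizes
    return {
        "core": (0, core),
        "owned": (core, owned),
        "exec halo": (owned, exec_halo),
        "non-exec halo": (exec_halo, non_exec_halo),
    }

# A set with 4 core elements, 2 further owned elements, 3 execute-halo
# elements and 1 non-execute-halo element:
set_partitions([4, 6, 9, 10])
# → {'core': (0, 4), 'owned': (4, 6), 'exec halo': (6, 9), 'non-exec halo': (9, 10)}
```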

all_part
core_part
core_size

Core set size. Owned elements not touching halo elements.

exec_part
exec_size

Set size including execute halo elements.

If a ParLoop is indirect, we do redundant computation by executing over these set elements as well as owned ones.

classmethod fromhdf5(f, name)

Construct a Set from the set named name in HDF5 data f

halo

Halo associated with this Set

layers

Return None (not an ExtrudedSet).

name

User-defined label

owned_part
partition_size

Default partition size

size

Set size, owned elements.

sizes

Set sizes: core, owned, execute halo, total.

total_size

Total set size, including halo elements.

class pyop2.ExtrudedSet(size=None, name=None, halo=None)

OP2 extruded set.

Parameters:
  • size (integer or list of four integers.) – The size of the set.
  • dim (integer or tuple of integers) – The shape of the data associated with each element of this Set.
  • name (string) – The name of the set (optional).
  • halo – An existing halo to use (optional).

When the set is employed as an iteration space in a pyop2.op2.par_loop(), the extent of any local iteration space within each set entry is indicated in brackets. See the example in pyop2.op2.par_loop() for more details.

The size of the set can either be an integer, or a list of four integers. The latter case is used for running in parallel where we distinguish between:

  • CORE (owned and not touching halo)
  • OWNED (owned, touching halo)
  • EXECUTE HALO (not owned, but executed over redundantly)
  • NON EXECUTE HALO (not owned, read when executing in the execute halo)

If a single integer is passed, we assume that we’re running in serial and there is no distinction.

The division of set elements is:

[0, CORE)
[CORE, OWNED)
[OWNED, EXECUTE HALO)
[EXECUTE HALO, NON EXECUTE HALO).

Halo send/receive data is stored on sets in a Halo.

all_part
core_part
core_size

Core set size. Owned elements not touching halo elements.

exec_part
exec_size

Set size including execute halo elements.

If a ParLoop is indirect, we do redundant computation by executing over these set elements as well as owned ones.

classmethod fromhdf5(f, name)

Construct a Set from the set named name in HDF5 data f

halo

Halo associated with this Set

layers

The number of layers in this extruded set.

name

User-defined label

owned_part
partition_size

Default partition size

size

Set size, owned elements.

sizes

Set sizes: core, owned, execute halo, total.

total_size

Total set size, including halo elements.

class pyop2.Subset(superset, indices)

OP2 subset.

Parameters:
  • superset (a Set or a Subset.) – The superset of the subset.
  • indices (a list of integers, or a numpy array.) – Elements of the superset that form the subset. Duplicate values are removed when constructing the subset.
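
The duplicate removal described above can be mimicked in plain Python (an illustrative sketch only; sorting is added here for determinism and is not claimed to be PyOP2 behaviour):

```python
def normalise_indices(indices):
    """Remove duplicate indices, as Subset construction is described to
    do; sorting is added only to make the result deterministic."""
    return sorted(set(indices))

normalise_indices([3, 1, 3, 2, 1])  # → [1, 2, 3]
```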
all_part
core_part
core_size

Core set size. Owned elements not touching halo elements.

exec_part
exec_size

Set size including execute halo elements.

If a ParLoop is indirect, we do redundant computation by executing over these set elements as well as owned ones.

classmethod fromhdf5(f, name)

Construct a Set from the set named name in HDF5 data f

halo

Halo associated with this Set

indices

Returns the indices pointing into the superset.

layers

The number of layers in this extruded set.

name

User-defined label

owned_part
parent
partition_size

Default partition size

size

Set size, owned elements.

sizes

Set sizes: core, owned, execute halo, total.

superset

Returns the superset Set

total_size

Total set size, including halo elements.

class pyop2.MixedSet(sets)

A container for a bag of Sets.

Parameters:sets (iterable) – Iterable of Sets or ExtrudedSets
all_part
core_part
core_size

Core set size. Owned elements not touching halo elements.

exec_part
exec_size

Set size including execute halo elements.

classmethod fromhdf5(f, name)

Construct a Set from the set named name in HDF5 data f

halo

Halos associated with these Sets.

layers

Numbers of layers in the extruded mesh (or None if this MixedSet is not extruded).

name

User-defined labels.

owned_part
partition_size

Default partition size

size

Set size, owned elements.

sizes

Set sizes: core, owned, execute halo, total.

split

The underlying tuple of Sets.

total_size

Total set size, including halo elements.

class pyop2.DataSet(iter_set, dim=1, name=None)

PyOP2 Data Set

Set used in the op2.Dat structures to specify the dimension of the data.

cdim

The scalar number of values for each member of the set. This is the product of the dim tuple.

dim

The shape tuple of the values for each element of the set.

name

Returns the name of the data set.

set

Returns the parent set of the data set.
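
The relationship between dim and cdim can be sketched in plain Python (not PyOP2 API; the dim tuples below are hypothetical):

```python
from functools import reduce
from operator import mul

def cdim(dim):
    """cdim is the product of the entries of the dim shape tuple."""
    return reduce(mul, dim, 1)

cdim((1,))    # scalar data, 1 value per set element  → 1
cdim((3,))    # vector data, 3 values per set element → 3
cdim((2, 2))  # tensor data, 4 values per set element → 4
```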

class pyop2.MixedDataSet(arg, dims=None)

A container for a bag of DataSets.

Initialized either from a MixedSet and an iterable or iterator of dims of corresponding length

mdset = op2.MixedDataSet(mset, [dim1, ..., dimN])

or from a tuple of Sets and an iterable of dims of corresponding length

mdset = op2.MixedDataSet([set1, ..., setN], [dim1, ..., dimN])

If all dims are to be the same, they can also be given as an int for either of the above invocations

mdset = op2.MixedDataSet(mset, dim)
mdset = op2.MixedDataSet([set1, ..., setN], dim)

If initialized from a MixedSet without explicitly specifying dims, they default to 1

mdset = op2.MixedDataSet(mset)

Initialized from an iterable or iterator of DataSets and/or Sets, where Sets are implicitly upcast to DataSets of dim 1

mdset = op2.MixedDataSet([dset1, ..., dsetN])
Parameters:
  • arg – a MixedSet or an iterable or a generator expression of Sets or DataSets or a mixture of both
  • dims – None (the default), an int, or an iterable or generator expression of ints, which must be of the same length as arg

Warning

When using generator expressions for arg or dims, these must terminate or else they will cause an infinite loop.

cdim

The sum of the scalar number of values for each member of the sets. This is the sum of products of the dim tuples.

dim

The shape tuple of the values for each element of the sets.

name

Returns the name of the data sets.

set

Returns the MixedSet this MixedDataSet is defined on.

split

The underlying tuple of DataSets.

class pyop2.Map(iterset, toset, arity, values=None, name=None, offset=None, parent=None, bt_masks=None)

OP2 map, a relation between two Set objects.

Each entry in the iterset maps to arity entries in the toset. When a map is used in a pyop2.op2.par_loop(), it is possible to use Python index notation to select an individual entry on the right hand side of this map. There are three possibilities:

  • No index. All arity Dat entries will be passed to the kernel.
  • An integer: some_map[n]. The n th entry of the map result will be passed to the kernel.
  • An IterationIndex, some_map[pyop2.i[n]]. n will take each value from 0 to e-1 where e is the n th extent passed to the iteration space for this pyop2.op2.par_loop(). See also i.

For extruded problems (where iterset is an ExtrudedSet) with boundary conditions applied at the top and bottom of the domain, one needs to provide a list of which of the arity values in each map entry correspond to values on the bottom boundary and which correspond to the top. This is done by supplying two lists of indices in bt_masks, the first provides indices for the bottom, the second for the top.
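
The indirection a Map encodes can be illustrated with a plain-Python sketch of the semantics (not the PyOP2 implementation; the map and data below are hypothetical):

```python
# A map of arity 3 from a set of 2 triangles to a set of 4 vertices:
# entry i lists the toset (vertex) indices reached from iterset element i.
elem_node = [[0, 1, 2],
             [1, 3, 2]]

# A Dat of dim 1 on the vertex set.
vertex_data = [10.0, 11.0, 12.0, 13.0]

def gather(map_values, dat, elem):
    """All `arity` Dat entries reached from `elem` (the "no index" case)."""
    return [dat[j] for j in map_values[elem]]

gather(elem_node, vertex_data, 1)     # → [11.0, 13.0, 12.0]
gather(elem_node, vertex_data, 0)[2]  # the some_map[2]-style single entry → 12.0
```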

arange

Tuple of arity offsets for each constituent Map.

arities

Arity of the mapping: number of toset elements mapped to per iterset element.

Return type:tuple
arity

Arity of the mapping: number of toset elements mapped to per iterset element.

bottom_mask

The bottom layer mask to be applied on a mesh cell.

classmethod fromhdf5(iterset, toset, f, name)

Construct a Map from the set named name in HDF5 data f

implicit_bcs

Return any implicit (extruded “top” or “bottom”) bcs to apply to this Map. Normally empty except in the case of some DecoratedMaps.

iteration_region

Return the iteration region for the current map. For a normal map it will always be ALL. For a DecoratedMap it will specify over which mesh region the iteration will take place.

iterset

Set mapped from.

name

User-defined label

offset

The vertical offset.

split
top_mask

The top layer mask to be applied on a mesh cell.

toset

Set mapped to.

values

Mapping array.

This only returns the map values for local points; to see the halo points too, use values_with_halo().

values_with_halo

Mapping array.

This returns all map values (including halo points); see values() if you only need to look at the local points.

class pyop2.MixedMap(maps)

A container for a bag of Maps.

Parameters:maps (iterable) – Iterable of Maps
arange

Tuple of arity offsets for each constituent Map.

arities

Arity of the mapping: number of toset elements mapped to per iterset element.

Return type:tuple
arity

Arity of the mapping: total number of toset elements mapped to per iterset element.

bottom_mask

The bottom layer mask to be applied on a mesh cell.

classmethod fromhdf5(iterset, toset, f, name)

Construct a Map from the set named name in HDF5 data f

implicit_bcs

Return any implicit (extruded “top” or “bottom”) bcs to apply to this Map. Normally empty except in the case of some DecoratedMaps.

iteration_region

Return the iteration region for the current map. For a normal map it will always be ALL. For a DecoratedMap it will specify over which mesh region the iteration will take place.

iterset

MixedSet mapped from.

name

User-defined labels

offset

Vertical offsets.

split

The underlying tuple of Maps.

top_mask

The top layer mask to be applied on a mesh cell.

toset

MixedSet mapped to.

values

Mapping arrays excluding data for halos.

This only returns the map values for local points; to see the halo points too, use values_with_halo().

values_with_halo

Mapping arrays including data for halos.

This returns all map values (including halo points); see values() if you only need to look at the local points.

class pyop2.Sparsity(dsets, maps, name=None)

OP2 Sparsity, the non-zero structure of a matrix derived from the union of the outer products of pairs of Map objects.

Examples of constructing a Sparsity:

Sparsity(single_dset, single_map, 'mass')
Sparsity((row_dset, col_dset), (single_rowmap, single_colmap))
Sparsity((row_dset, col_dset),
         [(first_rowmap, first_colmap), (second_rowmap, second_colmap)])
Parameters:
  • dsets – DataSets for the left and right function spaces this Sparsity maps between
  • maps (a pair of Maps specifying a row map and a column map, or an iterable of pairs of Maps specifying multiple row and column maps - if a single Map is passed, it is used as both a row map and a column map) – Maps to build the Sparsity from
  • name (string) – user-defined label (optional)
cmaps

The list of column maps this sparsity is assembled from.

colidx

Column indices array of CSR data structure.

dims

A pair giving the number of rows per entry of the row Set and the number of columns per entry of the column Set of the Sparsity.

dsets

A pair of DataSets for the left and right function spaces this Sparsity maps between.

maps

A list of pairs (rmap, cmap) where each pair of Map objects will later be used to assemble into this matrix. The iterset of the two maps in each pair must be the same. The toset of all the maps which appear first must be common; this forms the row Set of the Sparsity. Similarly, the toset of all the maps which appear second must be common and forms the column Set of the Sparsity.

name

A user-defined label.

ncols

The number of columns in the Sparsity.

nnz

Array containing the number of non-zeroes in the various rows of the diagonal portion of the local submatrix.

This is the same as the parameter d_nnz used for preallocation in PETSc’s MatMPIAIJSetPreallocation.

nrows

The number of rows in the Sparsity.

nz

Number of non-zeroes in the diagonal portion of the local submatrix.

onnz

Array containing the number of non-zeroes in the various rows of the off-diagonal portion of the local submatrix.

This is the same as the parameter o_nnz used for preallocation in PETSc’s MatMPIAIJSetPreallocation.
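
The split between nnz and onnz can be sketched in plain Python (illustrative only; the row data and column ownership range below are hypothetical). For each local row we count non-zeroes whose column lies inside the locally-owned column range (d_nnz) and outside it (o_nnz), the two arrays PETSc's MatMPIAIJSetPreallocation expects:

```python
def preallocation_counts(row_cols, owned_col_range):
    """Per-row counts of diagonal-block (d_nnz) and off-diagonal-block
    (o_nnz) non-zeroes, given each row's column indices."""
    lo, hi = owned_col_range
    d_nnz, o_nnz = [], []
    for cols in row_cols:
        d = sum(lo <= c < hi for c in cols)
        d_nnz.append(d)
        o_nnz.append(len(cols) - d)
    return d_nnz, o_nnz

# Two local rows; this process owns columns [0, 3).
preallocation_counts([[0, 1, 5], [2, 4]], (0, 3))  # → ([2, 1], [1, 1])
```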

onz

Number of non-zeroes in the off-diagonal portion of the local submatrix.

rmaps

The list of row maps this sparsity is assembled from.

rowptr

Row pointer array of CSR data structure.

shape

Number of block rows and columns.

class pyop2.Const(dim, data=None, name=None, dtype=None)

Data that is constant for any element of any set.

exception NonUniqueNameError

The names of Const variables are required to be globally unique. This exception is raised if the name is already in use.

args
message
class Const.Snapshot(obj)

Overridden from DataCarrier; a snapshot is always valid as long as the Const object still exists

is_valid()
Const.cdim

The scalar number of values for each member of the object. This is the product of the dim tuple.

Const.create_snapshot()

Returns a snapshot of the current object. If not overridden, this method will return a full duplicate object.

Const.ctype

The c type of the data.

Const.data

Data array.

Const.dim

The shape tuple of the values for each element of the object.

Const.dtype

The Python type of the data.

Const.duplicate()

A Const duplicate can always refer to the same data vector, since it’s read-only

classmethod Const.fromhdf5(f, name)

Construct a Const from const named name in HDF5 data f

Const.name

User-defined label.

Const.remove_from_namespace()

Remove this Const object from the namespace

This allows the same name to be redeclared with a different shape.

class pyop2.Global(dim, data=None, dtype=None, name=None)

OP2 global value.

When a Global is passed to a pyop2.op2.par_loop(), the access descriptor is passed by calling the Global. For example, if a Global named G is to be accessed for reading, this is accomplished by:

G(pyop2.READ)

It is permissible to pass None as the data argument. In this case, allocation of the data buffer is postponed until it is accessed.

Note

If the data buffer is not passed in, it is implicitly initialised to be zero.

class Snapshot(obj)

A snapshot of the current state of the DataCarrier object. If is_valid() returns True, then the object hasn’t changed since this snapshot was taken (and still exists).

is_valid()
Global.cdim

The scalar number of values for each member of the object. This is the product of the dim tuple.

Global.create_snapshot()

Returns a snapshot of the current object. If not overridden, this method will return a full duplicate object.

Global.ctype

The c type of the data.

Global.data

Data array.

Global.data_ro

Data array.

Global.dim

The shape tuple of the values for each element of the object.

Global.dtype
Global.duplicate()

Return a deep copy of self.

Global.name

User-defined label.

Global.nbytes

Return an estimate of the size of the data associated with this Global in bytes. This will be the correct size of the data payload, but does not take into account the overhead of the object and its metadata; it is included chiefly to keep the interface consistent.

Global.shape
Global.soa

Are the data in SoA format? This is always false for Global objects.

class pyop2.Dat(dataset, data=None, dtype=None, name=None, soa=None, uid=None)

OP2 vector data. A Dat holds values on every element of a DataSet.

If a Set is passed as the dataset argument, rather than a DataSet, the Dat is created with a default DataSet dimension of 1.

If a Dat is passed as the dataset argument, a copy is returned.

It is permissible to pass None as the data argument. In this case, allocation of the data buffer is postponed until it is accessed.

Note

If the data buffer is not passed in, it is implicitly initialised to be zero.

When a Dat is passed to pyop2.op2.par_loop(), the map via which indirection occurs and the access descriptor are passed by calling the Dat. For instance, if a Dat named D is to be accessed for reading via a Map named M, this is accomplished by

D(pyop2.READ, M)

The Map through which indirection occurs can be indexed using the index notation described in the documentation for the Map. Direct access to a Dat is accomplished by omitting the path argument.

Dat objects support the pointwise linear algebra operations +=, *=, -=, /=, where *= and /= also support multiplication / division by a scalar.

class Snapshot(obj)

A snapshot for SetAssociated objects is valid if the snapshot version is the same as the current version of the object

is_valid()
Dat.cdim

The scalar number of values for each member of the object. This is the product of the dim tuple.

Dat.copy(other, subset=None)

Copy the data in this Dat into another.

Parameters:
  • other – The destination Dat
  • subset – A Subset of elements to copy (optional)

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.create_snapshot()

Returns a snapshot of the current object. If not overridden, this method will return a full duplicate object.

Dat.ctype

The c type of the data.

Dat.data

Numpy array containing the data values.

With this accessor you are claiming that you will modify the values you get back. If you only need to look at the values, use data_ro() instead.

This only shows local values; to see the halo values too, use data_with_halos().

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.data_ro

Numpy array containing the data values. Read-only.

With this accessor you are not allowed to modify the values you get back. If you need to do so, use data() instead.

This only shows local values; to see the halo values too, use data_ro_with_halos().

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.data_ro_with_halos

A view of this Dat's data.

This accessor does not mark the Dat as dirty, and is a read only view, see data_ro() for more details on the semantics.

With this accessor, you get to see up-to-date halo values, but you should not try to modify them, because they will be overwritten by the next halo exchange.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.data_with_halos

A view of this Dat's data.

This accessor marks the Dat as dirty, see data() for more details on the semantics.

With this accessor, you get to see up-to-date halo values, but you should not try to modify them, because they will be overwritten by the next halo exchange.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.dataset

DataSet on which the Dat is defined.

Dat.dim

The shape of the values for each element of the object.

Dat.dtype
Dat.duplicate()
classmethod Dat.fromhdf5(dataset, f, name)

Construct a Dat from a Dat named name in HDF5 data f

Dat.halo_exchange_begin()

Begin halo exchange.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.halo_exchange_end()

End halo exchange. Waits on MPI recv.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

Dat.inner(other)

Compute the l2 inner product of the flattened Dat

Parameters:other – the other Dat to compute the inner product against.
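
The flattening can be sketched in plain Python (illustrative only, with hypothetical dim-2 data; PyOP2 computes this over the actual data arrays):

```python
def flat_inner(a, b):
    """l2 inner product of two Dats' data: flatten the per-element
    shape, then take the ordinary dot product."""
    flat_a = [x for elem in a for x in elem]
    flat_b = [x for elem in b for x in elem]
    return sum(x * y for x, y in zip(flat_a, flat_b))

# Two Dats of dim 2 on a set of two elements:
flat_inner([[1.0, 2.0], [3.0, 4.0]], [[1.0, 0.0], [0.0, 1.0]])  # → 5.0
```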
Dat.load(filename)

Read the data stored in file filename into a NumPy array and store the values in _data().

Dat.name

User-defined label.

Dat.nbytes

Return an estimate of the size of the data associated with this Dat in bytes. This will be the correct size of the data payload, but does not take into account the (presumably small) overhead of the object and its metadata.

Note that this is the process local memory usage, not the sum over all MPI processes.

Dat.needs_halo_update

Has this Dat been written to since the last halo exchange?

Dat.norm

Compute the l2 norm of this Dat

Note

This acts on the flattened data (see also inner()).

Dat.save(filename)

Write the data array to file filename in NumPy format.

Dat.shape
Dat.soa

Are the data in SoA format?

Dat.split

Tuple containing only this Dat.

Dat.zero()

Zero the data associated with this Dat

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

class pyop2.MixedDat(mdset_or_dats)

A container for a bag of Dats.

Initialized either from a MixedDataSet, a MixedSet, or an iterable of DataSets and/or Sets, where all the Sets are implicitly upcast to DataSets

mdat = op2.MixedDat(mdset)
mdat = op2.MixedDat([dset1, ..., dsetN])

or from an iterable of Dats

mdat = op2.MixedDat([dat1, ..., datN])
class Snapshot(obj)

A snapshot for SetAssociated objects is valid if the snapshot version is the same as the current version of the object

is_valid()
MixedDat.cdim

The scalar number of values for each member of the object. This is the product of the dim tuple.

MixedDat.copy(other, subset=None)

Copy the data in this MixedDat into another.

Parameters:
  • other – The destination MixedDat
  • subset – Subsets are not supported, this must be None

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.create_snapshot()

Returns a snapshot of the current object. If not overridden, this method will return a full duplicate object.

MixedDat.ctype

The c type of the data.

MixedDat.data

Numpy arrays containing the data excluding halos.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.data_ro

Numpy arrays with read-only data excluding halos.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.data_ro_with_halos

Numpy arrays with read-only data including halos.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.data_with_halos

Numpy arrays containing the data including halos.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.dataset

The MixedDataSet this MixedDat is defined on.

MixedDat.dim

The shape of the values for each element of the object.

MixedDat.dtype

The NumPy dtype of the data.

MixedDat.duplicate()
classmethod MixedDat.fromhdf5(dataset, f, name)

Construct a Dat from a Dat named name in HDF5 data f

MixedDat.halo_exchange_begin()

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.halo_exchange_end()

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

MixedDat.inner(other)

Compute the l2 inner product.

Parameters:other – the other MixedDat to compute the inner product against
MixedDat.load(filename)

Read the data stored in file filename into a NumPy array and store the values in _data().

MixedDat.name

User-defined label.

MixedDat.nbytes

Return an estimate of the size of the data associated with this MixedDat in bytes. This will be the correct size of the data payload, but does not take into account the (presumably small) overhead of the object and its metadata.

Note that this is the process local memory usage, not the sum over all MPI processes.

MixedDat.needs_halo_update

Has this Dat been written to since the last halo exchange?

MixedDat.norm

Compute the l2 norm of this Dat

Note

This acts on the flattened data (see also inner()).

MixedDat.save(filename)

Write the data array to file filename in NumPy format.

MixedDat.shape
MixedDat.soa

Are the data in SoA format?

MixedDat.split

The underlying tuple of Dats.

MixedDat.zero()

Zero the data associated with this MixedDat.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

class pyop2.Mat(sparsity, dtype=None, name=None)

OP2 matrix data. A Mat is defined on a sparsity pattern and holds a value for each element in the Sparsity.

When a Mat is passed to pyop2.op2.par_loop(), the maps via which indirection occurs for the row and column space, and the access descriptor are passed by calling the Mat. For instance, if a Mat named A is to be accessed for reading via a row Map named R and a column Map named C, this is accomplished by:

A(pyop2.READ, (R[pyop2.i[0]], C[pyop2.i[1]]))

Notice that it is always necessary to index the indirection maps for a Mat. See the Mat documentation for more details.

Note

After executing par_loop()s that write to a Mat and before using it (for example to view its values), you must call assemble() to finalise the writes.

class Snapshot(obj)

A snapshot for SetAssociated objects is valid if the snapshot version is the same as the current version of the object

is_valid()
Mat.assemble()

Finalise this Mat ready for use.

Call this after executing all the par_loops that write to the matrix, before you inspect its values.

Mat.cdim

The scalar number of values for each member of the object. This is the product of the dim tuple.

Mat.create_snapshot()

Returns a snapshot of the current object. If not overridden, this method will return a full duplicate object.

Mat.ctype

The c type of the data.

Mat.dim

The shape tuple of the values for each element of the object.

Mat.dims

A pair of integers giving the number of matrix rows and columns for each member of the row Set and column Set respectively. This corresponds to the cdim member of a DataSet.

Mat.dtype

The Python type of the data.

Mat.name

User-defined label.

Mat.nbytes

Return an estimate of the size of the data associated with this Mat in bytes. This will be the correct size of the data payload, but does not take into account the (presumably small) overhead of the object and its metadata. The memory associated with the sparsity pattern is also not recorded.

Note that this is the process local memory usage, not the sum over all MPI processes.

Mat.sparsity

Sparsity on which the Mat is defined.

Mat.values

A numpy array of matrix values.

Warning

This is a dense array, so will need a lot of memory. It’s probably not a good idea to access this property if your matrix has more than around 10000 degrees of freedom.

Parallel loops, kernels and linear solves

pyop2.par_loop(kernel, iterset, *args, **kwargs)

Invocation of an OP2 kernel

Parameters:
  • kernel – The Kernel to be executed.
  • iterset – The iteration Set over which the kernel should be executed.
  • *args – One or more base.Args constructed from a Global, Dat or Mat using the call syntax, passing in an optionally indexed Map through which this base.Arg is accessed and the base.Access descriptor indicating how the Kernel is going to access this data (see the example below). These are the global data structures from which the kernel will read and to which it will write.
  • iterate

    Optionally specify which region of an ExtrudedSet to iterate over. Valid values are:

    • ON_BOTTOM: iterate over the bottom layer of cells.
    • ON_TOP: iterate over the top layer of cells.
    • ALL: iterate over all cells (the default if unspecified).
    • ON_INTERIOR_FACETS: iterate over all the layers except the top layer, accessing data from two adjacent (in the extruded direction) cells at a time.

Warning

It is the caller’s responsibility that the number and type of all base.Args passed to the par_loop() match those expected by the Kernel. No runtime check is performed to ensure this!

If a par_loop() argument indexes into a Map using an base.IterationIndex, this implies the use of a local base.IterationSpace of a size given by the arity of the Map. It is an error to have several arguments using local iteration spaces of different size.

par_loop() invocation is illustrated by the following example

pyop2.par_loop(mass, elements,
               mat(pyop2.INC, (elem_node[pyop2.i[0]], elem_node[pyop2.i[1]])),
               coords(pyop2.READ, elem_node))

This example will execute the Kernel mass over the Set elements, executing 3x3 times for each Set member, assuming the Map elem_node is of arity 3. The Kernel takes four arguments: the first is a Mat named mat, the second is a field named coords, and the remaining two indicate which local iteration space point the kernel is executing.

A Mat requires a pair of Map objects, one each for the row and column spaces. In this case both are the same elem_node map. The row Map is indexed by the first index in the local iteration space, indicated by the 0 index to pyop2.i, while the column space is indexed by the second local index. The matrix is accessed to increment values using the pyop2.INC access descriptor.

The coords Dat is also accessed via the elem_node Map, however no indices are passed so all entries of elem_node for the relevant member of elements will be passed to the kernel as a vector.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

pyop2.solve(A, x, b)

Solve a matrix equation using the default Solver

Parameters:
  • A – The Mat containing the matrix.
  • x – The Dat to receive the solution.
  • b – The Dat containing the RHS.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

class pyop2.Kernel(code, name, opts={}, include_dirs=[], headers=[], user_code='')

OP2 kernel type.

Parameters:
  • code – kernel function definition, including signature; either a string or an AST Node
  • name – kernel function name; must match the name of the kernel function given in code
  • opts – options dictionary for PyOP2 IR optimisations (optional, ignored if code is a string)
  • include_dirs – list of additional include directories to be searched when compiling the kernel (optional, defaults to empty)
  • headers – list of system headers to include when compiling the kernel in the form #include <header.h> (optional, defaults to empty)
  • user_code – code snippet to be executed once at the very start of the generated kernel wrapper code (optional, defaults to empty)

Consider the case of initialising a Dat with seeded random values in the interval 0 to 1. The corresponding Kernel is constructed as follows:

op2.Kernel("void setrand(double *x) { x[0] = (double)random()/RAND_MAX; }",
           name="setrand",
           headers=["#include <stdlib.h>"], user_code="srandom(10001);")
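The execution semantics of user_code versus the kernel body can be sketched in plain Python: the seeding runs once before the loop, while the kernel body runs once per set element. This is an illustrative analogue, not PyOP2 code:

```python
# Illustrative only: plain-Python analogue of the setrand kernel above.
# srandom(10001) in user_code corresponds to seeding once, before the
# per-element kernel body runs for every set member.
import random

random.seed(10001)                 # user_code: executed once

def setrand(x):                    # kernel body: executed per element
    x[0] = random.random()         # value in [0, 1), like random()/RAND_MAX

dat = [[0.0] for _ in range(4)]    # one entry per set element
for x in dat:
    setrand(x)

assert all(0.0 <= x[0] < 1.0 for x in dat)
```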

Note

When running in parallel with MPI the generated code must be the same on all ranks.

cache_key

Cache key.

code

String containing the C code for this kernel routine. This code must conform to the OP2 user kernel API.

name

Kernel name, must match the kernel function name in the code.

class pyop2.Solver(parameters=None, **kwargs)

OP2 Solver object. The Solver holds a set of parameters that are passed to the underlying linear algebra library when the solve method is called. These can be passed either as a parameters dictionary or as individual keyword arguments (combining both will cause an exception).

Recognized parameters either as dictionary keys or keyword arguments are:

Parameters:
  • ksp_type – the solver type (‘cg’)
  • pc_type – the preconditioner type (‘jacobi’)
  • ksp_rtol – relative solver tolerance (1e-7)
  • ksp_atol – absolute solver tolerance (1e-50)
  • ksp_divtol – factor by which the residual norm may exceed the right-hand-side norm before the solve is considered to have diverged: norm(r) >= dtol*norm(b) (1e4)
  • ksp_max_it – maximum number of solver iterations (10000)
  • error_on_nonconvergence – abort if the solve does not converge in the maximum number of iterations (True, if False only a warning is printed)
  • ksp_monitor – print the residual norm after each iteration (False)
  • plot_convergence – plot a graph of the convergence history after the solve has finished and save it to file (False, implies ksp_monitor)
  • plot_prefix – filename prefix for plot files (‘’)
  • ksp_gmres_restart – restart period when using GMRES
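The rule that a parameters dictionary and keyword arguments are mutually exclusive can be sketched in plain Python. This is an illustrative sketch of the documented convention, not the actual PyOP2 implementation:

```python
# Illustrative only: sketch of the dictionary-or-kwargs convention
# described above. Defaults shown are the documented ones.

DEFAULTS = {"ksp_type": "cg", "pc_type": "jacobi",
            "ksp_rtol": 1e-7, "ksp_atol": 1e-50,
            "ksp_divtol": 1e4, "ksp_max_it": 10000}

def make_parameters(parameters=None, **kwargs):
    if parameters is not None and kwargs:
        # combining both forms is an error
        raise RuntimeError("Cannot combine parameters dict with kwargs")
    merged = dict(DEFAULTS)
    merged.update(parameters or kwargs)
    return merged

params = make_parameters(ksp_type="gmres")
print(params["ksp_type"])  # gmres
```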

solve(A, x, b)

Solve a matrix equation.

Parameters:
  • A – The Mat containing the matrix.
  • x – The Dat to receive the solution.
  • b – The Dat containing the RHS.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

update_parameters(parameters)

Update solver parameters

Parameters:parameters – Dictionary containing the parameters to update.

This function is logically collective over MPI ranks; it is an error to call it on fewer than all the ranks in the MPI communicator.

pyop2.i = IterationIndex(None)

Shorthand for constructing IterationIndex objects.

i[idx] builds an IterationIndex object for which the index property is idx.
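This shorthand can be mimicked with a small illustrative class whose indexing builds a new object carrying the requested index. This sketch is not the actual PyOP2 implementation:

```python
# Illustrative only: a minimal object whose indexing mimics pyop2.i.
class IterationIndex:
    def __init__(self, index=None):
        self.index = index

    def __getitem__(self, idx):
        # i[idx] returns a new IterationIndex whose index property is idx
        return IterationIndex(idx)

i = IterationIndex(None)
print(i[0].index, i[1].index)  # 0 1
```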

pyop2.READ = Access('READ')

The Global, Dat, or Mat is accessed read-only.

pyop2.WRITE = Access('WRITE')

The Global, Dat, or Mat is accessed write-only, and OP2 is not required to handle write conflicts.

pyop2.RW = Access('RW')

The Global, Dat, or Mat is accessed for reading and writing, and OP2 is not required to handle write conflicts.

pyop2.INC = Access('INC')

The kernel computes increments to be summed onto a Global, Dat, or Mat. OP2 is responsible for managing the write conflicts caused.

pyop2.MIN = Access('MIN')

The kernel contributes to a reduction into a Global using a min operation. OP2 is responsible for reducing over the different kernel invocations.

pyop2.MAX = Access('MAX')

The kernel contributes to a reduction into a Global using a max operation. OP2 is responsible for reducing over the different kernel invocations.
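The reduction semantics of INC, MIN and MAX can be sketched in plain Python: each kernel invocation produces a contribution, and the runtime is responsible for combining them. This is an illustrative sketch, not PyOP2 code:

```python
# Illustrative only: how contributions from several kernel invocations
# are combined for a Global under each reduction access descriptor.
contributions = [3.0, -1.0, 4.0]  # one contribution per kernel invocation

inc_result = sum(contributions)   # INC: increments are summed
min_result = min(contributions)   # MIN: reduce with the minimum
max_result = max(contributions)   # MAX: reduce with the maximum

print(inc_result, min_result, max_result)  # 6.0 -1.0 4.0
```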
