%% Cell type:code id: tags:
``` python
from pystencils.session import *
```
%% Cell type:markdown id: tags:
# Tutorial 02: Basic Kernel generation with *pystencils*
Now that you have an [overview of pystencils](01_tutorial_getting_started.ipynb),
this tutorial shows in more detail how to formulate, optimize and run stencil kernels.
## 1) Kernel Definition
### a) Defining kernels with assignment lists and the `kernel` decorator
*pystencils* gets a symbolic formulation of the kernel. This can be either an `Assignment` or a sequence of `Assignment`s that follow a set of restrictions.
Let's first create a kernel that consists of multiple assignments:
%% Cell type:code id: tags:
``` python
src_arr = np.zeros([20, 30])
dst_arr = np.zeros_like(src_arr)

dst, src = ps.fields(dst=dst_arr, src=src_arr)
```
%% Cell type:code id: tags:
``` python
grad_x, grad_y = sp.symbols("grad_x, grad_y")

symbolic_description = [
    ps.Assignment(grad_x, (src[1, 0] - src[-1, 0]) / 2),
    ps.Assignment(grad_y, (src[0, 1] - src[0, -1]) / 2),
    ps.Assignment(dst[0, 0], grad_x + grad_y),
]
kernel = ps.create_kernel(symbolic_description)
symbolic_description
```
%% Output
$\displaystyle \left[ grad_{x} \leftarrow_{} \frac{{src}_{(1,0)}}{2} - \frac{{src}_{(-1,0)}}{2}, \ grad_{y} \leftarrow_{} \frac{{src}_{(0,1)}}{2} - \frac{{src}_{(0,-1)}}{2}, \ {dst}_{(0,0)} \leftarrow_{} grad_{x} + grad_{y}\right]$
%% Cell type:markdown id: tags:
We created subexpressions, using standard sympy symbols on the left hand side, to split the kernel into multiple assignments. Defining a kernel using a list of `Assignment`s is quite tedious and hard to read.
To simplify the formulation of a kernel, *pystencils* offers the `kernel` decorator, which transforms a normal Python function with `@=` assignments into an assignment list that can be passed to `create_kernel`.
%% Cell type:code id: tags:
``` python
@ps.kernel
def symbolic_description_using_function():
    grad_x @= (src[1, 0] - src[-1, 0]) / 2
    grad_y @= (src[0, 1] - src[0, -1]) / 2
    dst[0, 0] @= grad_x + grad_y

symbolic_description_using_function
```
%% Output
$\displaystyle \left[ grad_{x} \leftarrow_{} \frac{{src}_{(1,0)}}{2} - \frac{{src}_{(-1,0)}}{2}, \ grad_{y} \leftarrow_{} \frac{{src}_{(0,1)}}{2} - \frac{{src}_{(0,-1)}}{2}, \ {dst}_{(0,0)} \leftarrow_{} grad_{x} + grad_{y}\right]$
%% Cell type:markdown id: tags:
The decorated function can contain any Python code; only the `@=` operator and the ternary inline `if-else` operator have a different meaning.
### b) Ternary 'if' with `Piecewise`
The ternary operator maps to `sympy.Piecewise` functions that can be used to introduce branching into the kernel. Piecewise-defined functions must give a value for every input, i.e. there must be an 'otherwise' clause at the end, indicated by the condition `True`. Piecewise objects are standard sympy terms that can be integrated into bigger expressions:
%% Cell type:code id: tags:
``` python
sp.Piecewise((1.0, src[0, 1] > 0), (0.0, True)) + src[1, 0]
```
%% Output
$\displaystyle {src}_{(1,0)} + \begin{cases} 1.0 & \text{for}\: {src}_{(0,1)} > 0 \\0.0 & \text{otherwise} \end{cases}$
%% Cell type:markdown id: tags:
Piecewise objects are created by the `kernel` decorator for ternary if-else statements.
%% Cell type:code id: tags:
``` python
@ps.kernel
def kernel_with_piecewise():
    grad_x @= (src[1, 0] - src[-1, 0]) / 2 if src[-1, 0] > 0 else 0.0

kernel_with_piecewise
```
%% Output
$\displaystyle \left[ grad_{x} \leftarrow_{} \begin{cases} \frac{{src}_{(1,0)}}{2} - \frac{{src}_{(-1,0)}}{2} & \text{for}\: {src}_{(-1,0)} > 0 \\0.0 & \text{otherwise} \end{cases}\right]$
%% Cell type:markdown id: tags:
### c) Assignment level optimizations using `AssignmentCollection`
When kernels get larger and more complex, it is helpful to organize the list of assignments in a more structured way. The `AssignmentCollection` offers optimizing transformations on a list of assignments. It holds two assignment lists, one for subexpressions and one for the main assignments. Main assignments are typically those that write to an array.
%% Cell type:code id: tags:
``` python
@ps.kernel
def somewhat_longer_dummy_kernel(s):
    s.a @= src[0, 1] + src[-1, 0]
    s.b @= 2 * src[1, 0] + src[0, -1]
    s.c @= src[0, 1] + 2 * src[1, 0] + src[-1, 0] + src[0, -1] - src[0, 0]
    dst[0, 0] @= s.a + s.b + s.c

ac = ps.AssignmentCollection(main_assignments=somewhat_longer_dummy_kernel[-1:],
                             subexpressions=somewhat_longer_dummy_kernel[:-1])
ac
```
%% Output
AssignmentCollection: dst_C, <- f(src_C, src_W, src_S, src_N, src_E)
%% Cell type:code id: tags:
``` python
ac.operation_count
```
%% Output
{'adds': 8,
 'muls': 2,
 'divs': 0,
 'sqrts': 0,
 'fast_sqrts': 0,
 'fast_inv_sqrts': 0,
 'fast_div': 0}
%% Cell type:markdown id: tags:
The `pystencils.simp` submodule offers several functions to optimize a collection of assignments.
It also offers functionality to group optimizations into strategies and evaluate them (a small strategy sketch follows the example below).
In this example we reduce the number of operations by reusing existing subexpressions to get rid of two unnecessary floating point additions. For more information about assignment collections and simplifications see the [demo notebook](demo_assignment_collection.ipynb).
%% Cell type:code id: tags:
``` python
opt_ac = ps.simp.subexpression_substitution_in_existing_subexpressions(ac)
opt_ac
```
%% Output
AssignmentCollection: dst_C, <- f(src_C, src_W, src_S, src_N, src_E)
%% Cell type:code id: tags:
``` python
opt_ac.operation_count
```
%% Output
{'adds': 6,
 'muls': 1,
 'divs': 0,
 'sqrts': 0,
 'fast_sqrts': 0,
 'fast_inv_sqrts': 0,
 'fast_div': 0}
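%% Cell type:markdown id: tags:
Several such transformations can be grouped and applied together. A minimal sketch, assuming the `SimplificationStrategy` API of `pystencils.simp` (the chosen rules here are just an illustration, not part of the example above):
%% Cell type:code id: tags:
``` python
# bundle transformations into a reusable strategy and apply it to `ac`
strategy = ps.simp.SimplificationStrategy()
strategy.add(ps.simp.subexpression_substitution_in_existing_subexpressions)
strategy.add(ps.simp.sympy_cse)  # sympy-based common subexpression elimination
strategy(ac).operation_count
```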
%% Cell type:markdown id: tags:
### d) Ghost layers and iteration region
When creating a kernel with neighbor accesses, *pystencils* automatically restricts the iteration region, such that all accesses are safe.
%% Cell type:code id: tags:
``` python
kernel = ps.create_kernel(ps.Assignment(dst[0, 0], src[2, 0] + src[-1, 0]))
ps.show_code(kernel)
```
%% Output
FUNC_PREFIX void kernel(double * RESTRICT fd_dst, double * RESTRICT const fd_src)
{
for (int ctr_0 = 2; ctr_0 < 18; ctr_0 += 1)
{
double * RESTRICT fd_dst_C = 30*ctr_0 + fd_dst;
double * RESTRICT const fd_src_2E = 30*ctr_0 + fd_src + 60;
double * RESTRICT const fd_src_W = 30*ctr_0 + fd_src - 30;
for (int ctr_1 = 2; ctr_1 < 28; ctr_1 += 1)
{
fd_dst_C[ctr_1] = fd_src_2E[ctr_1] + fd_src_W[ctr_1];
}
}
}
%% Cell type:markdown id: tags:
When no additional ghost layer information is given, *pystencils* looks at all neighboring field accesses and introduces the required number of ghost layers **for all directions**. In the example above the largest neighbor access was ``src[2, 0]``, so theoretically we would need 2 ghost layers only at the end of the x coordinate.
By default *pystencils* introduces 2 ghost layers at all borders of the domain. The next cell shows how to change this behavior. Be careful with manual ghost layer specification: wrong values may lead to segfaults.
%% Cell type:code id: tags:
``` python
gl_spec = [(0, 2),  # 0 ghost layers at the left, 2 at the right border
           (1, 0)]  # 1 ghost layer at the lower y border, 0 at the upper y border
kernel = ps.create_kernel(ps.Assignment(dst[0, 0], src[2, 0] + src[-1, 0]), ghost_layers=gl_spec)
ps.show_code(kernel)
```
%% Output
FUNC_PREFIX void kernel(double * RESTRICT fd_dst, double * RESTRICT const fd_src)
{
for (int ctr_0 = 0; ctr_0 < 18; ctr_0 += 1)
{
double * RESTRICT fd_dst_C = 30*ctr_0 + fd_dst;
double * RESTRICT const fd_src_2E = 30*ctr_0 + fd_src + 60;
double * RESTRICT const fd_src_W = 30*ctr_0 + fd_src - 30;
for (int ctr_1 = 1; ctr_1 < 30; ctr_1 += 1)
{
fd_dst_C[ctr_1] = fd_src_2E[ctr_1] + fd_src_W[ctr_1];
}
}
}
%% Cell type:markdown id: tags:
## 2) Restrictions
### a) Independence Restriction
*pystencils* only works for kernels where each array element can be updated independently from all other elements. This restriction ensures that the kernels can be easily parallelized and can also run on the GPU. Trying to define a kernel where the result depends on the iteration order leads to a `ValueError`.
%% Cell type:code id: tags:
``` python
invalid_description = [
    ps.Assignment(dst[1, 0], src[1, 0] + src[-1, 0]),
    ps.Assignment(dst[0, 0], src[1, 0] - src[-1, 0]),
]
try:
    invalid_kernel = ps.create_kernel(invalid_description)
    assert False, "Should never be executed"
except ValueError as e:
    print(e)
```
%% Output
Field dst is written at two different locations
%% Cell type:markdown id: tags:
The independence restriction makes sure that the kernel can be safely parallelized by checking the following conditions: If a field is modified inside the kernel, it may only be modified at a single spatial position. In that case the field may also only be read at this position. Fields that are not modified may be read at multiple neighboring positions.
Specifically, this rule allows for in-place updates that don't access neighbors.
%% Cell type:code id: tags:
``` python
valid_kernel = ps.create_kernel(ps.Assignment(src[0, 0], 2 * src[0, 0] + 42))
```
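%% Cell type:markdown id: tags:
Such an in-place kernel compiles and runs like any other; a quick sketch, reusing the `src_arr` defined at the beginning of this tutorial:
%% Cell type:code id: tags:
``` python
inplace_function = valid_kernel.compile()
inplace_function(src=src_arr)  # updates src_arr in place
```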
%% Cell type:markdown id: tags:
If a field stores multiple values per cell, as in the next example, this restriction only applies to accesses with the same index.
%% Cell type:code id: tags:
``` python
v = ps.fields("v(2): double[2D]")

valid_kernel = ps.create_kernel([ps.Assignment(v[0, 0](1), 2 * v[0, 0](1) + 42),
                                 ps.Assignment(v[0, 1](0), 2 * v[0, 1](0) + 42)])
```
%% Cell type:markdown id: tags:
### b) Static Single Assignment Form
All assignments that don't write to a field must be in SSA form:
1. Each sympy symbol may only occur once as a left-hand side (fields can be written multiple times)
2. A symbol has to be defined before it is used. If it is never defined, it is introduced as a function parameter
The next cell demonstrates the first SSA restriction:
%% Cell type:code id: tags:
``` python
@ps.kernel
def not_allowed():
    a, b = sp.symbols("a b")
    a @= src[0, 0]
    b @= a + 3
    a @= src[-1, 0]
    dst[0, 0] @= a + b

try:
    ps.create_kernel(not_allowed)
    assert False
except ValueError as e:
    print(e)
```
%% Output
Assignments not in SSA form, multiple assignments to a
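%% Cell type:markdown id: tags:
The second rule means that a symbol which is never assigned inside the kernel becomes a parameter of the compiled function. A small sketch (the symbol name `omega` is our choice for illustration):
%% Cell type:code id: tags:
``` python
omega = sp.Symbol("omega")
relax = ps.create_kernel(ps.Assignment(dst[0, 0], omega * src[0, 0])).compile()
relax(src=src_arr, dst=dst_arr, omega=1.8)  # omega is passed as a scalar argument
```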
%% Cell type:markdown id: tags:
It is also not allowed to write to a field twice at the same location:
%% Cell type:code id: tags:
``` python
@ps.kernel
def not_allowed():
    dst[0, 0] @= src[0, 1] + src[1, 0]
    dst[0, 0] @= 2 * dst[0, 0]

try:
    ps.create_kernel(not_allowed)
    assert False
except ValueError as e:
    print(e)
```
%% Output
Field dst is written twice at the same location
%% Cell type:markdown id: tags:
This situation should be resolved by introducing temporary variables:
%% Cell type:code id: tags:
``` python
tmp_var = sp.Symbol("a")

@ps.kernel
def allowed():
    tmp_var @= src[0, 1] + src[1, 0]
    dst[0, 0] @= 2 * tmp_var

ast = ps.create_kernel(allowed)
ps.show_code(ast)
```
%% Output
......
%% Cell type:code id: tags:
``` python
import psutil

from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot, cm
from pystencils.session import *
from pystencils.boundaries import add_neumann_boundary, Neumann, Dirichlet, BoundaryHandling
from pystencils.slicing import slice_from_direction
import math
import time
%matplotlib inline
```
%% Cell type:markdown id: tags:
Test to see if cupy is installed, which is needed to run calculations on the GPU
%% Cell type:code id: tags:
``` python
try:
    import cupy
    gpu = True
except ImportError:
    gpu = False
    cupy = None
    print('No cupy installed')
```
%% Cell type:markdown id: tags:
# Tutorial 03: Datahandling
%% Cell type:markdown id: tags:
This is a tutorial about the `DataHandling` class of pystencils. This class is an abstraction layer to
- link numpy arrays to pystencils fields
- handle CPU-GPU array transfer, such that one can write code that works on CPU and GPU
- make it possible to write MPI parallel simulations that run on distributed-memory clusters using the waLBerla library
We will look at a small and easy example to demonstrate the usage of `DataHandling` objects. We will define an averaging kernel that is applied to every cell of an array and writes the average of the neighbor cell values to the center.
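%% Cell type:markdown id: tags:
Concretely, for every interior cell $(i, j)$ the averaging kernel computes
$$dst_{i,j} = \frac{src_{i+1,j} + src_{i-1,j} + src_{i,j+1} + src_{i,j-1}}{4}$$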
%% Cell type:markdown id: tags:
## 1. Manual
### 1.1. CPU kernels
In this first part, we set up a scenario manually without a `DataHandling`. In the next sections we then repeat the same setup with the help of the data handling.
One concept of *pystencils* that may be confusing at first is the difference between pystencils fields and numpy arrays. Fields are used to describe the computation *symbolically* with sympy, while numpy arrays hold the actual values the computation is executed on.
One option to create and execute a *pystencils* kernel is listed below. For reasons that become clear later we call this the **variable-field-size workflow**:
1. define pystencils fields
2. use sympy and the pystencils fields to define an update rule that describes what should be done on *every cell*
3. compile the update rule to a real function that can be called from Python. For each field that was referenced in the symbolic description the function expects a numpy array, passed as a named parameter
4. create some numpy arrays with actual data
5. call the kernel - usually many times
Now, let's see how this actually looks in Python code:
%% Cell type:code id: tags:
``` python
# 1. field definitions
src_field, dst_field = ps.fields("src, dst:[2D]")

# 2. define update rule
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] +
                                  src_field[0, 1] + src_field[0, -1]) / 4)]

# 3. compile update rule to function
kernel_function = ps.create_kernel(update_rule).compile()

# 4. create numpy arrays with actual data
src_arr, dst_arr = np.random.rand(30, 30), np.zeros([30, 30])

# 5. call kernel
kernel_function(src=src_arr, dst=dst_arr)  # names of arguments have to match names passed to ps.fields()
```
%% Cell type:markdown id: tags:
This workflow separates the symbolic and the numeric stages very cleanly. The separation also makes it possible to stop after step 3, write the C code to a file and call the kernel from a C program. Speaking of the C code - let's have a look at the generated sources:
%% Cell type:code id: tags:
``` python
ps.show_code(kernel_function.ast)
```
%% Output
FUNC_PREFIX void kernel(double * _data_dst, double * const _data_src, int64_t const _size_dst_0, int64_t const _size_dst_1, int64_t const _stride_dst_0, int64_t const _stride_dst_1, int64_t const _stride_src_0, int64_t const _stride_src_1)
{
for (int ctr_0 = 1; ctr_0 < _size_dst_0 - 1; ctr_0 += 1)
{
double * _data_dst_00 = _data_dst + _stride_dst_0*ctr_0;
double * const _data_src_01 = _data_src + _stride_src_0*ctr_0 + _stride_src_0;
double * const _data_src_00 = _data_src + _stride_src_0*ctr_0;
double * const _data_src_0m1 = _data_src + _stride_src_0*ctr_0 - _stride_src_0;
for (int ctr_1 = 1; ctr_1 < _size_dst_1 - 1; ctr_1 += 1)
{
_data_dst_00[_stride_dst_1*ctr_1] = 0.25*_data_src_00[_stride_src_1*ctr_1 + _stride_src_1] + 0.25*_data_src_00[_stride_src_1*ctr_1 - _stride_src_1] + 0.25*_data_src_01[_stride_src_1*ctr_1] + 0.25*_data_src_0m1[_stride_src_1*ctr_1];
}
}
}
%% Cell type:markdown id: tags:
Even if it looks very ugly and low-level :) let's look at this code in a bit more detail. The code is generated in a way that it works for different array sizes. The size of the array is passed in the `_size_dst_` variables that specify the shape of the array for each dimension. Also, the memory layout (linearization) of the array can be different. That means the array could be stored in row-major or column-major order - if we pass in the array strides correctly the kernel does the right thing. If you're not familiar with the concept of strides check out [this stackoverflow post](https://stackoverflow.com/questions/53097952/how-to-understand-numpy-strides-for-layman) or search in the numpy documentation for strides - C vs Fortran order.
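A small numpy-only illustration of strides (in bytes) for C- vs Fortran-ordered 30x30 double arrays:
%% Cell type:code id: tags:
``` python
a_c = np.zeros((30, 30), order='C')  # row-major: rows are contiguous
a_f = np.zeros((30, 30), order='F')  # column-major: columns are contiguous
print(a_c.strides)  # (240, 8): moving one row skips 30 doubles of 8 bytes each
print(a_f.strides)  # (8, 240): moving one column skips 30 doubles
```
%% Cell type:markdown id: tags: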
The goal of *pystencils* is to produce the fastest possible code. One technique to do this is to use all available information already at compile time and generate code that is highly adapted to the specific problem. In our case we already know the shape and strides of the arrays we want to apply the kernel to, so we can make use of this information. This idea leads to the **fixed-field-size workflow**. The main difference there is that we define the arrays first and therefore let *pystencils* know about the array shapes and strides, so that it can generate more specific code:
1. create numpy arrays that hold your data
2. define pystencils fields, this time telling pystencils already which arrays they correspond to, so that it knows about the size and strides (nothing has changed in the other steps)
3. define the update rule
4. compile the update rule to a kernel
5. run the kernel
%% Cell type:code id: tags:
``` python
# 1. create arrays first
src_arr, dst_arr = np.random.rand(30, 30), np.zeros([30, 30])

# 2. define symbolic fields - note the additional parameters that link an array to each field
src_field, dst_field = ps.fields("src, dst:[2D]", src=src_arr, dst=dst_arr)

# 3. define update rule
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] +
                                  src_field[0, 1] + src_field[0, -1]) / 4)]

# 4. compile it
kernel_function = ps.create_kernel(update_rule).compile()

# 5. call kernel
kernel_function(src=src_arr, dst=dst_arr)  # names of arguments have to match names passed to ps.fields()
```
%% Cell type:markdown id: tags:
Functionally, both variants are equivalent. We see the difference only when we look at the generated code.
%% Cell type:code id: tags:
``` python
ps.show_code(kernel_function.ast)
```
%% Output
FUNC_PREFIX void kernel(double * _data_dst, double * const _data_src)
{
for (int ctr_0 = 1; ctr_0 < 29; ctr_0 += 1)
{
double * _data_dst_00 = _data_dst + 30*ctr_0;
double * const _data_src_01 = _data_src + 30*ctr_0 + 30;
double * const _data_src_00 = _data_src + 30*ctr_0;
double * const _data_src_0m1 = _data_src + 30*ctr_0 - 30;
for (int ctr_1 = 1; ctr_1 < 29; ctr_1 += 1)
{
_data_dst_00[ctr_1] = 0.25*_data_src_00[ctr_1 + 1] + 0.25*_data_src_00[ctr_1 - 1] + 0.25*_data_src_01[ctr_1] + 0.25*_data_src_0m1[ctr_1];
}
}
}
%% Cell type:markdown id: tags:
Compare this to the code above! It looks much simpler. The reason is that all index computations are already simplified since the exact field sizes and strides are known. This kernel now only works on arrays of the previously specified size.
Let's try what happens if we use a different array:
%% Cell type:code id: tags:
``` python
src_arr2, dst_arr2 = np.random.rand(40, 40), np.zeros([40, 40])

try:
    kernel_function(src=src_arr2, dst=dst_arr2)
except ValueError as e:
    print(e)
```
%% Output
Wrong shape of array dst. Expected (30, 30)
%% Cell type:markdown id: tags:
### 1.2. GPU simulations
Let's now jump to a seemingly unrelated topic: running kernels on the GPU.
When creating the kernel, an additional parameter `target=ps.Target.GPU` has to be passed. Also, the compiled kernel cannot be called with numpy arrays directly, but has to be called with `cupy` arrays instead. That means we have to transfer our numpy array to the GPU first. From this step we obtain a GPU array, then we can run the kernel, hopefully multiple times so that the data transfer was worth the time. Finally we transfer the finished result back to the CPU:
%% Cell type:code id: tags:
``` python
if cupy:
    config = ps.CreateKernelConfig(target=ps.Target.GPU)
    kernel_function_gpu = ps.create_kernel(update_rule, config=config).compile()
    # transfer to GPU
    src_arr_gpu = cupy.asarray(src_arr)
    dst_arr_gpu = cupy.asarray(dst_arr)

    # run kernel on GPU, this is done many times in real setups
    kernel_function_gpu(src=src_arr_gpu, dst=dst_arr_gpu)

    # transfer result back to CPU
    dst_arr = dst_arr_gpu.get()
```
%% Cell type:markdown id: tags:
### 1.3. Summary: manual way
- Don't confuse *pystencils* fields and *numpy* arrays
    - fields are symbolic
    - arrays are numeric
- Use the fixed-field-size workflow whenever possible, since the code might be faster. Create arrays first, then create fields from arrays
- if we run GPU kernels, arrays have to be transferred to the GPU first
As demonstrated in the examples above we have to define 2 or 3 corresponding objects for each grid:
- symbolic pystencils field
- numpy array on CPU
- for GPU use, also a cupy array to mirror the data on the GPU
Managing these three objects manually is tedious and error-prone. We'll see in the next section how the data handling object takes care of this problem.
%% Cell type:markdown id: tags:
## 2. Introducing the data handling - serial version
### 2.1. Example for CPU simulations
The data handling internally keeps a mapping between symbolic fields and numpy arrays. When we create a field, a corresponding array is automatically allocated as well. Optionally, memory can also be allocated on the GPU for the array. Let's dive right in and see what our example looks like when implemented with a data handling.
%% Cell type:code id: tags:
``` python
dh = ps.create_data_handling(domain_size=(30, 30))

# fields are now created using the data handling
src_field = dh.add_array('src', values_per_cell=1)
dst_field = dh.add_array('dst', values_per_cell=1)

# kernel is created just like before
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] + src_field[0, 1] + src_field[0, -1]) / 4)]
kernel_function = ps.create_kernel(update_rule).compile()

# have a look at the generated code - it uses
# the fast version where array sizes are compiled-in
# ps.show_code(kernel_function.ast)
```
%% Cell type:markdown id: tags:
The data handling has methods to create fields - but where are the corresponding arrays?
In the serial case you can access them as a member of the data handling; for example, to initialize our 'src' array we can write
%% Cell type:code id: tags:
``` python
src_arr = dh.cpu_arrays['src']
src_arr.fill(0.0)
```
%% Cell type:markdown id: tags:
This method is nice and easy, but you should not use it if you want your simulation to run on distributed-memory clusters. We'll see why in the last section. So it is a good habit not to access the arrays directly but to use the data handling to do so. We can, for example, also initialize the array with the following code:
%% Cell type:code id: tags:
``` python
dh.fill('src', 0.0)
```
%% Cell type:markdown id: tags:
To run the kernels with the same code as before, we would also need the arrays. We could do that by accessing the `cpu_arrays`:
%% Cell type:code id: tags:
``` python
kernel_function(src=dh.cpu_arrays['src'],
                dst=dh.cpu_arrays['dst'])
```
%% Cell type:markdown id: tags:
but to be prepared for MPI parallel simulations, again a method of the data handling should be used for this.
Besides, this method is also simpler to use - it automatically detects which arrays a kernel uses and passes them in.
%% Cell type:code id: tags:
``` python
dh.run_kernel(kernel_function)
```
%% Cell type:markdown id: tags:
To access the data read-only, the gather function should be used instead of `cpu_arrays`.
This function gives you a read-only copy of the domain or part of the domain.
We will discuss this function in more detail when we look at MPI parallel simulations.
For serial simulations keep in mind that modifying the resulting array does not change your original data!
%% Cell type:code id: tags:
``` python
read_only_copy = dh.gather_array('src', ps.make_slice[:, :], ghost_layers=False)
```
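%% Cell type:markdown id: tags:
A quick check of that copy semantics, assuming (as stated above) that the serial gather returns a copy; `src` was filled with 0.0 earlier:
%% Cell type:code id: tags:
``` python
read_only_copy[0, 0] = 42.0               # write only into the gathered copy
assert dh.cpu_arrays['src'][1, 1] == 0.0  # (1, 1) is the first interior cell; unchanged
```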
%% Cell type:markdown id: tags:
### 2.2. Example for GPU simulations
In this section we have a look at GPU simulations using the data handling. Only minimal changes are required.
When creating the data handling we can pass a `default_target`. This means that for every added field an array on the CPU and on the GPU is allocated. This is a useful default; for more fine-grained control the `add_array` method also takes additional parameters controlling where the array should be allocated.
Additionally we also need to compile a GPU version of the kernel.
%% Cell type:code id: tags:
``` python
if gpu is False:
    dh = ps.create_data_handling(domain_size=(30, 30), default_target=ps.Target.CPU)
else:
    dh = ps.create_data_handling(domain_size=(30, 30), default_target=ps.Target.GPU)

# fields are now created using the data handling
src_field = dh.add_array('src', values_per_cell=1)
dst_field = dh.add_array('dst', values_per_cell=1)

# kernel is created just like before
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] + src_field[0, 1] + src_field[0, -1]) / 4)]
config = ps.CreateKernelConfig(target=dh.default_target)
kernel_function = ps.create_kernel(update_rule, config=config).compile()

dh.fill('src', 0.0)
```
%% Cell type:markdown id: tags:
The data handling provides functions to transfer data between CPU and GPU:
%% Cell type:code id: tags:
``` python
if gpu:
    dh.to_gpu('src')

dh.run_kernel(kernel_function)

if gpu:
    dh.to_cpu('dst')
```
%% Cell type:markdown id: tags:
Usually one wants to transfer all fields that have been allocated on CPU and GPU at the same time:
%% Cell type:code id: tags:
``` python
dh.all_to_gpu()
dh.run_kernel(kernel_function)
dh.all_to_cpu()
```
%% Cell type:markdown id: tags:
We can always include the `all_to_*` functions in our code, since they do nothing if there are no arrays allocated on the GPU. Thus there is only a single point in the code where we switch between the CPU and GPU version: the `default_target` of the data handling.
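%% Cell type:markdown id: tags:
A minimal sketch of that no-op behavior, assuming a CPU-only data handling where no GPU arrays exist:
%% Cell type:code id: tags:
``` python
dh_cpu = ps.create_data_handling(domain_size=(10, 10))  # CPU default target
dh_cpu.add_array('f')
dh_cpu.all_to_gpu()  # no GPU arrays allocated -> does nothing
dh_cpu.all_to_cpu()  # likewise a no-op
```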
%% Cell type:markdown id: tags:
### 2.3. Ghost layers and periodicity
The data handling can also provide periodic boundary conditions. For this, the domain is extended by one layer of cells, the so-called ghost layer or halo layer.
%% Cell type:code id: tags:
``` python
print("Shape of domain         ", dh.shape)
print("Direct access to arrays ", dh.cpu_arrays['src'].shape)
print("Gather                  ", dh.gather_array('src', ghost_layers=True).shape)
```
%% Output
Shape of domain          (30, 30)
Direct access to arrays  (32, 32)
Gather                   (32, 32)
%% Cell type:markdown id: tags:
So the actual arrays are 2 cells larger than what you asked for. This additional layer is used to copy over the data from the other end of the array, such that for the stencil algorithm the domain is effectively periodic. This copying operation has to be started manually though:
%% Cell type:code id: tags:
``` python
dh = ps.create_data_handling((4, 4), periodicity=(True, False))
dh.add_array("src")

# get copy function
copy_fct = dh.synchronization_function(['src'])

dh.fill('src', 0.0, ghost_layers=True)
dh.fill('src', 3.0, ghost_layers=False)
before_sync = dh.gather_array('src', ghost_layers=True).copy()

copy_fct()  # copy over to get periodicity in x direction

after_sync = dh.gather_array('src', ghost_layers=True).copy()
plt.subplot(1, 2, 1)
plt.scalar_field(before_sync);
plt.title("Before")

plt.subplot(1, 2, 2)
plt.scalar_field(after_sync);
plt.title("After");
```
%% Output
%% Cell type:markdown id: tags:
## 3. Going (MPI) parallel - the parallel data handling
### 3.1. Conceptual overview
To run MPI parallel simulations the waLBerla framework is used. waLBerla has to be compiled against your local MPI library and thus is a bit hard to install. We suggest using the version shipped with conda for testing. For production, when you want to run on a cluster, the best option is to build waLBerla yourself against the MPI library that is installed on your cluster.
Now let's have a look at how to write code that runs MPI parallel.
Since the data is distributed, we don't have access to the full array any more but only to the part that is stored locally. The domain is split into so-called blocks, where one process might get one (or sometimes multiple) blocks. To do anything with the local part of the data we iterate over the **local** blocks to get the contents as numpy arrays. The blocks returned in the loop differ from process to process.
Copy the following snippet to a python file and run it with multiple processes, e.g. ``mpirun -np 4 myscript.py``; you will see that there are as many blocks as processes.
%% Cell type:code id: tags:
``` python
dh = ps.create_data_handling(domain_size=(30, 30), parallel=True)
field = dh.add_array('field')
for block in dh.iterate():
    # offset is in global coordinates, where the first inner cell has coordinate (0,0)
    # and ghost layers have negative coordinates
    print(block.offset, block['field'].shape)

    # use offset to create a local array 'my_data' for the part of the domain
    #np.copyto(block[field.name], my_data)
```
%% Output
(-1, -1) (32, 32)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
To get some more interesting results here in the notebook, we put multiple blocks onto our single notebook process. This does not make much sense for real simulations, but it is useful for testing and demonstration purposes. To get some more interesting results here in the notebook, we put multiple blocks onto our single notebook process. This does not make much sense for real simulations, but it is useful for testing and demonstration purposes.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
from waLBerla import createUniformBlockGrid from waLBerla import createUniformBlockGrid
from pystencils.datahandling import ParallelDataHandling from pystencils.datahandling import ParallelDataHandling
blocks = createUniformBlockGrid(blocks=(2,1,2), cellsPerBlock=(20, 10, 20), blocks = createUniformBlockGrid(blocks=(2,1,2), cellsPerBlock=(20, 10, 20),
oneBlockPerProcess=False, periodic=(1, 0, 0)) oneBlockPerProcess=False, periodic=(1, 0, 0))
dh = ParallelDataHandling(blocks) dh = ParallelDataHandling(blocks)
field = dh.add_array('field') field = dh.add_array('field')
for block in dh.iterate(): for block in dh.iterate():
print(block.offset, block['field'].shape) print(block.offset, block['field'].shape)
``` ```
%% Output %% Output
(-1, -1, -1) (22, 12, 22) (-1, -1, -1) (22, 12, 22)
(19, -1, -1) (22, 12, 22) (19, -1, -1) (22, 12, 22)
(-1, -1, 19) (22, 12, 22) (-1, -1, 19) (22, 12, 22)
(19, -1, 19) (22, 12, 22) (19, -1, 19) (22, 12, 22)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Now we see that we have four blocks with (20, 10, 20) cells each, and the global domain has size (40, 10, 40). Now we see that we have four blocks with (20, 10, 20) cells each, and the global domain has size (40, 10, 40).
All subblocks also have a ghost layer around them, which is used to synchronize with their neighboring blocks (over the network). For ghost layer synchronization the same `synchronization_function` is used as above for periodic boundaries, because copying between blocks and copying the ghost layer for periodicity use the same mechanism. All subblocks also have a ghost layer around them, which is used to synchronize with their neighboring blocks (over the network). For ghost layer synchronization the same `synchronization_function` is used as above for periodic boundaries, because copying between blocks and copying the ghost layer for periodicity use the same mechanism.
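%% Cell type:markdown id: tags:
As a minimal sketch (using the `dh` and `field` from above), the synchronization function is created once and then called whenever the ghost layers need to be refreshed:
%% Cell type:code id: tags:
``` python
# exchanges ghost layer data (including periodic copies) for 'field' between blocks
sync_field = dh.synchronization_function(['field'])
sync_field()
```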
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
dh.gather_array('field').shape dh.gather_array('field').shape
``` ```
%% Output %% Output
$$\left ( 40, \quad 10, \quad 40\right )$$ $\displaystyle \left( 40, \ 10, \ 40\right)$
(40, 10, 40) (40, 10, 40)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 3.2. Parallel example ### 3.2. Parallel example
To illustrate this in more detail, let's run a simple kernel on a parallel domain. waLBerla can only handle 3D domains, so we choose a z-size of 1. To illustrate this in more detail, let's run a simple kernel on a parallel domain. waLBerla can only handle 3D domains, so we choose a z-size of 1.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
blocks = createUniformBlockGrid(blocks=(2,4,1), cellsPerBlock=(20, 10, 1), blocks = createUniformBlockGrid(blocks=(2,4,1), cellsPerBlock=(20, 10, 1),
oneBlockPerProcess=False, periodic=(1, 1, 0)) oneBlockPerProcess=False, periodic=(1, 1, 0))
dh = ParallelDataHandling(blocks, dim=2) dh = ParallelDataHandling(blocks, dim=2)
src_field = dh.add_array('src') src_field = dh.add_array('src')
dst_field = dh.add_array('dst') dst_field = dh.add_array('dst')
update_rule = [ps.Assignment(lhs=dst_field[0, 0], update_rule = [ps.Assignment(lhs=dst_field[0, 0],
rhs=(src_field[1, 0] + src_field[-1, 0] + rhs=(src_field[1, 0] + src_field[-1, 0] +
src_field[0, 1] + src_field[0, -1]) / 4)] src_field[0, 1] + src_field[0, -1]) / 4)]
# 3. compile update rule to function # 3. compile update rule to function
kernel_function = ps.create_kernel(update_rule).compile() kernel_function = ps.create_kernel(update_rule).compile()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Now let's initialize the arrays. To do this, we can get arrays (meshgrids) from the block with the coordinates of the local cells. We use this to initialize a circular shape. Now let's initialize the arrays. To do this, we can get arrays (meshgrids) from the block with the coordinates of the local cells. We use this to initialize a circular shape.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
dh.fill('src', 0.0) dh.fill('src', 0.0)
for block in dh.iterate(ghost_layers=False, inner_ghost_layers=False): for block in dh.iterate(ghost_layers=False, inner_ghost_layers=False):
x, y = block.midpoint_arrays x, y = block.midpoint_arrays
inside_sphere = (x - 20)**2 + (y - 25)**2 < 8**2 inside_sphere = (x - 20)**2 + (y - 25)**2 < 8**2
block['src'][inside_sphere] = 1.0 block['src'][inside_sphere] = 1.0
plt.scalar_field( dh.gather_array('src') ); plt.scalar_field( dh.gather_array('src') );
``` ```
%% Output %% Output
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Now we can run our compute kernel on this data as usual. We just have to make sure that the field is synchronized after every step, i.e. that the ghost layers are correctly updated. Now we can run our compute kernel on this data as usual. We just have to make sure that the field is synchronized after every step, i.e. that the ghost layers are correctly updated.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
sync = dh.synchronization_function(['src']) sync = dh.synchronization_function(['src'])
for i in range(40): for i in range(40):
sync() sync()
dh.run_kernel(kernel_function) dh.run_kernel(kernel_function)
dh.swap('src', 'dst') dh.swap('src', 'dst')
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
plt.scalar_field( dh.gather_array('src') ); plt.scalar_field( dh.gather_array('src') );
``` ```
%% Output %% Output
......
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
from pystencils.session import * from pystencils.session import *
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
# Demo: Assignment collections and simplification # Demo: Assignment collections and simplification
## Assignment collections ## Assignment collections
The assignment collection class helps to formulate and simplify assignments for numerical kernels. The assignment collection class helps to formulate and simplify assignments for numerical kernels.
An ``AssignmentCollection`` is an ordered collection of assignments, together with an optional ordered collection of subexpressions, that are required to evaluate the main assignments. There are various simplification rules available that operate on ``AssignmentCollection``s. An ``AssignmentCollection`` is an ordered collection of assignments, together with an optional ordered collection of subexpressions, that are required to evaluate the main assignments. There are various simplification rules available that operate on ``AssignmentCollection``s.
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
We start by defining a stencil update rule. Here we also use the *pystencils* ``Field``; note, however, that the assignment collection module works purely on the *sympy* level. We start by defining a stencil update rule. Here we also use the *pystencils* ``Field``; note, however, that the assignment collection module works purely on the *sympy* level.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
a,b,c = sp.symbols("a b c") a,b,c = sp.symbols("a b c")
f = ps.fields("f(2) : [2D]") f = ps.fields("f(2) : [2D]")
g = ps.fields("g(2) : [2D]")
a1 = ps.Assignment(f[0,0](1), (a**2 +b) * f[0,1] + \ a1 = ps.Assignment(g[0,0](1), (a**2 +b) * f[0,1] + \
(a**2 - c) * f[1,0] + \ (a**2 - c) * f[1,0] + \
(a**2 - 2*c) * f[-1,0] + \ (a**2 - 2*c) * f[-1,0] + \
(a**2) * f[0, -1]) (a**2) * f[0, -1])
a2 = ps.Assignment(f[0,0](0), (c**2 +b) * f[0,1] + \ a2 = ps.Assignment(g[0,0](0), (c**2 +b) * f[0,1] + \
(c**2 - c) * f[1,0] + \ (c**2 - c) * f[1,0] + \
(c**2 - 2*c) * f[-1,0] + \ (c**2 - 2*c) * f[-1,0] + \
(c**2 - a**2) * f[0, -1]) (c**2 - a**2) * f[0, -1])
ac = ps.AssignmentCollection([a1, a2], subexpressions=[]) ac = ps.AssignmentCollection([a1, a2], subexpressions=[])
ac ac
``` ```
%% Output %% Output
Equation Collection for f_C^1,f_C^0 AssignmentCollection: g_C^0, g_C^1 <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
*sympy* operations can be applied to an assignment collection: in this example we first expand the collection, then look for common subexpressions. *sympy* operations can be applied to an assignment collection: in this example we first expand the collection, then look for common subexpressions.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
expand_all = ps.simp.apply_to_all_assignments(sp.expand) expand_all = ps.simp.apply_to_all_assignments(sp.expand)
expandedEc = expand_all(ac) expandedEc = expand_all(ac)
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_cse = ps.simp.sympy_cse(expandedEc) ac_cse = ps.simp.sympy_cse(expandedEc)
ac_cse ac_cse
``` ```
%% Output %% Output
Equation Collection for f_C^1,f_C^0 AssignmentCollection: g_C^0, g_C^1 <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Symbols occurring in assignment collections are classified into three categories: Symbols occurring in assignment collections are classified into three categories:
- ``free_symbols``: symbols that occur in right-hand-sides but never on left-hand-sides - ``free_symbols``: symbols that occur in right-hand-sides but never on left-hand-sides
- ``bound_symbols``: symbols that occur on left-hand-sides - ``bound_symbols``: symbols that occur on left-hand-sides
- ``defined_symbols``: symbols that occur on left-hand-sides of a main assignment - ``defined_symbols``: symbols that occur on left-hand-sides of a main assignment
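%% Cell type:markdown id: tags:
These categories are related: every defined symbol is also bound, and no symbol is both free and bound. A quick sanity check on the collection from above (a sketch):
%% Cell type:code id: tags:
``` python
assert ac_cse.defined_symbols <= ac_cse.bound_symbols
assert not ac_cse.free_symbols & ac_cse.bound_symbols
```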
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_cse.free_symbols ac_cse.free_symbols
``` ```
%% Output %% Output
$$\left\{{{f}_{E}^{0}}, {{f}_{N}^{0}}, {{f}_{S}^{0}}, {{f}_{W}^{0}}, a, b, c\right\}$$ $\displaystyle \left\{{f}_{(1,0)}^{0}, {f}_{(0,1)}^{0}, {f}_{(0,-1)}^{0}, {f}_{(-1,0)}^{0}, a, b, c\right\}$
set([f_E__0, f_N__0, f_S__0, f_W__0, a, b, c]) {f_E__0, f_N__0, f_S__0, f_W__0, a, b, c}
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_cse.bound_symbols ac_cse.bound_symbols
``` ```
%% Output %% Output
$$\left\{{{f}_{C}^{0}}, {{f}_{C}^{1}}, \xi_{0}, \xi_{1}, \xi_{2}, \xi_{3}\right\}$$ $\displaystyle \left\{{g}_{(0,0)}^{0}, {g}_{(0,0)}^{1}, \xi_{0}, \xi_{1}, \xi_{2}, \xi_{3}\right\}$
set([f_C__0, f_C__1, ξ₀, ξ₁, ξ₂, ξ₃]) {g_C__0, g_C__1, ξ₀, ξ₁, ξ₂, ξ₃}
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_cse.defined_symbols ac_cse.defined_symbols
``` ```
%% Output %% Output
$$\left\{{{f}_{C}^{0}}, {{f}_{C}^{1}}\right\}$$ $\displaystyle \left\{{g}_{(0,0)}^{0}, {g}_{(0,0)}^{1}\right\}$
set([f_C__0, f_C__1]) {g_C__0, g_C__1}
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Assignment collections can be split up and merged together. For splitting, a list of symbols that occur on the left-hand side of the main assignments has to be passed. The returned assignment collection only contains these main assignments together with all necessary subexpressions. Assignment collections can be split up and merged together. For splitting, a list of symbols that occur on the left-hand side of the main assignments has to be passed. The returned assignment collection only contains these main assignments together with all necessary subexpressions.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_f0 = ac_cse.new_filtered([f(0)]) ac_f0 = ac_cse.new_filtered([g(0)])
ac_f1 = ac_cse.new_filtered([f(1)]) ac_f1 = ac_cse.new_filtered([g(1)])
ac_f1 ac_f1
``` ```
%% Output %% Output
Equation Collection for f_C^1 AssignmentCollection: g_C^1, <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Note here that subexpressions which are not used in the main assignment of $f_C^1$ are no longer part of the collection. Note here that subexpressions which are not used in the main assignment of $g_C^1$ are no longer part of the collection.
If we merge both collections together, we end up with the original collection. If we merge both collections together, we end up with the original collection.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
ac_f0.new_merged(ac_f1) ac_f0.new_merged(ac_f1)
``` ```
%% Output %% Output
Equation Collection for f_C^0,f_C^1 AssignmentCollection: g_C^0, g_C^1 <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
There is also a method that inserts all subexpressions into the main assignments. This is the inverse operation of common subexpression elimination. There is also a method that inserts all subexpressions into the main assignments. This is the inverse operation of common subexpression elimination.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
assert sp.simplify(ac_f0.new_without_subexpressions().main_assignments[0].rhs - a2.rhs) == 0 assert sp.simplify(ac_f0.new_without_subexpressions().main_assignments[0].rhs - a2.rhs) == 0
ac_f0.new_without_subexpressions() ac_f0.new_without_subexpressions()
``` ```
%% Output %% Output
Equation Collection for f_C^0 AssignmentCollection: g_C^0, <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
To evaluate an assignment collection, use the ``lambdify`` method. It is very similar to *sympy*'s ``lambdify`` function. To evaluate an assignment collection, use the ``lambdify`` method. It is very similar to *sympy*'s ``lambdify`` function.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
evalFct = ac_cse.lambdify([f[0,1], f[1,0]], # new parameters of returned function evalFct = ac_cse.lambdify([f[0,1], f[1,0]], # new parameters of returned function
fixed_symbols={a:1, b:2, c:3, f[0,-1]: 4, f[-1,0]: 5}) # fix values of other symbols fixed_symbols={a:1, b:2, c:3, f[0,-1]: 4, f[-1,0]: 5}) # fix values of other symbols
evalFct(2,1) evalFct(2,1)
``` ```
%% Output %% Output
$$\left \{ {{f}_{C}^{0}} : 75, \quad {{f}_{C}^{1}} : -17\right \}$$ $\displaystyle \left\{ {g}_{(0,0)}^{0} : 75, \ {g}_{(0,0)}^{1} : -17\right\}$
{f_C__0: 75, f_C__1: -17} {g_C__0: 75, g_C__1: -17}
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
``lambdify`` is rather slow for evaluation. The intended way to evaluate an assignment collection is with *pystencils*, i.e. to create a fast kernel that applies the update at every site of a structured grid. The collection can be passed directly to the `create_kernel` function. ``lambdify`` is rather slow for evaluation. The intended way to evaluate an assignment collection is with *pystencils*, i.e. to create a fast kernel that applies the update at every site of a structured grid. The collection can be passed directly to the `create_kernel` function.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
func = ps.create_kernel(ac_cse).compile() func = ps.create_kernel(ac_cse).compile()
``` ```
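%% Cell type:markdown id: tags:
As a minimal usage sketch (array names and sizes are our own choice): the compiled kernel takes one numpy array per field and one value per remaining free symbol, all passed by name.
%% Cell type:code id: tags:
``` python
f_arr = np.random.rand(10, 10, 2)  # field f has two index components
g_arr = np.zeros_like(f_arr)       # field g receives the result
func(f=f_arr, g=g_arr, a=1.0, b=2.0, c=3.0)
```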
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Simplification Strategies ## Simplification Strategies
In the examples above, we already applied simplification rules to assignment collections. Simplification rules are functions that take an assignment collection as their single argument and return a modified/simplified copy of it. The ``SimplificationStrategy`` class holds a list of simplification rules and can apply all of them in the specified order. Additionally, it provides useful printing and reporting functions. In the examples above, we already applied simplification rules to assignment collections. Simplification rules are functions that take an assignment collection as their single argument and return a modified/simplified copy of it. The ``SimplificationStrategy`` class holds a list of simplification rules and can apply all of them in the specified order. Additionally, it provides useful printing and reporting functions.
We start by creating a simplification strategy, consisting of the expand and CSE simplifications we have already applied above: We start by creating a simplification strategy, consisting of the expand and CSE simplifications we have already applied above:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
strategy = ps.simp.SimplificationStrategy() strategy = ps.simp.SimplificationStrategy()
strategy.add(ps.simp.apply_to_all_assignments(sp.expand)) strategy.add(ps.simp.apply_to_all_assignments(sp.expand))
strategy.add(ps.simp.sympy_cse) strategy.add(ps.simp.sympy_cse)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
This strategy can be applied to any assignment collection: This strategy can be applied to any assignment collection:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
strategy(ac) strategy(ac)
``` ```
%% Output %% Output
Equation Collection for f_C^1,f_C^0 AssignmentCollection: g_C^0, g_C^1 <- f(f_N^0, b, f_S^0, f_E^0, a, f_W^0, c)
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
The strategy can also print the simplification results at each stage. The strategy can also print the simplification results at each stage.
The report contains information about the number of operations after each simplification as well as the runtime of each simplification routine. The report contains information about the number of operations after each simplification as well as the runtime of each simplification routine.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
strategy.create_simplification_report(ac) strategy.create_simplification_report(ac)
``` ```
%% Output %% Output
<pystencils.simp.simplificationstrategy.SimplificationStrategy.create_simplification_report.<locals>.Report at 0x7f9be404fda0> <pystencils.simp.simplificationstrategy.SimplificationStrategy.create_simplification_report.<locals>.Report at 0x147de3e90>
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
The strategy can also print the full collection after each simplification... The strategy can also print the full collection after each simplification...
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
strategy.show_intermediate_results(ac) strategy.show_intermediate_results(ac)
``` ```
%% Output %% Output
<pystencils.simp.simplificationstrategy.SimplificationStrategy.show_intermediate_results.<locals>.IntermediateResults at 0x7f9bad688dd8> <pystencils.simp.simplificationstrategy.SimplificationStrategy.show_intermediate_results.<locals>.IntermediateResults at 0x147e09c90>
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
... or only specific assignments for better readability ... or only specific assignments for better readability
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
strategy.show_intermediate_results(ac, symbols=[f(1)]) strategy.show_intermediate_results(ac, symbols=[g(1)])
``` ```
%% Output %% Output
<pystencils.simp.simplificationstrategy.SimplificationStrategy.show_intermediate_results.<locals>.IntermediateResults at 0x7f9bad688b00> <pystencils.simp.simplificationstrategy.SimplificationStrategy.show_intermediate_results.<locals>.IntermediateResults at 0x1265a1b90>
......
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
from pystencils.session import * from pystencils.session import *
import timeit import timeit
%load_ext Cython %load_ext Cython
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
# Demo: Benchmark numpy, cython, pystencils # Demo: Benchmark numpy, Cython, pystencils
In this benchmark we compare different ways of implementing a simple stencil kernel in Python. In this notebook we compare and benchmark different ways of implementing a simple stencil kernel in Python.
The benchmark kernel computes the average of the four neighbors in 2D and stores in a second array. To prevent out-of-bounds accesses, we skip the cells at the border and compute values only in the range `[1:-1, 1:-1]` Our simple example computes the average of the four neighbors in 2D and stores it in a second array. To prevent out-of-bounds accesses, we skip the cells at the border and compute values only in the range `[1:-1, 1:-1]`
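In formula form, every implementation computes, for each inner cell $(x, y)$:
$$\mathrm{dst}[x, y] = \frac{\mathrm{src}[x+1, y] + \mathrm{src}[x-1, y] + \mathrm{src}[x, y+1] + \mathrm{src}[x, y-1]}{4}$$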
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Implementations ## Implementations
The first implementation is a pure Python implementation: The first implementation is a pure Python implementation:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
def avg_pure_python(input_arr, output_arr): def avg_pure_python(src, dst):
for x in range(1, input_arr.shape[0] - 1): for x in range(1, src.shape[0] - 1):
for y in range(1, input_arr.shape[1] - 1): for y in range(1, src.shape[1] - 1):
output_arr[x, y] = (input_arr[x + 1, y] + input_arr[x - 1, y] + dst[x, y] = (src[x + 1, y] + src[x - 1, y] +
input_arr[x, y + 1] + input_arr[x, y - 1]) / 4 src[x, y + 1] + src[x, y - 1]) / 4
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Obviously, this will be a rather slow version, since the loops are written directly in Python. Obviously, this will be a rather slow version, since the loops are written directly in Python.
Next, we use *numpy* functions to delegate the looping to numpy. The first version uses the `roll` function to shift the array by one element in each direction. This version has to allocate a new array for each accessed neighbor. Next, we use *numpy* functions to delegate the looping to numpy. The first version uses the `roll` function to shift the array by one element in each direction. This version has to allocate a new array for each accessed neighbor.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
def avg_numpy_roll(input_arr, output_arr): def avg_numpy_roll(src, dst):
neighbors = [np.roll(input_arr, axis=a, shift=s) for a in (0, 1) for s in (-1, 1)] neighbors = [np.roll(src, axis=a, shift=s) for a in (0, 1) for s in (-1, 1)]
np.divide(sum(neighbors), 4, out=output_arr) np.divide(sum(neighbors), 4, out=dst)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Using views, we can get rid of the additional copies: Using views, we can get rid of the additional copies:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
def avg_numpy_slice(input_arr, output_arr): def avg_numpy_slice(src, dst):
output_arr[1:-1, 1:-1] = input_arr[2:, 1:-1] + input_arr[:-2, 1:-1] + \ dst[1:-1, 1:-1] = src[2:, 1:-1] + src[:-2, 1:-1] + \
input_arr[1:-1, 2:] + input_arr[1:-1, :-2] src[1:-1, 2:] + src[1:-1, :-2]
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
To further optimize the kernel, we switch to Cython to get a compiled C version. To further optimize the kernel, we switch to Cython to get a compiled C version.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
%%cython %%cython
import cython import cython
@cython.boundscheck(False) @cython.boundscheck(False)
@cython.wraparound(False) @cython.wraparound(False)
def avg_cython(object[double, ndim=2] input_arr, object[double, ndim=2] output_arr): def avg_cython(object[double, ndim=2] src, object[double, ndim=2] dst):
cdef int xs, ys, x, y cdef int xs, ys, x, y
xs, ys = input_arr.shape xs, ys = src.shape
for x in range(1, xs - 1): for x in range(1, xs - 1):
for y in range(1, ys - 1): for y in range(1, ys - 1):
output_arr[x, y] = (input_arr[x + 1, y] + input_arr[x - 1, y] + dst[x, y] = (src[x + 1, y] + src[x - 1, y] +
input_arr[x, y + 1] + input_arr[x, y - 1]) / 4 src[x, y + 1] + src[x, y - 1]) / 4
```
%% Cell type:markdown id: tags:
If available, we also try the numba just-in-time compiler:
%% Cell type:code id: tags:
``` python
try:
from numba import jit
@jit(nopython=True)
def avg_numba(src, dst):
dst[1:-1, 1:-1] = src[2:, 1:-1] + src[:-2, 1:-1] + \
src[1:-1, 2:] + src[1:-1, :-2]
except ImportError:
pass
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
And finally we also create a *pystencils* version of the same stencil code: And finally we also create a *pystencils* version of the same stencil code:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
src, dst = ps.fields("src, dst: [2D]") src, dst = ps.fields("src, dst: [2D]")
update = ps.Assignment(dst[0,0], update = ps.Assignment(dst[0,0],
(src[1, 0] + src[-1, 0] + src[0, 1] + src[0, -1]) / 4) (src[1, 0] + src[-1, 0] + src[0, 1] + src[0, -1]) / 4)
kernel = ps.create_kernel(update).compile() avg_pystencils = ps.create_kernel(update).compile()
def avg_pystencils(input_arr, output_arr):
kernel(src=input_arr, dst=output_arr)
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
all_implementations = { all_implementations = {
'pure Python': avg_pure_python, 'pure Python': avg_pure_python,
'numpy roll': avg_numpy_roll, 'numpy roll': avg_numpy_roll,
'numpy slice': avg_numpy_slice, 'numpy slice': avg_numpy_slice,
'Cython': None,
'pystencils': avg_pystencils, 'pystencils': avg_pystencils,
} }
if 'avg_cython' in globals(): if 'avg_cython' in globals():
all_implementations['Cython'] = avg_cython all_implementations['Cython'] = avg_cython
else: if 'avg_numba' in globals():
del all_implementations['Cython'] all_implementations['numba'] = avg_numba
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Benchmark functions ## Benchmark functions
We implement short helper functions to get input and output arrays of a given shape and to measure the runtime. We implement short helper functions to get input and output arrays of a given shape and to measure the runtime.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
def get_arrays(shape): def get_arrays(shape):
in_arr = np.random.rand(*shape) in_arr = np.random.rand(*shape)
out_arr = np.empty_like(in_arr) out_arr = np.empty_like(in_arr)
return in_arr, out_arr return in_arr, out_arr
def do_benchmark(func, shape): def do_benchmark(func, shape):
in_arr, out_arr = get_arrays(shape) in_arr, out_arr = get_arrays(shape)
timer = timeit.Timer('f(a, b)', globals={'f': func, 'a': in_arr, 'b': out_arr}) func(src=in_arr, dst=out_arr) # warmup
timer = timeit.Timer('f(src=src, dst=dst)', globals={'f': func, 'src': in_arr, 'dst': out_arr})
calls, time_taken = timer.autorange() calls, time_taken = timer.autorange()
return time_taken / calls return time_taken / calls
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Comparison ## Comparison
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
plot_order = ['pystencils', 'Cython', 'numba', 'numpy slice', 'numpy roll', 'pure Python']
plot_order = [p for p in plot_order if p in all_implementations]
def bar_plot(*shape): def bar_plot(*shape):
names = tuple(all_implementations.keys()) names = plot_order
runtimes = tuple(do_benchmark(all_implementations[name], shape) for name in names) runtimes = tuple(do_benchmark(all_implementations[name], shape) for name in names)
for runtime, name in zip(runtimes, names):
assert runtime >= runtimes[names.index('pystencils')], runtimes
speedups = tuple(runtime / min(runtimes) for runtime in runtimes) speedups = tuple(runtime / min(runtimes) for runtime in runtimes)
y_pos = np.arange(len(names)) y_pos = np.arange(len(names))
labels = tuple(f"{name} ({round(speedup, 1)} x)" for name, speedup in zip(names, speedups)) labels = tuple(f"{name} ({round(speedup, 1)} x)" for name, speedup in zip(names, speedups))
plt.text(0.5, 0.5, f"Size {shape}", horizontalalignment='center', fontsize=16, plt.text(0.5, 0.5, f"Size {shape}", horizontalalignment='center', fontsize=16,
verticalalignment='center', transform=plt.gca().transAxes) verticalalignment='center', transform=plt.gca().transAxes)
plt.barh(y_pos, runtimes, log=True) plt.barh(y_pos, runtimes, log=True)
plt.yticks(y_pos, labels); plt.yticks(y_pos, labels);
plt.xlabel('Runtime of single iteration') plt.xlabel('Runtime of single iteration')
plt.figure(figsize=(8, 8)) plt.figure(figsize=(8, 8))
plt.subplot(3, 1, 1) plt.subplot(3, 1, 1)
bar_plot(16, 16) bar_plot(32, 32)
plt.subplot(3, 1, 2) plt.subplot(3, 1, 2)
bar_plot(128, 128) bar_plot(128, 128)
plt.subplot(3, 1, 3) plt.subplot(3, 1, 3)
bar_plot(1024, 1024) bar_plot(2048, 2048)
``` ```
%% Output %% Output
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
All runtimes are plotted logarithmically. Next number next to the labels shows how much slower the version is than the fastest one. For small arrays Cython produces faster code than *pystencils*. The larger the arrays, the better pystencils gets. All runtimes are plotted logarithmically. Numbers next to the labels show how much slower the version is than the fastest one.
......
%% Cell type:code id: tags:
``` python
from pystencils.session import *
```
%% Cell type:markdown id: tags:
# Demo: Working with derivatives
## Overview
This notebook demonstrates how to formulate continuous differential operators in *pystencils* and automatically derive finite difference stencils from them.
Instead of using the built-in derivatives in *sympy*, *pystencils* comes with its own derivative objects. They represent spatial derivatives of pystencils fields.
%% Cell type:code id: tags:
``` python
f = ps.fields("f: [2D]")
first_derivative_x = ps.fd.diff(f, 0)
first_derivative_x
```
%% Output
$\displaystyle {\partial_{0} {f}_{(0,0)}}$
D(f[0,0])
%% Cell type:markdown id: tags:
This object is the derivative of the field $f$ with respect to the first spatial coordinate $x$. To get a finite difference approximation, a discretization strategy is required:
%% Cell type:code id: tags:
``` python
discretize_2nd_order = ps.fd.Discretization2ndOrder(dx=sp.symbols("h"))
discretize_2nd_order(first_derivative_x)
```
%% Output
$\displaystyle \frac{{f}_{(1,0)} - {f}_{(-1,0)}}{2 h}$
f_E - f_W
─────────
2⋅h
%% Cell type:markdown id: tags:
Strictly speaking, derivative objects act on *field accesses*, not *fields*; that is why there is a $(0,0)$ index at the field:
%% Cell type:code id: tags:
``` python
first_derivative_x
```
%% Output
$\displaystyle {\partial_{0} {f}_{(0,0)}}$
D(f[0,0])
%% Cell type:markdown id: tags:
Sometimes it might be useful to specify derivatives at an offset, e.g.:
%% Cell type:code id: tags:
``` python
derivative_offset = ps.fd.diff(f[0, 1], 0)
derivative_offset, discretize_2nd_order(derivative_offset)
```
%% Output
$\displaystyle \left( {\partial_{0} {f}_{(0,1)}}, \ \frac{{f}_{(1,1)} - {f}_{(-1,1)}}{2 h}\right)$
⎛ f_NE - f_NW⎞
⎜D(f[0,1]), ───────────⎟
⎝ 2⋅h ⎠
%% Cell type:markdown id: tags:
Another example with second-order derivatives:
%% Cell type:code id: tags:
``` python
laplacian = ps.fd.diff(f, 0, 0) + ps.fd.diff(f, 1, 1)
laplacian
```
%% Output
$\displaystyle {\partial_{0} {\partial_{0} {f}_{(0,0)}}} + {\partial_{1} {\partial_{1} {f}_{(0,0)}}}$
D(D(f[0,0])) + D(D(f[0,0]))
%% Cell type:code id: tags:
``` python
discretize_2nd_order(laplacian)
```
%% Output
$\displaystyle \frac{- 2 {f}_{(0,0)} + {f}_{(1,0)} + {f}_{(-1,0)}}{h^{2}} + \frac{- 2 {f}_{(0,0)} + {f}_{(0,1)} + {f}_{(0,-1)}}{h^{2}}$
-2⋅f_C + f_E + f_W -2⋅f_C + f_N + f_S
────────────────── + ──────────────────
2 2
h h
%% Cell type:markdown id: tags:
## Working with derivative terms
No automatic simplifications are done on derivative terms, i.e. linearity relations or the product rule are not applied automatically.
%% Cell type:code id: tags:
``` python
f, g = ps.fields("f, g :[2D]")
c = sp.symbols("c")
δ = ps.fd.diff
expr = δ( δ(f, 0) + δ(g, 0) + c + 5 , 0)
expr
```
%% Output
$\displaystyle {\partial_{0} (c + {\partial_{0} {f}_{(0,0)}} + {\partial_{0} {g}_{(0,0)}} + 5) }$
D(c + Diff(f_C, 0, -1) + Diff(g_C, 0, -1) + 5)
%% Cell type:markdown id: tags:
This nested term cannot be discretized automatically.
%% Cell type:code id: tags:
``` python
try:
discretize_2nd_order(expr)
except ValueError as e:
print(e)
```
%% Output
Only derivatives with field or field accesses as arguments can be discretized
%% Cell type:markdown id: tags:
### Linearity
The following function expands all derivatives exploiting linearity:
%% Cell type:code id: tags:
``` python
ps.fd.expand_diff_linear(expr)
```
%% Output
$\displaystyle {\partial_{0} c} + {\partial_{0} {\partial_{0} {f}_{(0,0)}}} + {\partial_{0} {\partial_{0} {g}_{(0,0)}}}$
D(c) + D(D(f[0,0])) + D(D(g[0,0]))
%% Cell type:markdown id: tags:
The symbol $c$ that was included is interpreted as a function by default.
We can control the simplification behaviour by specifying all functions or all constants:
%% Cell type:code id: tags:
``` python
ps.fd.expand_diff_linear(expr, functions=(f[0,0], g[0, 0]))
```
%% Output
$\displaystyle {\partial_{0} {\partial_{0} {f}_{(0,0)}}} + {\partial_{0} {\partial_{0} {g}_{(0,0)}}}$
D(D(f[0,0])) + D(D(g[0,0]))
%% Cell type:code id: tags:
``` python
ps.fd.expand_diff_linear(expr, constants=[c])
```
%% Output
$\displaystyle {\partial_{0} {\partial_{0} {f}_{(0,0)}}} + {\partial_{0} {\partial_{0} {g}_{(0,0)}}}$
D(D(f[0,0])) + D(D(g[0,0]))
%% Cell type:markdown id: tags:
The expanded term can then be discretized:
%% Cell type:code id: tags:
``` python
discretize_2nd_order(ps.fd.expand_diff_linear(expr, constants=[c]))
```
%% Output
$\displaystyle \frac{- 2 {f}_{(0,0)} + {f}_{(1,0)} + {f}_{(-1,0)}}{h^{2}} + \frac{- 2 {g}_{(0,0)} + {g}_{(1,0)} + {g}_{(-1,0)}}{h^{2}}$
-2⋅f_C + f_E + f_W -2⋅g_C + g_E + g_W
────────────────── + ──────────────────
2 2
h h
%% Cell type:markdown id: tags:
### Product rule
The next cells show how to apply the product rule and its reverse:
%% Cell type:code id: tags:
``` python
expr = δ(f[0, 0] * g[0, 0], 0 )
expr
```
%% Output
$\displaystyle {\partial_{0} ({f}_{(0,0)} {g}_{(0,0)}) }$
D(f_C*g_C)
%% Cell type:code id: tags:
``` python
expanded_expr = ps.fd.expand_diff_products(expr)
expanded_expr
```
%% Output
$\displaystyle {f}_{(0,0)} {\partial_{0} {g}_{(0,0)}} + {g}_{(0,0)} {\partial_{0} {f}_{(0,0)}}$
f_C⋅D(g[0,0]) + g_C⋅D(f[0,0])
%% Cell type:code id: tags:
``` python
recombined_expr = ps.fd.combine_diff_products(expanded_expr)
recombined_expr
```
%% Output
$\displaystyle {\partial_{0} ({f}_{(0,0)} {g}_{(0,0)}) }$
D(f_C*g_C)
%% Cell type:code id: tags:
``` python
assert recombined_expr == expr
```
%% Cell type:markdown id: tags:
### Evaluate derivatives
Arguments of derivatives need not be fields; fields are only required when discretizing them.
The next cells show how to transform them to *sympy* derivatives and evaluate them.
%% Cell type:code id: tags:
``` python
k = sp.symbols("k")
expr = δ(k**3 + 2 * k, 0 )
expr
```
%% Output
$\displaystyle {\partial_{0} (k^{3} + 2 k) }$
D(k**3 + 2*k)
%% Cell type:code id: tags:
``` python
ps.fd.evaluate_diffs(expr, var=k)
```
%% Output
$\displaystyle 3 k^{2} + 2$
2
3⋅k + 2
...@@ -5,10 +5,12 @@ API Reference ...@@ -5,10 +5,12 @@ API Reference
:maxdepth: 3 :maxdepth: 3
kernel_compile_and_call.rst kernel_compile_and_call.rst
enums.rst
simplifications.rst simplifications.rst
datahandling.rst datahandling.rst
configuration.rst configuration.rst
field.rst field.rst
stencil.rst
finite_differences.rst finite_differences.rst
plot.rst plot.rst
ast.rst ast.rst
************
Enumerations
************
.. automodule:: pystencils.enums
:members:
...@@ -8,9 +8,14 @@ Creating kernels ...@@ -8,9 +8,14 @@ Creating kernels
.. autofunction:: pystencils.create_kernel .. autofunction:: pystencils.create_kernel
.. autofunction:: pystencils.create_indexed_kernel .. autoclass:: pystencils.CreateKernelConfig
:members:
.. autofunction:: pystencils.create_staggered_kernel .. autofunction:: pystencils.kernelcreation.create_domain_kernel
.. autofunction:: pystencils.kernelcreation.create_indexed_kernel
.. autofunction:: pystencils.kernelcreation.create_staggered_kernel
Code printing Code printing
...@@ -22,11 +27,11 @@ Code printing ...@@ -22,11 +27,11 @@ Code printing
GPU Indexing GPU Indexing
------------- -------------
.. autoclass:: pystencils.gpucuda.AbstractIndexing .. autoclass:: pystencils.gpu.AbstractIndexing
:members: :members:
.. autoclass:: pystencils.gpucuda.BlockIndexing .. autoclass:: pystencils.gpu.BlockIndexing
:members: :members:
.. autoclass:: pystencils.gpucuda.LineIndexing .. autoclass:: pystencils.gpu.LineIndexing
:members: :members:
...@@ -2,7 +2,7 @@ ...@@ -2,7 +2,7 @@
Plotting and Animation Plotting and Animation
********************** **********************
.. automodule:: pystencils.plot2d .. automodule:: pystencils.plot
:members: :members:
...@@ -10,13 +10,27 @@ AssignmentCollection ...@@ -10,13 +10,27 @@ AssignmentCollection
:members: :members:
SimplificationStrategy
======================
.. autoclass:: pystencils.simp.SimplificationStrategy
:members:
Simplifications Simplifications
=============== ===============
.. automodule:: pystencils.simp .. automodule:: pystencils.simp.simplifications
:members: :members:
Subexpression insertion
=======================
The subexpression insertion functions inline those subexpressions whose elimination does not save any FLOPs.
For example, a constant value kept as a subexpression leads to a new variable in the code, which occupies
a register slot. On the other hand, a single variable can simply be inserted into all assignments that use it.
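A sketch of typical usage (assuming an assignment collection ``ac`` and the insertion helpers
``insert_constants`` and ``insert_aliases`` from this module)::

    from pystencils.simp import insert_constants, insert_aliases

    ac = insert_constants(ac)  # inline subexpressions that are plain constants
    ac = insert_aliases(ac)    # inline subexpressions that merely rename another symbol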
.. automodule:: pystencils.simp.subexpression_insertion
:members:
......
*******
Stencil
*******
.. automodule:: pystencils.stencil
:members:
...@@ -15,6 +15,7 @@ It is a good idea to download them and run them directly to be able to play arou ...@@ -15,6 +15,7 @@ It is a good idea to download them and run them directly to be able to play arou
/notebooks/05_tutorial_phasefield_spinodal_decomposition.ipynb /notebooks/05_tutorial_phasefield_spinodal_decomposition.ipynb
/notebooks/06_tutorial_phasefield_dentritic_growth.ipynb /notebooks/06_tutorial_phasefield_dentritic_growth.ipynb
/notebooks/demo_assignment_collection.ipynb /notebooks/demo_assignment_collection.ipynb
/notebooks/demo_plotting_and_animation.ipynb
/notebooks/demo_derivatives.ipynb
/notebooks/demo_benchmark.ipynb /notebooks/demo_benchmark.ipynb
/notebooks/demo_wave_equation.ipynb /notebooks/demo_wave_equation.ipynb
[project]
name = "pystencils"
description = "Speeding up stencil computations on CPUs and GPUs"
dynamic = ["version"]
readme = "README.md"
authors = [
{ name = "Martin Bauer" },
{ name = "Jan Hönig " },
{ name = "Markus Holzer" },
{ name = "Frederik Hennig" },
{ email = "cs10-codegen@fau.de" },
]
license = { file = "COPYING.txt" }
requires-python = ">=3.10"
dependencies = ["sympy>=1.9,<=1.12.1", "numpy>=1.8.0", "appdirs", "joblib", "pyyaml", "fasteners"]
classifiers = [
"Development Status :: 4 - Beta",
"Framework :: Jupyter",
"Topic :: Software Development :: Code Generators",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
]
[project.urls]
"Bug Tracker" = "https://i10git.cs.fau.de/pycodegen/pystencils/-/issues"
"Documentation" = "https://pycodegen.pages.i10git.cs.fau.de/pystencils/"
"Source Code" = "https://i10git.cs.fau.de/pycodegen/pystencils"
[project.optional-dependencies]
gpu = ['cupy']
alltrafos = ['islpy', 'py-cpuinfo']
bench_db = ['blitzdb', 'pymongo', 'pandas']
interactive = [
'matplotlib',
'ipy_table',
'imageio',
'jupyter',
'pyevtk',
'rich',
'graphviz',
]
use_cython = [
'Cython'
]
doc = [
'sphinx',
'sphinx_rtd_theme',
'nbsphinx',
'sphinxcontrib-bibtex',
'sphinx_autodoc_typehints',
'pandoc',
]
tests = [
'pytest',
'pytest-cov',
'pytest-html',
'ansi2html',
'pytest-xdist',
'flake8',
'nbformat',
'nbconvert',
'ipython',
'matplotlib',
'py-cpuinfo',
'randomgen>=1.18',
]
[build-system]
requires = [
"setuptools>=61",
"versioneer[toml]>=0.29",
# 'Cython'
]
build-backend = "setuptools.build_meta"
[tool.setuptools.package-data]
pystencils = [
"include/*.h",
"boundaries/createindexlistcython.pyx"
]
[tool.setuptools.packages.find]
where = ["src"]
include = ["pystencils", "pystencils.*"]
namespaces = false
[tool.versioneer]
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
VCS = "git"
style = "pep440"
versionfile_source = "src/pystencils/_version.py"
versionfile_build = "pystencils/_version.py"
tag_prefix = "release/"
parentdir_prefix = "pystencils-"
import sympy as sp
from collections import namedtuple
from sympy.core import S
from typing import Set
from sympy.printing.ccode import C89CodePrinter
from pystencils.fast_approximation import fast_division, fast_sqrt, fast_inv_sqrt
try:
from sympy.printing.ccode import C99CodePrinter as CCodePrinter
except ImportError:
from sympy.printing.ccode import CCodePrinter # for sympy versions < 1.1
from pystencils.integer_functions import bitwise_xor, bit_shift_right, bit_shift_left, bitwise_and, \
bitwise_or, modulo_ceil
from pystencils.astnodes import Node, KernelFunction
from pystencils.data_types import create_type, PointerType, get_type_of_expression, VectorType, cast_func, \
vector_memory_access, reinterpret_cast_func
__all__ = ['generate_c', 'CustomCodeNode', 'PrintNode', 'get_headers', 'CustomSympyPrinter']
def generate_c(ast_node: Node, signature_only: bool = False, dialect='c') -> str:
"""Prints an abstract syntax tree node as C or CUDA code.
This function does not need to distinguish between C, C++ or CUDA code, it just prints 'C-like' code as encoded
in the abstract syntax tree (AST). The AST is built differently for C or CUDA by calling different create_kernel
functions.
Args:
ast_node: the AST node (typically a KernelFunction) to print
signature_only: if True, only the function signature is generated
dialect: 'c' or 'cuda'
Returns:
C-like code for the ast node and its descendants
"""
printer = CBackend(signature_only=signature_only,
vector_instruction_set=ast_node.instruction_set,
dialect=dialect)
return printer(ast_node)
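# Usage sketch (our own example, not part of this module):
#   kernel_ast = pystencils.create_kernel(assignments)
#   code = generate_c(kernel_ast)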
def get_headers(ast_node: Node) -> Set[str]:
"""Return a set of header files, necessary to compile the printed C-like code."""
headers = set()
if isinstance(ast_node, KernelFunction) and ast_node.instruction_set:
headers.update(ast_node.instruction_set['headers'])
if hasattr(ast_node, 'headers'):
headers.update(ast_node.headers)
for a in ast_node.args:
if isinstance(a, Node):
headers.update(get_headers(a))
return headers
# --------------------------------------- Backend Specific Nodes -------------------------------------------------------
class CustomCodeNode(Node):
def __init__(self, code, symbols_read, symbols_defined, parent=None):
super(CustomCodeNode, self).__init__(parent=parent)
self._code = "\n" + code
self._symbolsRead = set(symbols_read)
self._symbolsDefined = set(symbols_defined)
self.headers = []
def get_code(self, dialect, vector_instruction_set):
return self._code
@property
def args(self):
return []
@property
def symbols_defined(self):
return self._symbolsDefined
@property
def undefined_symbols(self):
# undefined symbols are those read by the code but not defined by it
return self._symbolsRead - self._symbolsDefined
class PrintNode(CustomCodeNode):
# noinspection SpellCheckingInspection
def __init__(self, symbol_to_print):
code = '\nstd::cout << "%s = " << %s << std::endl; \n' % (symbol_to_print.name, symbol_to_print.name)
super(PrintNode, self).__init__(code, symbols_read=[symbol_to_print], symbols_defined=set())
self.headers.append("<iostream>")
# ------------------------------------------- Printer ------------------------------------------------------------------
# noinspection PyPep8Naming
class CBackend:
def __init__(self, sympy_printer=None,
signature_only=False, vector_instruction_set=None, dialect='c'):
if sympy_printer is None:
if vector_instruction_set is not None:
self.sympy_printer = VectorizedCustomSympyPrinter(vector_instruction_set, dialect)
else:
self.sympy_printer = CustomSympyPrinter(dialect)
else:
self.sympy_printer = sympy_printer
self._vector_instruction_set = vector_instruction_set
self._indent = " "
self._dialect = dialect
self._signatureOnly = signature_only
def __call__(self, node):
prev_is = VectorType.instruction_set
VectorType.instruction_set = self._vector_instruction_set
result = str(self._print(node))
VectorType.instruction_set = prev_is
return result
def _print(self, node):
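# walk the class hierarchy of the node and dispatch to the most specific
# _print_<ClassName> method this backend implements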
for cls in type(node).__mro__:
method_name = "_print_" + cls.__name__
if hasattr(self, method_name):
return getattr(self, method_name)(node)
raise NotImplementedError("CBackend does not support node of type " + str(type(node)))
def _print_KernelFunction(self, node):
function_arguments = ["%s %s" % (str(s.symbol.dtype), s.symbol.name) for s in node.get_parameters()]
func_declaration = "FUNC_PREFIX void %s(%s)" % (node.function_name, ", ".join(function_arguments))
if self._signatureOnly:
return func_declaration
body = self._print(node.body)
return func_declaration + "\n" + body
def _print_Block(self, node):
block_contents = "\n".join([self._print(child) for child in node.args])
return "{\n%s\n}" % (self._indent + self._indent.join(block_contents.splitlines(True)))
def _print_PragmaBlock(self, node):
return "%s\n%s" % (node.pragma_line, self._print_Block(node))
def _print_LoopOverCoordinate(self, node):
counter_symbol = node.loop_counter_name
start = "int %s = %s" % (counter_symbol, self.sympy_printer.doprint(node.start))
condition = "%s < %s" % (counter_symbol, self.sympy_printer.doprint(node.stop))
update = "%s += %s" % (counter_symbol, self.sympy_printer.doprint(node.step),)
loop_str = "for (%s; %s; %s)" % (start, condition, update)
prefix = "\n".join(node.prefix_lines)
if prefix:
prefix += "\n"
return "%s%s\n%s" % (prefix, loop_str, self._print(node.body))
def _print_SympyAssignment(self, node):
if node.is_declaration:
data_type = "const " + str(node.lhs.dtype) + " " if node.is_const else str(node.lhs.dtype) + " "
return "%s%s = %s;" % (data_type, self.sympy_printer.doprint(node.lhs),
self.sympy_printer.doprint(node.rhs))
else:
lhs_type = get_type_of_expression(node.lhs)
if type(lhs_type) is VectorType and isinstance(node.lhs, cast_func):
arg, data_type, aligned, nontemporal = node.lhs.args
instr = 'storeU'
if aligned:
instr = 'stream' if nontemporal else 'storeA'
rhs_type = get_type_of_expression(node.rhs)
if type(rhs_type) is not VectorType:
rhs = cast_func(node.rhs, VectorType(rhs_type))
else:
rhs = node.rhs
return self._vector_instruction_set[instr].format("&" + self.sympy_printer.doprint(node.lhs.args[0]),
self.sympy_printer.doprint(rhs)) + ';'
else:
return "%s = %s;" % (self.sympy_printer.doprint(node.lhs), self.sympy_printer.doprint(node.rhs))
def _print_TemporaryMemoryAllocation(self, node):
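# over-allocate and round the size up to a multiple of the alignment, then
# shift the returned pointer by a node-specific offset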
align = 64
np_dtype = node.symbol.dtype.base_type.numpy_dtype
required_size = np_dtype.itemsize * node.size + align
size = modulo_ceil(required_size, align)
code = "{dtype} {name}=({dtype})aligned_alloc({align}, {size}) + {offset};"
return code.format(dtype=node.symbol.dtype,
name=self.sympy_printer.doprint(node.symbol.name),
size=self.sympy_printer.doprint(size),
offset=int(node.offset(align)),
align=align)
def _print_TemporaryMemoryFree(self, node):
align = 64
return "free(%s - %d);" % (self.sympy_printer.doprint(node.symbol.name), node.offset(align))
def _print_CustomCodeNode(self, node):
return node.get_code(self._dialect, self._vector_instruction_set)
def _print_Conditional(self, node):
condition_expr = self.sympy_printer.doprint(node.condition_expr)
true_block = self._print_Block(node.true_block)
result = "if (%s)\n%s " % (condition_expr, true_block)
if node.false_block:
false_block = self._print_Block(node.false_block)
result += "else " + false_block
return result
# ------------------------------------------ Helper function & classes -------------------------------------------------
# noinspection PyPep8Naming
class CustomSympyPrinter(CCodePrinter):
def __init__(self, dialect):
super(CustomSympyPrinter, self).__init__()
self._float_type = create_type("float32")
self._dialect = dialect
if 'Min' in self.known_functions:
del self.known_functions['Min']
if 'Max' in self.known_functions:
del self.known_functions['Max']
def _print_Pow(self, expr):
"""Don't use std::pow function, for small integer exponents, write as multiplication"""
if expr.exp.is_integer and expr.exp.is_number and 0 < expr.exp < 8:
return "(" + self._print(sp.Mul(*[expr.base] * expr.exp, evaluate=False)) + ")"
elif expr.exp.is_integer and expr.exp.is_number and - 8 < expr.exp < 0:
return "1 / ({})".format(self._print(sp.Mul(*[expr.base] * (-expr.exp), evaluate=False)))
else:
return super(CustomSympyPrinter, self)._print_Pow(expr)
def _print_Rational(self, expr):
"""Evaluate all rationals i.e. print 0.25 instead of 1.0/4.0"""
res = str(expr.evalf().num)
return res
def _print_Equality(self, expr):
"""Equality operator is not printable in default printer"""
return '((' + self._print(expr.lhs) + ") == (" + self._print(expr.rhs) + '))'
def _print_Piecewise(self, expr):
"""Print piecewise in one line (remove newlines)"""
result = super(CustomSympyPrinter, self)._print_Piecewise(expr)
return result.replace("\n", "")
def _print_Function(self, expr):
infix_functions = {
bitwise_xor: '^',
bit_shift_right: '>>',
bit_shift_left: '<<',
bitwise_or: '|',
bitwise_and: '&',
}
if hasattr(expr, 'to_c'):
return expr.to_c(self._print)
if isinstance(expr, reinterpret_cast_func):
arg, data_type = expr.args
return "*((%s)(& %s))" % (PointerType(data_type, restrict=False), self._print(arg))
elif isinstance(expr, cast_func):
arg, data_type = expr.args
if isinstance(arg, sp.Number):
return self._typed_number(arg, data_type)
else:
return "((%s)(%s))" % (data_type, self._print(arg))
elif isinstance(expr, fast_division):
if self._dialect == "cuda":
return "__fdividef(%s, %s)" % tuple(self._print(a) for a in expr.args)
else:
return "({})".format(self._print(expr.args[0] / expr.args[1]))
elif isinstance(expr, fast_sqrt):
if self._dialect == "cuda":
return "__fsqrt_rn(%s)" % tuple(self._print(a) for a in expr.args)
else:
return "({})".format(self._print(sp.sqrt(expr.args[0])))
elif isinstance(expr, fast_inv_sqrt):
if self._dialect == "cuda":
return "__frsqrt_rn(%s)" % tuple(self._print(a) for a in expr.args)
else:
return "({})".format(self._print(1 / sp.sqrt(expr.args[0])))
elif expr.func in infix_functions:
return "(%s %s %s)" % (self._print(expr.args[0]), infix_functions[expr.func], self._print(expr.args[1]))
else:
return super(CustomSympyPrinter, self)._print_Function(expr)
def _typed_number(self, number, dtype):
res = self._print(number)
if dtype.is_float():
if dtype == self._float_type:
if '.' not in res:
res += ".0f"
else:
res += "f"
return res
else:
return res
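    # Example: with float32 as the target type, the literal 1 becomes "1.0f"
    # and 0.5 becomes "0.5f"; all other types keep the default printed form.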
_print_Max = C89CodePrinter._print_Max
_print_Min = C89CodePrinter._print_Min
# noinspection PyPep8Naming
class VectorizedCustomSympyPrinter(CustomSympyPrinter):
SummandInfo = namedtuple("SummandInfo", ['sign', 'term'])
def __init__(self, instruction_set, dialect):
super(VectorizedCustomSympyPrinter, self).__init__(dialect=dialect)
self.instruction_set = instruction_set
def _scalarFallback(self, func_name, expr, *args, **kwargs):
expr_type = get_type_of_expression(expr)
if type(expr_type) is not VectorType:
return getattr(super(VectorizedCustomSympyPrinter, self), func_name)(expr, *args, **kwargs)
else:
assert self.instruction_set['width'] == expr_type.width
return None
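    # Fallback pattern used by every _print_* method below: expressions that are
    # not of VectorType are delegated to the scalar printer; only genuine vector
    # expressions are rendered through self.instruction_set intrinsics.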
def _print_Function(self, expr):
if isinstance(expr, vector_memory_access):
arg, data_type, aligned, _ = expr.args
instruction = self.instruction_set['loadA'] if aligned else self.instruction_set['loadU']
return instruction.format("& " + self._print(arg))
elif isinstance(expr, cast_func):
arg, data_type = expr.args
if type(data_type) is VectorType:
return self.instruction_set['makeVec'].format(self._print(arg))
        elif expr.func == fast_division:
            result = self._scalarFallback('_print_Function', expr)
            if not result:
                result = self.instruction_set['/'].format(self._print(expr.args[0]), self._print(expr.args[1]))
            return result
        elif expr.func == fast_sqrt:
            return "({})".format(self._print(sp.sqrt(expr.args[0])))
        elif expr.func == fast_inv_sqrt:
            result = self._scalarFallback('_print_Function', expr)
            if not result:
                if self.instruction_set['rsqrt']:
                    result = self.instruction_set['rsqrt'].format(self._print(expr.args[0]))
                else:
                    result = "({})".format(self._print(1 / sp.sqrt(expr.args[0])))
            return result
return super(VectorizedCustomSympyPrinter, self)._print_Function(expr)
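    # Illustrative sketch (assuming an SSE-style instruction set dict with, e.g.,
    # instruction_set['loadU'] == "_mm_loadu_pd({0})"): an unaligned
    # vector_memory_access of data[i] would print as "_mm_loadu_pd(& data[i])".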
def _print_And(self, expr):
result = self._scalarFallback('_print_And', expr)
if result:
return result
arg_strings = [self._print(a) for a in expr.args]
assert len(arg_strings) > 0
result = arg_strings[0]
for item in arg_strings[1:]:
result = self.instruction_set['&'].format(result, item)
return result
def _print_Or(self, expr):
result = self._scalarFallback('_print_Or', expr)
if result:
return result
arg_strings = [self._print(a) for a in expr.args]
assert len(arg_strings) > 0
result = arg_strings[0]
for item in arg_strings[1:]:
result = self.instruction_set['|'].format(result, item)
return result
def _print_Add(self, expr, order=None):
result = self._scalarFallback('_print_Add', expr)
if result:
return result
summands = []
for term in expr.args:
if term.func == sp.Mul:
sign, t = self._print_Mul(term, inside_add=True)
else:
t = self._print(term)
sign = 1
summands.append(self.SummandInfo(sign, t))
# Use positive terms first
summands.sort(key=lambda e: e.sign, reverse=True)
# if no positive term exists, prepend a zero
if summands[0].sign == -1:
summands.insert(0, self.SummandInfo(1, "0"))
assert len(summands) >= 2
processed = summands[0].term
for summand in summands[1:]:
func = self.instruction_set['-'] if summand.sign == -1 else self.instruction_set['+']
processed = func.format(processed, summand.term)
return processed
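    # Example: a - b + c sorts the positive summands first (a, c, then -b) and
    # folds left, conceptually producing sub(add(a, c), b) via the instruction
    # set's '+' and '-' patterns.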
def _print_Pow(self, expr):
result = self._scalarFallback('_print_Pow', expr)
if result:
return result
        one = self.instruction_set['makeVec'].format(1.0)

        if expr.exp.is_integer and expr.exp.is_number and 0 < expr.exp < 8:
            return "(" + self._print(sp.Mul(*[expr.base] * expr.exp, evaluate=False)) + ")"
        elif expr.exp == -1:
            return self.instruction_set['/'].format(one, self._print(expr.base))
        elif expr.exp == 0.5:
            return self.instruction_set['sqrt'].format(self._print(expr.base))
        elif expr.exp == -0.5:
            root = self.instruction_set['sqrt'].format(self._print(expr.base))
            return self.instruction_set['/'].format(one, root)
        elif expr.exp.is_integer and expr.exp.is_number and -8 < expr.exp < 0:
            return self.instruction_set['/'].format(
                one, self._print(sp.Mul(*[expr.base] * (-expr.exp), evaluate=False)))
        else:
            raise ValueError("Generic exponential not supported: " + str(expr))
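    # Example: x**-0.5 becomes div(makeVec(1.0), sqrt(x)) in the instruction
    # set's notation; any exponent outside the handled cases raises ValueError.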
def _print_Mul(self, expr, inside_add=False):
# noinspection PyProtectedMember
from sympy.core.mul import _keep_coeff
result = self._scalarFallback('_print_Mul', expr)
if result:
return result
c, e = expr.as_coeff_Mul()
if c < 0:
expr = _keep_coeff(-c, e)
sign = -1
else:
sign = 1
a = [] # items in the numerator
b = [] # items that are in the denominator (if any)
# Gather args for numerator/denominator
for item in expr.as_ordered_factors():
if item.is_commutative and item.is_Pow and item.exp.is_Rational and item.exp.is_negative:
if item.exp != -1:
b.append(sp.Pow(item.base, -item.exp, evaluate=False))
else:
b.append(sp.Pow(item.base, -item.exp))
else:
a.append(item)
a = a or [S.One]
a_str = [self._print(x) for x in a]
b_str = [self._print(x) for x in b]
result = a_str[0]
for item in a_str[1:]:
result = self.instruction_set['*'].format(result, item)
if len(b) > 0:
denominator_str = b_str[0]
for item in b_str[1:]:
denominator_str = self.instruction_set['*'].format(denominator_str, item)
result = self.instruction_set['/'].format(result, denominator_str)
if inside_add:
return sign, result
else:
if sign < 0:
return self.instruction_set['*'].format(self._print(S.NegativeOne), result)
else:
return result
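    # Example: (a*b)/(c*d) prints as div(mul(a, b), mul(c, d)) in the instruction
    # set's notation; a negative leading coefficient is either reported to
    # _print_Add via the returned sign or folded in as a multiplication by -1.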
def _print_Relational(self, expr):
result = self._scalarFallback('_print_Relational', expr)
if result:
return result
return self.instruction_set[expr.rel_op].format(self._print(expr.lhs), self._print(expr.rhs))
def _print_Equality(self, expr):
result = self._scalarFallback('_print_Equality', expr)
if result:
return result
return self.instruction_set['=='].format(self._print(expr.lhs), self._print(expr.rhs))
def _print_Piecewise(self, expr):
result = self._scalarFallback('_print_Piecewise', expr)
if result:
return result
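        # Note (assumption): the final condition is expected to be wrapped,
        # e.g. in a cast to the vector boolean type, so the underlying Boolean
        # is found in cond.args[0] rather than in cond itself.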
if expr.args[-1].cond.args[0] is not sp.sympify(True):
# We need the last conditional to be a True, otherwise the resulting
# function may not return a result.
raise ValueError("All Piecewise expressions must contain an "
"(expr, True) statement to be used as a default "
"condition. Without one, the generated "
"expression may not evaluate to anything under "
"some condition.")
result = self._print(expr.args[-1][0])
for true_expr, condition in reversed(expr.args[:-1]):
# noinspection SpellCheckingInspection
result = self.instruction_set['blendv'].format(result, self._print(true_expr), self._print(condition))
return result
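    # Example: Piecewise((a, cond), (b, True)) folds right to left into a single
    # blend, conceptually blendv(b, a, cond): lanes where cond is set take a,
    # all other lanes keep the default b.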