Commit 43393627 authored by Jan Hönig

Merge branch 'testing' into 'master'

Testing

See merge request pycodegen/pystencils!272
parents a3ba7708 fa121c19
@@ -18,6 +18,7 @@ test-report
pystencils/boundaries/createindexlistcython.c
pystencils/boundaries/createindexlistcython.*.so
pystencils_tests/tmp
pystencils_tests/var
pystencils_tests/kerncraft_inputs/.2d-5pt.c_kerncraft/
pystencils_tests/kerncraft_inputs/.3d-7pt.c_kerncraft/
report.xml
...
%% Cell type:code id: tags:
``` python
import psutil
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot, cm
from pystencils.session import *
from pystencils.boundaries import add_neumann_boundary, Neumann, Dirichlet, BoundaryHandling
from pystencils.slicing import slice_from_direction
import math
import time
%matplotlib inline
```
%% Cell type:markdown id: tags:
Test to see if pycuda is installed, which is needed to run calculations on the GPU.
%% Cell type:code id: tags:
``` python
try:
    import pycuda
    gpu = True
except ImportError:
    gpu = False
    pycuda = None
    print('No pycuda installed')
if pycuda:
    import pycuda.gpuarray as gpuarray
```
%%%% Output: stream
No pycuda installed
%% Cell type:markdown id: tags:
# Tutorial 03: Datahandling
%% Cell type:markdown id: tags:
This is a tutorial about the `DataHandling` class of pystencils. This class is an abstraction layer to
- link numpy arrays to pystencils fields
- handle CPU-GPU array transfer, such that one can write code that works on CPU and GPU
- make it possible to write MPI parallel simulations that run on distributed-memory clusters using the waLBerla library
We will look at a small and easy example to demonstrate the usage of `DataHandling` objects. We will define an averaging kernel that is applied to every cell of an array and writes the average of the neighboring cell values to the center.
%% Cell type:markdown id: tags:
## 1. Manual
### 1.1. CPU kernels
In this first part, we set up a scenario manually without a `DataHandling`. In the next sections we then repeat the same setup with the help of the data handling.
One concept of *pystencils* that may be confusing at first is the difference between pystencils fields and numpy arrays. Fields are used to describe the computation *symbolically* with sympy, while numpy arrays hold the actual values the computation is executed on.
One option to create and execute a *pystencils* kernel is listed below. For reasons that become clear later we call this the **variable-field-size workflow**:
1. define pystencils fields
2. use sympy and the pystencils fields to define an update rule that describes what should be done on *every cell*
3. compile the update rule to a real function that can be called from Python. For each field referenced in the symbolic description the function expects a numpy array, passed as a named parameter
4. create some numpy arrays with actual data
5. call the kernel - usually many times
Now, let's see how this actually looks in Python code:
%% Cell type:code id: tags:
``` python
# 1. field definitions
src_field, dst_field = ps.fields("src, dst:[2D]")
# 2. define update rule
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] +
                                  src_field[0, 1] + src_field[0, -1]) / 4)]
# 3. compile update rule to function
kernel_function = ps.create_kernel(update_rule).compile()
# 4. create numpy arrays and call kernel
src_arr, dst_arr = np.random.rand(30, 30), np.zeros([30, 30])
# 5. call kernel
kernel_function(src=src_arr, dst=dst_arr)  # names of arguments have to match names passed to ps.fields()
```
%% Cell type:markdown id: tags:
This workflow separates the symbolic and the numeric stages very cleanly. The separation also makes it possible to stop after step 3, write the C code to a file and call the kernel from a C program. Speaking of the C code - let's have a look at the generated sources:
%% Cell type:code id: tags:
``` python
ps.show_code(kernel_function.ast)
```
%%%% Output: display_data
%%%% Output: display_data
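%% Cell type:markdown id: tags:
As mentioned above, the generated sources can also be written to a file, e.g. to compile the kernel into a C program. A minimal sketch, not part of the original tutorial (it assumes `ps.get_code_str` is available in this pystencils version; the file name is made up):
%% Cell type:code id: tags:
``` python
# Dump the generated C code to a file so it could be compiled into a C program.
with open('averaging_kernel.c', 'w') as f:
    f.write(ps.get_code_str(kernel_function.ast))
```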
%% Cell type:markdown id: tags:
Even if it looks very ugly and low-level :) let's look at this code in a bit more detail. The code is generated in a way that it works for different array sizes. The size of the array is passed in the `_size_dst_` variables that specify the shape of the array for each dimension. Also, the memory layout (linearization) of the array can be different. That means the array could be stored in row-major or column-major order - if we pass in the array strides correctly the kernel does the right thing. If you're not familiar with the concept of strides check out [this stackoverflow post](https://stackoverflow.com/questions/53097952/how-to-understand-numpy-strides-for-layman) or search in the numpy documentation for strides - C vs Fortran order.
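A quick illustration of strides (a small sketch, not part of the original tutorial; the array shape is chosen arbitrarily):
%% Cell type:code id: tags:
``` python
import numpy as np

c_arr = np.zeros((30, 30))             # row-major (C) order, numpy's default
f_arr = np.zeros((30, 30), order='F')  # column-major (Fortran) order
# strides are given in bytes; one float64 element is 8 bytes
print(c_arr.strides)  # (240, 8)
print(f_arr.strides)  # (8, 240)
```
%% Cell type:markdown id: tags: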
The goal of *pystencils* is to produce the fastest possible code. One technique to achieve this is to use all information that is already available at compile time and generate code that is highly adapted to the specific problem. In our case we already know the shape and strides of the arrays we want to apply the kernel to, so we can make use of this information. This idea leads to the **fixed-field-size workflow**. The main difference is that we define the arrays first and thereby let *pystencils* know about the array shapes and strides, so that it can generate more specific code:
1. create numpy arrays that hold your data
2. define pystencils fields, this time telling pystencils already which arrays they correspond to, so that it knows about the size and strides; in the other steps nothing has changed
3. define the update rule
4. compile update rule to kernel
5. run the kernel
%% Cell type:code id: tags:
``` python
# 1. create arrays first
src_arr, dst_arr = np.random.rand(30, 30), np.zeros([30, 30])
# 2. define symbolic fields - note the additional parameters that link an array to each field
src_field, dst_field = ps.fields("src, dst:[2D]", src=src_arr, dst=dst_arr)
# 3. define update rule
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] +
                                  src_field[0, 1] + src_field[0, -1]) / 4)]
# 4. compile it
kernel_function = ps.create_kernel(update_rule).compile()
# 5. call kernel
kernel_function(src=src_arr, dst=dst_arr)  # names of arguments have to match names passed to ps.fields()
```
%% Cell type:markdown id: tags:
Functionally, both variants are equivalent. We see the difference only when we look at the generated code:
%% Cell type:code id: tags:
``` python
ps.show_code(kernel_function.ast)
```
%%%% Output: display_data
%%%% Output: display_data
%% Cell type:markdown id: tags:
Compare this to the code above! It looks much simpler. The reason is that all index computations are already simplified since the exact field sizes and strides are known. This kernel now only works on arrays of the previously specified size.
Let's see what happens if we use a different array:
%% Cell type:code id: tags:
``` python
src_arr2, dst_arr2 = np.random.rand(40, 40), np.zeros([40, 40])
try:
    kernel_function(src=src_arr2, dst=dst_arr2)
except ValueError as e:
    print(e)
```
%%%% Output: stream
Wrong shape of array dst. Expected (30, 30)
%% Cell type:markdown id: tags:
### 1.2. GPU simulations
Let's now jump to a seemingly unrelated topic: running kernels on the GPU.
When creating the kernel, an additional parameter `target=ps.Target.GPU` has to be passed. Also, the compiled kernel cannot be called with numpy arrays directly, but has to be called with `pycuda.gpuarray`s instead. That means we have to transfer our numpy arrays to the GPU first. From this step we obtain gpuarrays, then we can run the kernel - hopefully multiple times, so that the data transfer was worth the time. Finally we transfer the finished result back to the CPU:
%% Cell type:code id: tags:
``` python
if pycuda:
    config = ps.CreateKernelConfig(target=ps.Target.GPU)
    kernel_function_gpu = ps.create_kernel(update_rule, config=config).compile()
    # transfer to GPU
    src_arr_gpu = pycuda.gpuarray.to_gpu(src_arr)
    dst_arr_gpu = pycuda.gpuarray.to_gpu(dst_arr)
    # run kernel on GPU, this is done many times in real setups
    kernel_function_gpu(src=src_arr_gpu, dst=dst_arr_gpu)
    # transfer result back to CPU
    dst_arr_gpu.get(dst_arr)
```
%% Cell type:markdown id: tags:
### 1.3. Summary: manual way
- Don't confuse *pystencils* fields and *numpy* arrays
    - fields are symbolic
    - arrays are numeric
- Use the fixed-field-size workflow whenever possible, since the code might be faster. Create arrays first, then create fields from arrays
- if we run GPU kernels, arrays have to be transferred to the GPU first
As demonstrated in the examples above, we have to define 2 or 3 corresponding objects for each grid:
- symbolic pystencils field
- numpy array on CPU
- for GPU runs, additionally a pycuda.gpuarray that mirrors the data on the GPU
Managing these three objects manually is tedious and error-prone. We'll see in the next section how the data handling object takes care of this problem.
%% Cell type:markdown id: tags:
## 2. Introducing the data handling - serial version
### 2.1. Example for CPU simulations
The data handling internally keeps a mapping between symbolic fields and numpy arrays. When we create a field, a corresponding array is automatically allocated as well. Optionally, memory for the array can also be allocated on the GPU. Let's dive right in and see what our example looks like when implemented with a data handling.
%% Cell type:code id: tags:
``` python
dh = ps.create_data_handling(domain_size=(30, 30))
# fields are now created using the data handling
src_field = dh.add_array('src', values_per_cell=1)
dst_field = dh.add_array('dst', values_per_cell=1)
# kernel is created just like before
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] + src_field[0, 1] + src_field[0, -1]) / 4)]
kernel_function = ps.create_kernel(update_rule).compile()
# have a look at the generated code - it uses
# the fast version where array sizes are compiled-in
# ps.show_code(kernel_function.ast)
```
%% Cell type:markdown id: tags:
The data handling has methods to create fields - but where are the corresponding arrays?
In the serial case you can access them as a member of the data handling. For example, to initialize our 'src' array we can write:
%% Cell type:code id: tags:
``` python
src_arr = dh.cpu_arrays['src']
src_arr.fill(0.0)
```
%% Cell type:markdown id: tags:
This method is nice and easy, but you should not use it if you want your simulation to run on distributed-memory clusters. We'll see why in the last section. So it is a good habit not to access the arrays directly but to use the data handling instead. We can, for example, also initialize the array with the following code:
%% Cell type:code id: tags:
``` python
dh.fill('src', 0.0)
```
%% Cell type:markdown id: tags:
To run the kernels with the same code as before, we would also need the arrays. We could do that by accessing the `cpu_arrays`:
%% Cell type:code id: tags:
``` python
kernel_function(src=dh.cpu_arrays['src'],
                dst=dh.cpu_arrays['dst'])
```
%% Cell type:markdown id: tags:
but to be prepared for MPI parallel simulations, a method of the data handling should again be used for this.
Besides, this method is also simpler to use, since it automatically detects which arrays a kernel uses and passes them in.
%% Cell type:code id: tags:
``` python
dh.run_kernel(kernel_function)
```
%% Cell type:markdown id: tags:
To access the data read-only, the gather function should be used instead of `cpu_arrays`.
This function gives you a read-only copy of the domain or a part of the domain.
We will discuss this function later in more detail when we look at MPI parallel simulations.
For serial simulations keep in mind that modifying the resulting array does not change your original data!
%% Cell type:code id: tags:
``` python
read_only_copy = dh.gather_array('src', ps.make_slice[:, :], ghost_layers=False)
```
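%% Cell type:markdown id: tags:
For example, the gathered copy can be used for read-only post-processing such as simple reductions (a small sketch, not part of the original tutorial):
%% Cell type:code id: tags:
``` python
# Inspect the gathered data without touching the data handling's internal arrays.
print(read_only_copy.shape, read_only_copy.min(), read_only_copy.max())
```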
%% Cell type:markdown id: tags:
### 2.2. Example for GPU simulations
In this section we have a look at GPU simulations using the data handling. Only minimal changes are required.
When creating the data handling we can pass a `default_target`. This means that for every added field an array is allocated on the CPU and the GPU. This is a useful default; for more fine-grained control, the `add_array` method also takes additional parameters controlling where the array should be allocated.
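A possible sketch of such fine-grained control (this is an assumption, not code from the original tutorial: it presumes `add_array` accepts `cpu` and `gpu` keyword arguments, and the names `dh_example` and `cpu_only` are made up):
%% Cell type:code id: tags:
``` python
if gpu:
    # hypothetical example: allocate an extra field only on the CPU,
    # even though the data handling's default target is the GPU
    dh_example = ps.create_data_handling(domain_size=(30, 30), default_target=ps.Target.GPU)
    cpu_only_field = dh_example.add_array('cpu_only', values_per_cell=1, cpu=True, gpu=False)
```
%% Cell type:markdown id: tags: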
Additionally, we need to compile a GPU version of the kernel.
%% Cell type:code id: tags:
``` python
if gpu is False:
    dh = ps.create_data_handling(domain_size=(30, 30), default_target=ps.Target.CPU)
else:
    dh = ps.create_data_handling(domain_size=(30, 30), default_target=ps.Target.GPU)
# fields are now created using the data handling
src_field = dh.add_array('src', values_per_cell=1)
dst_field = dh.add_array('dst', values_per_cell=1)
# kernel is created just like before
update_rule = [ps.Assignment(lhs=dst_field[0, 0],
                             rhs=(src_field[1, 0] + src_field[-1, 0] + src_field[0, 1] + src_field[0, -1]) / 4)]
config = ps.CreateKernelConfig(target=dh.default_target)
kernel_function = ps.create_kernel(update_rule, config=config).compile()