pystencils merge requests
https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests
2019-07-18T11:56:29+02:00

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/7
Add `pystencils.make_python_function` used for KernelFunction.compile
2019-07-18T11:56:29+02:00, Stephan Seitz

`KernelFunction.compile = None` is currently set by the
`create_kernel` function of each respective backend as partial function
of `<backend>.make_python_function`.
The code would be clearer with a unified `make_python_function`.
`KernelFunction.compile` can then be implemented as a call to this
function with the respective backend.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/22
Add AssignmentCollection.has_exclusive_writes
2019-08-07T11:43:33+02:00, Stephan Seitz

An assumption of pystencils is that output stencil writes never overlap.
This allows massive parallelization without race conditions or atomics.
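One plausible way to formalize this invariant is to require that every output field is written at a single offset, so distinct iteration points never touch the same cell. The sketch below uses a hypothetical `(field_name, offset)` tuple representation of writes; it is not the actual pystencils implementation, whose left-hand sides are `Field.Access` objects.

```python
def has_exclusive_writes(assignments):
    # assignments: list of ((field_name, offset), rhs) pairs, a hypothetical
    # stand-in for pystencils assignments.
    write_offsets = {}
    for (field, offset), _rhs in assignments:
        write_offsets.setdefault(field, set()).add(offset)
    # Exclusive if every output field is written at exactly one offset,
    # so neighbouring iteration points never write the same cell.
    return all(len(offsets) == 1 for offsets in write_offsets.values())

print(has_exclusive_writes([(('y', (0, 0)), 'a + b')]))                    # True
print(has_exclusive_writes([(('y', (0, 0)), 'a'), (('y', (1, 0)), 'b')]))  # False
```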
When I use my autodiff transformations I use this condition to check
whether the assumption still holds for the backward assignments.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/2
Add autodiff
2019-07-11T12:12:00+02:00, Stephan Seitz

Draft for minimal integration of automatic differentiation. The TensorFlow and Torch backends were removed (apart from AutoDiffOp.create_tensorflow()).
Only tests without LBM or tf/torch dependencies have been added. This implies that numeric gradient checking is also missing (it depends on either tf or torch). Should we really move the backends into separate modules?
Apart from adding auto-differentiation functionality I added two more changes. Would be happy to AutoDiffOp.create_tensorflow() after feedback.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/112
Add minimal CI test for old sympy
2019-12-17T18:51:26+01:00, Stephan Seitz

The minimal test cannot catch everything, but it's something.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/67
Add ConditionalFieldAccess (Field.Access after out-of-bounds check)
2019-10-01T15:36:36+02:00, Stephan Seitz

Adds a wrapper around a `Field.Access` so that the access is only performed if a certain condition is met.
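The idea can be sketched in plain Python with a hypothetical helper over nested lists (not the actual pystencils class): perform the read only after a bounds check, otherwise fall back to a neutral value.

```python
def conditional_read(array, i, j, fallback=0.0):
    """Read array[i][j] only if (i, j) is in bounds; otherwise return a
    fallback value. Illustrative sketch of the ConditionalFieldAccess idea."""
    if 0 <= i < len(array) and 0 <= j < len(array[0]):
        return array[i][j]
    return fallback

grid = [[1.0, 2.0], [3.0, 4.0]]
print(conditional_read(grid, 0, 1))   # in bounds -> 2.0
print(conditional_read(grid, -1, 0))  # out of bounds -> 0.0
```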
If I use this, I can safely perform calculations and adjoint calculations with `ghost_layers=0` and obtain the correct gradients without separate boundary handling.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/10
Add CustomSympyPrinter._print_Sum
2019-08-05T16:44:54+02:00, Stephan Seitz

This makes sympy.Sum printable as an immediately invoked lambda (attention: C++-only, works in CUDA).

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/26
Add pystencils-autodiff
2019-08-09T08:54:59+02:00, Stephan Seitz

This adds pystencils_autodiff (https://pypi.org/project/pystencils-autodiff/0.1.3/) to pystencils.
After installing the extension, you can access all its classes in the submodule `pystencils.autodiff`.
If it's not installed but you try to import it, you get an error with installation instructions.
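The "helpful ImportError" pattern described above can be sketched as follows; the helper name and hint text are hypothetical, not the actual pystencils code.

```python
import importlib


def import_extension(name, install_hint):
    """Import the module `name`, or fail with installation instructions.

    Sketch of the pattern: re-raise the ImportError with a message that
    tells the user how to install the missing extension.
    """
    try:
        return importlib.import_module(name)
    except ImportError as e:
        raise ImportError(f"{name} is not installed. {install_hint}") from e


# Example: a definitely-missing module raises with the hint attached.
try:
    import_extension('definitely_missing_module_xyz',
                     'Install it via: pip install pystencils-autodiff')
except ImportError as e:
    print(e)
```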
The internal code of pystencils_autodiff is still very ugly.
I hope I can clean it up in the next days/weeks.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/229
Add type conversion for SP types
2021-04-03T06:01:47+02:00, Markus Holzer

If assignments are already typed for double precision but the kernel is created for single precision, the assignments should be adapted.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/144
Add TypedMatrixSymbol (for usage of `MatrixSymbol` in kernels)
2020-02-21T15:16:29+01:00, Stephan Seitz

I don't know whether this is a good idea, but SymPy supports assigning MatrixSymbols:
```python
>>> import pystencils
>>> from sympy import MatrixSymbol
>>> A = MatrixSymbol('A', 3, 3)
>>> B = MatrixSymbol('B', 3, 3)
>>> pystencils.Assignment(A, B)
A := B
```
With this hack I can generate code like this:
```cpp
#define FUNC_PREFIX static
FUNC_PREFIX void kernel(float * RESTRICT _data_y, int64_t const _size_y_0, int64_t const _size_y_1, int64_t const _size_y_2,
                        int64_t const _stride_y_0, int64_t const _stride_y_1, int64_t const _stride_y_2,
                        std::function<Vector3<double>(int, int, int)> my_fun)
{
   for (int ctr_0 = 0; ctr_0 < _size_y_0; ctr_0 += 1)
   {
      float * RESTRICT _data_y_00 = _data_y + _stride_y_0*ctr_0;
      for (int ctr_1 = 0; ctr_1 < _size_y_1; ctr_1 += 1)
      {
         float * RESTRICT _data_y_00_10 = _stride_y_1*ctr_1 + _data_y_00;
         for (int ctr_2 = 0; ctr_2 < _size_y_2; ctr_2 += 1)
         {
            const Vector3<double> A = my_fun(ctr_0, ctr_1, ctr_2);
            _data_y_00_10[_stride_y_2*ctr_2] = A[0] + A[1] + A[2];
         }
      }
   }
}

#define FUNC_PREFIX static
template <class Functor_T>
FUNC_PREFIX void kernel(float * RESTRICT _data_y, int64_t const _size_y_0, int64_t const _size_y_1, int64_t const _size_y_2,
                        int64_t const _stride_y_0, int64_t const _stride_y_1, int64_t const _stride_y_2,
                        Functor_T my_fun)
{
   for (int ctr_0 = 0; ctr_0 < _size_y_0; ctr_0 += 1)
   {
      float * RESTRICT _data_y_00 = _data_y + _stride_y_0*ctr_0;
      for (int ctr_1 = 0; ctr_1 < _size_y_1; ctr_1 += 1)
      {
         float * RESTRICT _data_y_00_10 = _stride_y_1*ctr_1 + _data_y_00;
         for (int ctr_2 = 0; ctr_2 < _size_y_2; ctr_2 += 1)
         {
            const Vector3<double> A = my_fun(ctr_0, ctr_1, ctr_2);
            _data_y_00_10[_stride_y_2*ctr_2] = A[0] + A[1] + A[2];
         }
      }
   }
}
```
from this Python snippet:
```python
import pystencils
from pystencils.data_types import TypedMatrixSymbol, TypedSymbol, create_type

# DynamicFunction and FrameworkIntegrationPrinter are provided by the
# framework-integration code this MR builds on.
x, y = pystencils.fields('x, y: float32[3d]')
A = TypedMatrixSymbol('A', 3, 1, create_type('double'), 'Vector3<double>')
my_fun_call = DynamicFunction(
    TypedSymbol('my_fun', 'std::function<Vector3<double>(int, int, int)>'),
    A.dtype,
    *pystencils.x_vector(3))
assignments = pystencils.AssignmentCollection({
    A: my_fun_call,
    y.center: A[0] + A[1] + A[2]
})
ast = pystencils.create_kernel(assignments)
pystencils.show_code(ast, custom_backend=FrameworkIntegrationPrinter())
```

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/54
Always use codegen.rewriting.optimize
2019-09-23T13:38:39+02:00, Stephan Seitz

Pretty much !34 but with the changes to `create_kernel`. Can be closed if not wanted. Leaving it here for archiving purposes.
!34 has the workflow:
```python
assignments = optimize(assignments, optimizations)
ast = create_kernel(assignments)
```

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/8
Auto-format pystencils/rng.py (trailing whitespace)
2019-07-18T10:28:27+02:00, Stephan Seitz

My editor feels better if that whitespace is not there.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/17
Fix #10: Add jinja2 to pystencils's dependencies
2019-08-06T08:05:38+02:00, Stephan Seitz

An alternative would be to remove jinja2 (see other PR).
However, I think the dependency on jinja2 is not too heavy.
This could make some implementations more elegant.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/305
Fix #62
2022-10-21T09:24:20+02:00, Markus Holzer

Fixes problems around #62.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/6
Fix deprecation warning: `collections.abc` instead of `abc`
2019-07-10T17:04:10+02:00, Stephan Seitz

DeprecationWarning: Using or importing the ABCs from 'collections'
instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/150
Fix import: sympy.numbers -> sympy.core.numbers
2020-03-24T00:57:30+01:00, Stephan Seitz

Apparently `sympy` no longer exports `sympy.numbers` directly.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/100
Fix Opencl and LLVM GPU tests
2019-12-05T11:02:49+01:00, Stephan Seitz

Fix tests for LLVM GPU and OpenCL:
- !96 made it impossible to print functions without names (only important for LLVM GPU test)
- !87 made it impossible to run OpenCL kernels on CUDA's OpenCL: `int(...)` is not a valid cast for it
- SymPy moved `sympy.boolalg` to `sympy.logic.boolalg`

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/237
Fix Sympy pipeline
2021-04-26T16:46:20+02:00, Markus Holzer

Fix #35.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/270
Fixed kernel_decorator with config parameter
2021-11-03T22:23:36+01:00, Jan Hönig

The current kernel decorator does not work properly with the introduced `CreateKernelConfig`.
This MR fixes that.

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/11
Fixup for DestructuringBindingsForFieldClass
2019-07-18T10:28:09+02:00, Stephan Seitz

- rename header: Field.h is not a unique name in the waLBerla context
- add PyStencilsField.h
- bindings were lacking data type

https://i10git.cs.fau.de/pycodegen/pystencils/-/merge_requests/13
implemented derivation of gradient weights via rotation
2019-08-03T14:01:23+02:00, Markus Holzer

derive gradient weights of other directions with
the already calculated weights of one direction via rotation, and apply them to a field.
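The rotation step can be sketched with a 2D stencil stored as an offset-to-weight mapping; this is an illustrative sketch of the idea, not the actual pystencils derivation code.

```python
def rotate_stencil_90(weights):
    """Rotate a 2D stencil by 90 degrees counter-clockwise:
    offset (dx, dy) maps to (-dy, dx)."""
    return {(-dy, dx): w for (dx, dy), w in weights.items()}

# Central-difference weights for d/dx ...
ddx = {(1, 0): 0.5, (-1, 0): -0.5}
# ... rotated once give the central-difference weights for d/dy.
ddy = rotate_stencil_90(ddx)
print(sorted(ddy.items()))  # [((0, -1), -0.5), ((0, 1), 0.5)]
```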