### Syntax highlighting for code blocks

parent 0cbc9eee
```diff
@@ -22,7 +22,7 @@ For interactive development, the next section can be written in this article.
-\code
+\code{.py}
 u, u_tmp = ps.fields("u, u_tmp: [2D]", layout='fzyx')
 kappa = sp.Symbol("kappa")
 dx = sp.Symbol("dx")
```

```diff
@@ -31,7 +31,7 @@ dt = sp.Symbol("dt")
 With the pystencils building blocks, we can directly define the time and spatial derivatives of the PDE.
-\code
+\code{.py}
 heat_pde = ps.fd.transient(u) - kappa * ( ps.fd.diff( u, 0, 0 ) + ps.fd.diff( u, 1, 1 ) )
 \endcode
```

```diff
@@ -42,7 +42,7 @@ Printing heat_pde inside a Jupyter notebook shows the equation as:
 Next, the PDE will be discretized. We use the Discretization2ndOrder class to apply a finite-difference discretization to the spatial components, and explicit Euler discretization to the time step.
-\code
+\code{.py}
 discretize = ps.fd.Discretization2ndOrder(dx=dx, dt=dt)
 heat_pde_discretized = discretize(heat_pde)
 \endcode
```

```diff
@@ -55,7 +55,7 @@ Printing heat_pde_discretized reveals
 This equation can be simplified by combining the two fractions on the right-hand side. Furthermore, we would like to pre-calculate the division outside the loop of the compute kernel. To achieve this, we first apply sympy's simplification functionality and then replace the division by introducing a subexpression.
-\code
+\code{.py}
 heat_pde_discretized = heat_pde_discretized.args[1] + heat_pde_discretized.args[0].simplify()
 @ps.kernel
```

```diff
@@ -85,7 +85,7 @@ We will now use the waLBerla build system to generate a sweep from this symbolic
 We create a python file called *HeatEquationKernel.py* in our application folder. This file contains the python code we have developed above. In addition to sympy and pystencils, we add the import directive from pystencils_walberla import CodeGeneration, generate_sweep. At the end of the file, we add these two lines:
-\code
+\code{.py}
 with CodeGeneration() as ctx:
     generate_sweep(ctx, 'HeatEquationKernel', ac)
 \endcode
```
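The hunks above change only the Doxygen code markers, but the tutorial they touch walks through discretizing the 2D heat equation with explicit Euler in time and second-order central differences in space. As a plain-Python sketch of the update that the generated sweep ultimately performs (no pystencils required; the grid size and the values of kappa, dx and dt are illustrative, and the two-grid u/u_tmp scheme mirrors the two fields in the tutorial):

```python
# Sketch of one explicit Euler step of du/dt = kappa * (d2u/dx2 + d2u/dy2),
# reading from grid u and writing to a fresh grid, like the u/u_tmp field pair.
def heat_step(u, kappa, dx, dt):
    """Return a new grid after one explicit Euler step (border values kept)."""
    ny, nx = len(u), len(u[0])
    u_tmp = [row[:] for row in u]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            # second-order central-difference Laplacian (dx == dy assumed)
            lap = (u[j][i - 1] - 2 * u[j][i] + u[j][i + 1]
                   + u[j - 1][i] - 2 * u[j][i] + u[j + 1][i]) / dx ** 2
            u_tmp[j][i] = u[j][i] + dt * kappa * lap
    return u_tmp

u = [[0.0] * 5 for _ in range(5)]
u[2][2] = 1.0                      # a single hot cell in the centre
u = heat_step(u, kappa=1.0, dx=1.0, dt=0.1)
```

With these values the hot cell cools from 1.0 to 0.6 and each of its four neighbours warms to 0.1, so the total heat is conserved.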
```diff
@@ -94,7 +94,7 @@ The CodeGeneration context and the function generate_sweep are provided by w
 The code generation script will later be called by the build system while compiling the application. The complete script looks like this:
-\code
+\code{.py}
 import sympy as sp
 import pystencils as ps
 from pystencils_walberla import CodeGeneration, generate_sweep
```

```diff
@@ -124,7 +124,7 @@ with CodeGeneration() as ctx:
 \endcode
 As a next step, we register the script with the CMake build system. Outside of our application folder, open *CMakeLists.txt* and add these lines (replace codegen with the name of your folder):
-\code
+\code{.unparsed}
 if( WALBERLA_BUILD_WITH_CODEGEN )
     add_subdirectory(codegen)
 endif()
```

```diff
@@ -132,7 +132,7 @@ endif()
 The if block makes sure our application is only built if the CMake flag WALBERLA_BUILD_WITH_CODEGEN is set. In the application folder, create another *CMakeLists.txt* file. For registering a code generation target, the build system provides the walberla_generate_target_from_python macro. Apart from the target name, we need to pass it the name of our python script and the names of the generated C++ header and source files. Their names need to match the class name passed to generate_sweep in the script. Add the following lines to your *CMakeLists.txt*.
-\code
+\code{.unparsed}
 if( WALBERLA_BUILD_WITH_CODEGEN )
     walberla_generate_target_from_python( NAME CodegenHeatEquationKernel
         FILE HeatEquationKernel.py
```

```diff
@@ -148,7 +148,7 @@ When running make again at a later time, the code will only be regenerated if
 Finally, we can use the generated sweep in an actual waLBerla application. In the application folder, create the source file *01_CodegenHeatEquation.cpp*. Open *CMakeLists.txt* and register the source file as an executable using the macro walberla_add_executable. Add all required waLBerla modules as dependencies, as well as the generated target.
-\code
+\code{.unparsed}
 walberla_add_executable ( NAME 01_CodegenHeatEquation
     FILES 01_CodegenHeatEquation.cpp
     DEPENDS blockforest core field stencil timeloop vtk pde CodegenHeatEquationKernel )
```
```diff
@@ -22,7 +22,7 @@ In the code generation python script, we first require a few imports from lbmpy
 From lbmpy.creationfunctions we require the functions to create collision and update rules. For the actual code generation, generate_lattice_model from lbmpy_walberla is required. Since we will define symbols, SymPy is also needed.
-\code
+\code{.py}
 import sympy as sp
 from lbmpy.creationfunctions import create_lb_collision_rule, create_lb_update_rule
```

```diff
@@ -32,7 +32,7 @@ from lbmpy_walberla import generate_lattice_model
 \endcode
 First, we define a few general parameters. These include the stencil (D2Q9) and the memory layout (fzyx, see \ref tutorial_codegen01 ). We define a SymPy symbol for the relaxation rate \f$\omega \f$, so that we can later set it to a specific value from the waLBerla code. A dictionary with optimization parameters is also set up. Here, we enable global common subexpression elimination (cse_global) and set the PDF field's memory layout.
-\code
+\code{.py}
 stencil = 'D2Q9'
 omega = sp.Symbol('omega')
 layout = 'fzyx'
```

```diff
@@ -45,7 +45,7 @@ Next, we set the parameters for the SRT method in a dictionary and create both t
 The update rule is still needed in the code generation process, namely for the pack info generation. The collision step only acts within one cell, so the collision rule's equations contain no neighbour accesses. Calling create_lb_update_rule inserts the same two-fields pull scheme as generate_lattice_model, and the resulting update rule contains exactly those neighbour accesses which generate_pack_info_from_kernel requires to build the optimized pack info.
-\code
+\code{.py}
 srt_params = {'stencil': stencil, 'method': 'srt', 'relaxation_rate': omega}
```

```diff
@@ -56,7 +56,7 @@ srt_update_rule = create_lb_update_rule(collision_rule=srt_collision_rule, optim
 Finally, we create the code generation context and call the respective functions for generating the lattice model and the pack info. Both require the context and a class name as parameters. To generate_lattice_model, we also pass the collision rule and the field layout; generate_pack_info_from_kernel receives the update rule.
-\code
+\code{.py}
 with CodeGeneration() as ctx:
     generate_lattice_model(ctx, "SRTLatticeModel", srt_collision_rule, field_layout=layout)
     generate_pack_info_from_kernel(ctx, "SRTPackInfo", srt_update_rule)
```

```diff
@@ -68,7 +68,7 @@ Furthermore, if we optimise waLBerla for the machine it is compiled on with
 As a final touch, we still need to set up the CMake build target for the code generation script. This time, two distinct classes (the lattice model and the pack info) will be generated. Therefore, we need to list the header and source file names for both classes separately.
-\code
+\code{.unparsed}
 walberla_generate_target_from_python( NAME 02_LBMLatticeModelGenerationPython
     FILE 02_LBMLatticeModelGeneration.py
     OUT_FILES SRTLatticeModel.cpp SRTLatticeModel.h
```
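The tutorial in these hunks generates an SRT (single-relaxation-time, i.e. BGK) lattice model. The per-cell collision it describes can be sketched in plain Python without lbmpy; the D2Q9 weights, velocities and second-order equilibrium below are the standard ones, but the function names and test values are illustrative, not the generated API:

```python
# Sketch of the single-cell SRT/BGK collision the generated lattice model
# applies: relax the 9 PDFs of a D2Q9 cell towards equilibrium with rate omega.
W = [4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36]   # lattice weights
C = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
     (1, 1), (-1, 1), (1, -1), (-1, -1)]                 # lattice velocities

def equilibrium(rho, ux, uy):
    """Second-order equilibrium PDFs for one cell."""
    usq = ux * ux + uy * uy
    return [w * rho * (1 + 3 * (cx * ux + cy * uy)
                       + 4.5 * (cx * ux + cy * uy) ** 2 - 1.5 * usq)
            for w, (cx, cy) in zip(W, C)]

def srt_collide(f, omega):
    """One BGK collision: f_i <- f_i + omega * (f_eq_i - f_i)."""
    rho = sum(f)
    ux = sum(fi * cx for fi, (cx, cy) in zip(f, C)) / rho
    uy = sum(fi * cy for fi, (cx, cy) in zip(f, C)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi + omega * (fe - fi) for fi, fe in zip(f, feq)]
```

Because the collision only reads and writes the local cell, it needs no neighbour accesses, which is exactly why the separate update rule (with its pull-scheme neighbour reads) is what the pack info generation works from. Note that mass and momentum are conserved for any omega, and omega = 1 relaxes the PDFs fully to equilibrium.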
```diff
@@ -19,7 +19,7 @@ For the stream-pull-collide type kernel, we need two PDF fields which we set up
 For VTK output and the initial velocity setup, we define a velocity vector field as an output field for the LB method.
-\code
+\code{.py}
 stencil = 'D2Q9'
 omega = sp.Symbol('omega')
 layout = 'fzyx'
```

```diff
@@ -40,7 +40,7 @@ optimization = {'cse_global': True,
 We set up the cumulant-based MRT method with relaxation rates as described above. We use generate_lb_update_rule from lbmpy to derive the set of equations describing the collision operator together with the *pull* streaming pattern. These equations define the entire LBM sweep.
-\code
+\code{.py}
 lbm_params = {'stencil': stencil, 'method': 'mrt_raw', 'relaxation_rates': [0, 0, 0, omega, omega, omega, 1, 1, 1],
```

```diff
@@ -56,7 +56,7 @@ lbm_method = lbm_update_rule.method
 In \ref tutorial_codegen02, we were able to use the framework built around the waLBerla lattice model template API for setting up the shear flow's initial velocity profile. Since we are not using a lattice model class this time, this API is not available to us. With lbmpy, though, we can generate a kernel which takes either scalar values or fields for the initial density and velocity and sets the initial PDF values to the corresponding equilibrium. The function macroscopic_values_setter from lbmpy.macroscopic_value_kernels returns a set of assignments for this initialization procedure. It takes the LB method definition as an argument, as well as either symbols or pystencils field accesses for the initial density rho and the initial velocity. Lastly, it takes the PDF field's centre vector as the destination for the PDF values. We define a separate symbol for the density and use the velocity field defined above.
-\code
+\code{.py}
 initial_rho = sp.Symbol('rho_0')
 pdfs_setter = macroscopic_values_setter(lbm_method,
```
```diff
@@ -74,7 +74,7 @@ Several functions from pystencils_walberla and lbmpy_walberla are called to
 - The PDF initialization kernel is generated from the pdfs_setter assignment collection using generate_sweep.
 - Using generate_boundary, we generate an optimised implementation of a NoSlip boundary handler for the domain's walls.
-\code
+\code{.py}
 with CodeGeneration() as ctx:
     if ctx.cuda:
         target = 'gpu'
```
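The kernel that macroscopic_values_setter describes relies on a basic property of the lattice Boltzmann equilibrium: if the PDFs of a cell are set to the equilibrium for a prescribed density and velocity, the zeroth and first moments of those PDFs reproduce exactly that density and velocity. A plain-Python sketch of this property for D2Q9 (function name and the values rho_0 = 1.2, u = (0.05, -0.02) are illustrative):

```python
# Sketch: initialize one cell's PDFs to equilibrium for given rho_0 and u,
# then recover rho and u from the PDF moments.
W = [4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36]   # lattice weights
C = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
     (1, 1), (-1, 1), (1, -1), (-1, -1)]                 # lattice velocities

def init_to_equilibrium(rho_0, ux, uy):
    """Equilibrium PDFs whose moments reproduce rho_0 and (ux, uy) exactly."""
    usq = ux * ux + uy * uy
    return [w * rho_0 * (1 + 3 * (cx * ux + cy * uy)
                         + 4.5 * (cx * ux + cy * uy) ** 2 - 1.5 * usq)
            for w, (cx, cy) in zip(W, C)]

pdfs = init_to_equilibrium(1.2, 0.05, -0.02)
rho = sum(pdfs)                                           # zeroth moment
ux = sum(f * cx for f, (cx, cy) in zip(pdfs, C)) / rho    # first moments
uy = sum(f * cy for f, (cx, cy) in zip(pdfs, C)) / rho
```

The moment identities hold exactly (up to floating-point rounding) because the lattice weights satisfy the usual isotropy conditions, which is what makes this a valid initialization procedure.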
```diff
@@ -45,7 +45,7 @@ using math::uintMSBPosition;
 /*!
 * \brief Returns a string that stores the bitwise representation of 'value' (must be an unsigned integer)
 *
-* \code
+* \code{.unparsed}
 * 8bit  display: 0101_1101
 * 16bit display: 1110_0101.1100_0001
 * 32bit display: 1000_0011.0110_1101.0000_0001.1010_0110
```
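The display format documented in this hunk (nibbles separated by `_`, bytes separated by `.`) is easy to reproduce. A plain-Python sketch of the same formatting rule, to make the examples above concrete (the function name and signature are illustrative, not waLBerla's C++ API):

```python
# Sketch: format an unsigned integer's bits with '_' between nibbles
# and '.' between bytes, as in the documented 8/16/32-bit examples.
def bit_display(value, bits):
    s = format(value, '0{}b'.format(bits))                     # zero-padded bits
    nibbles = [s[i:i + 4] for i in range(0, bits, 4)]          # groups of 4 bits
    byte_groups = ['_'.join(nibbles[i:i + 2])                  # 2 nibbles = 1 byte
                   for i in range(0, len(nibbles), 2)]
    return '.'.join(byte_groups)
```

For example, `bit_display(0b01011101, 8)` yields the documented `0101_1101`.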
```diff
@@ -31,7 +31,7 @@ namespace grid_generator {
 /// Helper class to generate points in a simple cubic structure within a certain domain.
 /// The lattice is fixed by a point of reference (x).
-/// \code
+/// \code{.unparsed}
 /// . . . . . . . .
 ///        +-----+
 /// . . . .|. . .|.
```
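The helper class documented in this hunk generates simple cubic lattice points anchored at a reference point and clipped to a domain. A plain-Python sketch of that idea, assuming uniform spacing `a` and a half-open axis-aligned box (the function name, parameters and values are illustrative, not waLBerla's API):

```python
import math

# Sketch: yield all points x0 + a * (i, j, k) of a simple cubic lattice
# (spacing a, anchored at reference point x0) inside the box [lo, hi).
def simple_cubic_points(x0, a, lo, hi):
    def index_range(anchor, low, high):
        i0 = math.ceil((low - anchor) / a)    # first lattice index inside
        i1 = math.ceil((high - anchor) / a)   # exclusive upper index
        return range(i0, i1)
    for i in index_range(x0[0], lo[0], hi[0]):
        for j in index_range(x0[1], lo[1], hi[1]):
            for k in index_range(x0[2], lo[2], hi[2]):
                yield (x0[0] + i * a, x0[1] + j * a, x0[2] + k * a)
```

Shifting the reference point shifts the whole lattice, matching the docstring's note that the lattice is fixed by the point of reference.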