hyteg merge requests
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests
2023-09-21T09:42:00+02:00
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/653
Draft: Particles
2023-09-21T09:42:00+02:00
Nils Kohl
This MR implements unresolved particles via the MESA-PD particle code generation framework.
The objective is the implementation of **fluid (<)-> particle** and later **particle <-> particle** interactions. Since the particles are assumed to be smaller than the elements, the module is referred to as _unresolved particles_. Thus fluid-particle interactions are not meant to be fully resolved but are modeled instead. At this point, only the one-way coupling (fluid acts on particles, not vice versa) is considered.
This is sufficient to, e.g., implement tracer particles.
Some notes:
* The implementation is already fully parallelized.
* There is a new tutorial (no. 12) including some images.
* The particle properties are subject to change and probably need to be discussed in the future. As of now, it is possible to flexibly add custom real and integer scalars to particles if needed.
* Direct application of velocity and force fields is possible. For tracer particles, the former is sufficient and can be interpreted as an extremely simplified form of Faxén's law (https://en.wikipedia.org/wiki/Fax%C3%A9n%27s_law). Force fields are also already supported, and naturally involve the particle mass for the computation of the resulting translational velocity.
* Theoretically, all features that are available in MESA-PD can be used. This includes, for instance, other integrators. However, using them involves generating the corresponding C++ code via MESA-PD through the python script in the `data/codegen/unresolved_particles/` directory.
* I recall that there was a temporary(?) tracer particle implementation in the TerraNeo module. I cannot find it right now (it could be on a branch), and I might be wrong. If there is one, we should consider replacing it (and adding required features to this new version, if possible).
* Blended domains are not yet supported.
Nils Kohl
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/649
Draft: To GeometryMap add a method to compute second derivatives
2023-09-14T16:10:30+02:00
Marcus Mohr
The main purpose of this MR is to add to our geometry maps the functionality to compute second derivatives, as is needed, e.g., for SUPG stabilisation.
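For reference, the quantities to be provided are the component Hessians of a geometry map $\Phi$ (standard notation, independent of the concrete interface):

$$ \left( H_{\Phi_i} \right)_{jk} = \frac{\partial^2 \Phi_i}{\partial x_j \partial x_k}, \qquad i, j, k = 1, \dots, d $$

These second derivatives enter, e.g., the SUPG terms through the chain rule when second derivatives of the mapped shape functions are evaluated.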
Andreas Burkhart
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/642
First version of parallel PrimitiveStorage setup from file.
2023-09-05T12:13:51+02:00
Nils Kohl
This MR realizes a two-step process. First, in a serial (offline) run, the SetupPrimitiveStorage is built and passed to a load balancer, followed by serialization to a file. During the actual, parallel run, the file is read via MPI-IO such that each process only has to view the portion that is necessary for it. From that binary string, the entire PrimitiveStorage is created in parallel.
There are numerous possible optimizations. Most notably, it is not clear to me right now whether the current approach of simply writing all neighbor primitives to file is preferable, or whether such data should instead be communicated after reading only the locally allocated primitives. Maybe it does not make a difference at all. Also, this implementation has not been tested with large setups yet.
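The per-process read can be sketched as an offset-table scheme (a simplified illustration with plain `std::fstream`; in the actual implementation the read would be done via MPI-IO at the respective offset, and the file layout and names here are invented for the sketch): the file starts with one (offset, size) pair per process, and each process then reads only its own byte range.

```c++
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write one blob per process, preceded by a table of (offset, size) pairs.
void writePartitionedFile( const std::string& path, const std::vector< std::string >& blobs )
{
   std::ofstream out( path, std::ios::binary );
   const uint64_t n          = blobs.size();
   uint64_t       dataOffset = sizeof( uint64_t ) * ( 1 + 2 * n );
   out.write( reinterpret_cast< const char* >( &n ), sizeof( n ) );
   for ( const auto& b : blobs )
   {
      const uint64_t size = b.size();
      out.write( reinterpret_cast< const char* >( &dataOffset ), sizeof( dataOffset ) );
      out.write( reinterpret_cast< const char* >( &size ), sizeof( size ) );
      dataOffset += size;
   }
   for ( const auto& b : blobs )
      out.write( b.data(), static_cast< std::streamsize >( b.size() ) );
}

// Each process reads only its own byte range - nothing else is touched.
std::string readPortion( const std::string& path, uint64_t rank )
{
   std::ifstream in( path, std::ios::binary );
   uint64_t      offset = 0, size = 0;
   in.seekg( static_cast< std::streamoff >( sizeof( uint64_t ) * ( 1 + 2 * rank ) ) );
   in.read( reinterpret_cast< char* >( &offset ), sizeof( offset ) );
   in.read( reinterpret_cast< char* >( &size ), sizeof( size ) );
   std::string blob( size, '\0' );
   in.seekg( static_cast< std::streamoff >( offset ) );
   in.read( &blob[0], static_cast< std::streamsize >( size ) );
   return blob;
}
```

Whether the neighbor-primitive data should live in each rank's blob (as currently done) or be communicated afterwards is exactly the open question mentioned above.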
Nils Kohl
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/632
Draft: Convection Model Operators, SUPG Stabilisation Operators, Elementwise DoFValueOperators (+Tests), Second derivatives for the Blending Maps, PETScHDF5 Function Saving, Projection subclasses for Smoothers and Restriction/Prolongation
2023-09-27T18:02:25+02:00
Andreas Burkhart
**Preface: Our pipeline seems to be broken right now, so we should probably wait until it works again.**
This merge request contains several new operators and features aimed at the TerraNeo application.
A few comments on the content of the various diffs:
**composites/P2P1TaylorHoodCompStokesOperator.hpp:** Implements a P2-P1 block operator for the compressible Stokes equations (both ALA and TALA are available) with potentially temperature-dependent viscosity. It can also generate an appropriate right-hand side. You can also activate or deactivate the frozen velocity approach (i.e. whether the compressible term in the mass conservation equation is made explicit).
LHS velocity:
$$2 \int_{\Omega} \eta(x,T) \sigma(u) : \sigma(v) \text{ } dx - \frac{2}{\text{dim}} \int_{\Omega}{\eta(x,T) (\nabla \cdot u) \cdot (\nabla \cdot v)} \text{ } dx - \int_{\Omega} \text{div}(v)p_d \text{ } dx - \int_\Omega \frac{\text{Di}}{\Gamma_0} \rho(x) (K_T)^{-1} p_d g \cdot v \text{ } dx$$
RHS velocity:
$$-\int_\Omega \frac{\text{Ra}}{\text{Pe}} \rho(x) \alpha T_d g \cdot v \text{ } dx$$
LHS pressure (here without frozen velocity):
$$ -\int_\Omega \left (\frac{\nabla \rho(x)}{\rho(x)} \cdot u \right ) q \text{ } dx - \int_\Omega (\nabla \cdot u) q \text{ } dx$$
RHS pressure: 0
**src/p2functionspace/P2CompTransportOperator.hpp:** Implements a P2 operator for the time-(in)dependent energy conservation equation with optional SUPG stabilisation. Various time stepping schemes (implicit Euler, BDF2, Crank-Nicolson) are available (some additional function calls from outside might be required; e.g., the operator provides a method to save the previous time step's right-hand side for Crank-Nicolson). It can also be used for the diffusion solve step when using the MMOC operator splitting approach, and it can generate an appropriate right-hand side.
LHS temperature (here with the adiabatic heating on the LHS, but it can also be made explicit):
$$ \int_\Omega \frac{\partial T}{\partial t} w \text{ } dx + \int_\Omega (u \cdot \nabla T) w \text{ } dx + \int_\Omega \frac{k }{\text{Pe} \text{ } C_p}\nabla T \cdot \nabla \left ( \frac{w}{\rho(x)} \right ) \text{ } dx - \int_\Omega \text{Di} \text{ } \frac{\alpha}{C_p} T (u \cdot g )w \text{ } dx + \text{SUPG Terms (if used)}$$
RHS temperature:
$$ - \int_\Omega \frac{1}{C_p} H w \text{ } dx -\int_\Omega \frac{\text{Pe}}{\rho(x) C_p}\frac{\text{Di}}{\text{Ra}} \left (\tau\left (u,\eta\left (x,T \right )\right ) : {\varepsilon(u)}\right ) w \text{ } dx + \text{SUPG Terms (if used)}$$
The LHS and RHS above consequently also depend on the chosen time discretisation.
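As a reminder, the time stepping schemes mentioned above take the following standard textbook forms (written for an abstract spatial operator $A$ and source $f$; not necessarily the exact implemented scaling):

$$ \text{implicit Euler:} \quad \frac{T^{n+1} - T^n}{\Delta t} + A T^{n+1} = f^{n+1} $$

$$ \text{Crank-Nicolson:} \quad \frac{T^{n+1} - T^n}{\Delta t} + \frac{1}{2} \left( A T^{n+1} + A T^n \right) = \frac{1}{2} \left( f^{n+1} + f^n \right) $$

$$ \text{BDF2:} \quad \frac{3 T^{n+1} - 4 T^n + T^{n-1}}{2 \Delta t} + A T^{n+1} = f^{n+1} $$

The Crank-Nicolson form also shows why the operator needs a method to store the previous time step's right-hand side.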
**src/hyteg/p2functionspace/P2FullViscousTDependentOperator.hpp, src/hyteg/p2functionspace/P2GradRhoRho_P2_Operator.hpp, src/hyteg/p2functionspace/P2ScaledMassDiffusionOperator.hpp, src/hyteg/p2functionspace/P2VariableBlendingKMassScalarToVectorOperator.hpp, src/hyteg/gridtransferoperators/P2toP1GradRhoRho_P2_Operator.hpp, src/hyteg/gridtransferoperators/P1toP2VariableBlendingKMassScalarToVectorOperator.hpp:** Various operators that conveniently combine the usage of multiple other operators (e.g. if we apply one on every component).
**src/elementwiseoperators:** Added all combinations of P1/P2 ElementwiseDoFValueOperator classes. These operators can be inherited from and provide a way (via the `DoFValues_` vector) to access the DoFs of another FEM function during the evaluation of a form. This allows for the creation of operators like an advection operator that takes a velocity field as an additional input.
Added a `setConstantScaling` function to all elementwise operators. This can be used to scale your operators by a constant on the fly (without having to create, for example, a linear combination form).
Also included are some bug fixes and the ability to use a custom form for P2ToP1 and P1ToP2 elementwise operators.
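The constant-scaling idea can be illustrated with a toy operator (made-up names, not the actual HyTeG interface): the factor is applied during `apply()`, so no extra linear-combination form object has to be created.

```c++
#include <cstddef>
#include <utility>
#include <vector>

// Toy stand-in for an elementwise operator: apply() computes dst = c * (A * src),
// where the constant c can be changed on the fly via setConstantScaling().
class ScaledOperator
{
 public:
   explicit ScaledOperator( std::vector< std::vector< double > > matrix )
   : matrix_( std::move( matrix ) )
   {}

   void setConstantScaling( double c ) { scaling_ = c; }

   void apply( const std::vector< double >& src, std::vector< double >& dst ) const
   {
      for ( std::size_t i = 0; i < matrix_.size(); ++i )
      {
         double sum = 0.0;
         for ( std::size_t j = 0; j < src.size(); ++j )
            sum += matrix_[i][j] * src[j];
         dst[i] = scaling_ * sum;
      }
   }

 private:
   std::vector< std::vector< double > > matrix_;
   double                               scaling_ = 1.0;
};
```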
**src/elementwiseoperators/generated:** Folder containing all of the generated ElementwiseDoFValueOperators. In particular, it adds operators for adiabatic heating, advection, diffusion (SUPG only), temperature-dependent Stokes, the grad(rho)/rho * u compressible term in the mass conservation equation, div_k_grad scaled by rho^-1, mass (SUPG only), shear heating, and the respective SUPG stabilisation terms.
**src/p1functionspace/VertexDoFFunction, src/edgedofspace/EdgeDoFFunction, src/functions/DoFValueFunctions:** Added a DoFValueFunction class that both VertexDoFFunction and EdgeDoFFunction inherit from. No previous functionality of VertexDoFFunction and EdgeDoFFunction was changed, I just implemented the additional functions defined in the DoFValueFunction base class. The defined functions are required for the DoFValueOperators to work.
**src/hyteg/composites/VelocityOperator_T_Wrapper, src/hyteg/solvers/SubstituteSolver.hpp:** Some convenient wrappers, allowing you to wrap an operator with a different VelocityOperator_T type and to use a solver for a different operator type (e.g. if you want to solve Laplace as a preconditioner, but the preconditioner is expected to be a solver for a different operator).
**src/petsc/PETScHDF5FunctionSave.hpp:** Functions that allow you to save functions and parameters by creating a PETSc vector and writing it in parallel to a binary file via HDF5. This has some limitations, since you can only reload your functions in the same configuration as you saved them.
**src/solvers/preconditioners/stokes:** Two preconditioner classes used for benchmarks and the convection model.
**src/solvers/solvertemplates/StokesSolverTemplates.hpp:** Pulled over some changes to the StokesSolverTemplates solver used in the other TerraNeo branches. No functionality is changed, but you can now distinguish between absolute and relative tolerance.
**src/solvers/ChebyshevSmoother.hpp:** Added a subclass of the Chebyshev Smoother that contains a normal projection at an appropriate spot for the freeslip boundary conditions.
**src/solvers/GMRESSolver.hpp:** Several bug fixes and additional functions from Andreas Wagner for the GMRES solver, which we found during our testing of GMRES.
**src/solvers/MinResSolver.hpp:** Pulled over some changes to the MinRes solver used in the other TerraNeo branches. No functionality is changed, but you can now define an absolute tolerance.
**src/solvers/UzawaSmoother.hpp, /src/hyteg/composites/P2P1UzawaDampingFactorEstimationOperator.hpp:** Added functions for the estimation of the Uzawa omega parameter also in the case of the P2CompStokesOperator, and added a subclass of the UzawaSmoother that contains a normal projection at an appropriate spot for the freeslip boundary conditions.
**src/hyteg/gridtransferoperators/P2P1StokesToP2P1StokesProlongation.hpp, src/hyteg/gridtransferoperators/P2P1StokesToP2P1StokesRestriction.hpp:** Restriction and prolongation operator subclasses that contain a normal projection at an appropriate spot for the freeslip boundary conditions.
**data/meshes:** Just some additional meshes which are offset from the origin (used for testing with the sphericalCoordinateMap blending map).
**src/forms:** Just some additional generated blending forms.
**src/geometry:** Added second derivative functions (needed for SUPG stabilisation) for all blending maps, plus a new spherical coordinate map.
**tests/hyteg/operators:** Added tests checking whether the various DoFValueOperators compute the correct integrals and whether the diagonal calculation for the temperature-dependent Stokes operator works.
Hopefully I haven't forgotten anything and these are all the changes.
Andreas Burkhart
https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/601
Draft: Access to individual DoFs
2023-05-24T15:56:56+02:00
Marcus Mohr
# Idea
The idea of this suggested extension is to allow read/write access to individual degrees of freedom of an FE function from the outside. IMHO this could provide an easy and practical way for users to implement functionality that is currently not (directly) representable by the available function class methods. We observe that in such cases either existing methods, such as `interpolate()`, are creatively (mis)used to accomplish the respective goal, or needlessly expensive workarounds, such as `velocityMaxMagnitude()` from `CFDHelpers.cpp`, have to be implemented.
# DoFAccessors
In the current proof-of-concept implementation access to a degree of freedom is accomplished via two templated free-functions
- `readDoFValue()`
- `writeDoFValue()`
The rationale for not *just* providing a function such as `accessDoF()`, which returns a reference to the data item, is to be able to implement the two functions also for our `CSFVectorFunction`s, as in that case no underlying DoF vector exists that could be referenced.
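The accessor pair can be sketched with templated free functions over a toy function class (everything here is made up for illustration, not the proposed HyTeG signatures); the point is that a read/write pair works even when the function type owns no contiguous DoF vector that a reference could point into:

```c++
#include <cstddef>
#include <vector>

// Toy FE function with a flat DoF vector.
struct ToyFunction
{
   std::vector< double > dofs_;
};

// Toy identifier: just an index into the data array.
struct ToyDoFIdentifier
{
   std::size_t arrayIndex_;
};

// Free accessor functions: read/write instead of returning a reference, so
// the same interface could also be offered by composite function types that
// have no single underlying DoF vector.
template < typename FuncT >
double readDoFValue( const FuncT& f, const ToyDoFIdentifier& id )
{
   return f.dofs_[id.arrayIndex_];
}

template < typename FuncT >
void writeDoFValue( FuncT& f, const ToyDoFIdentifier& id, double value )
{
   f.dofs_[id.arrayIndex_] = value;
}
```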
# DoFIdentifiers
A DoF is identified by a corresponding *DoFIdentifier* object. A `P1DoFIdentifier`, e.g., is a class storing the following three pieces of information: the level to work on, the ID of the associated primitive, and the integer index into the function's data array.
```c++
class P1DoFIdentifier : public BaseDoFIdentifier
{
 public:
   P1DoFIdentifier()  = default;
   ~P1DoFIdentifier() = default;

   PrimitiveID primitiveID_; // ID of the primitive the DoF belongs to
   uint_t      level_;       // refinement level to work on
   uint_t      arrayIndex_;  // index into the function's data array

   bool operator==( const P1DoFIdentifier& other ) const
   {
      return ( primitiveID_ == other.primitiveID_ ) && ( arrayIndex_ == other.arrayIndex_ ) && ( level_ == other.level_ );
   }
};
```
# DoFIterators
To allow the user to work with the accessors, the idea is to have **iterators** which can be used to "loop" over all degrees of freedom of a specific function class. Note that this is local, i.e. the iterator only treats DoFs/primitives owned by the corresponding MPI process. Currently the draft implements two iterators:
- `P1DoFIterator`
- `EdgeDoFIterator`
In order to demonstrate the validity and usefulness of the concept there are two tests:
- `DoFIterator+AccessTest.cpp`: demonstrates the basic use of the concept and checks that we can iterate over all DoFs (externally)
- `CSFVectorFunctionMagnitudeTest.cpp`: Implements the `getMaxMagnitude()` functionality that is currently missing from the `CSFVectorFunction` class using the dof accessor idea.
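The `getMaxMagnitude()` idea from the second test can be sketched as a plain loop over per-component DoF values (schematic and per-process only; names and data layout are invented for the sketch, and in parallel an allreduce over processes would follow):

```c++
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Schematic per-process max-magnitude computation for a vector function whose
// components expose their DoF values as iterable ranges (one vector per
// component, index i addressing the same DoF in each component).
double getMaxMagnitude( const std::vector< std::vector< double > >& components )
{
   const std::size_t nDoFs  = components.front().size();
   double            maxMag = 0.0;
   for ( std::size_t i = 0; i < nDoFs; ++i )
   {
      double sq = 0.0;
      for ( const auto& comp : components )
         sq += comp[i] * comp[i];
      maxMag = std::max( maxMag, std::sqrt( sq ) );
   }
   return maxMag; // in parallel: follow up with an allreduce( max )
}
```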
# Remarks
- Note that this is only a demo implementation, so there is much room for improvement. Especially the iterators should be improved if we want to go with this approach.
- Another question is whether an iterator should also provide a flag to only include certain primitives, in the same way as e.g. the `interpolate()` and `apply()` methods.
- For full usability one would often also need to obtain the micro-coordinates for a DoF (if available for the corresponding FE function representation). This could be handled in multiple ways:
- as part of a specialised iterator that adds this info to the dof identifier
- by a separate query method
- as a secondary `readDoFValue()` function that also returns the coordinates
Please feel free to comment.
Marcus Mohr