hyteg issues

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/253: Scaled Apply + GEMV (Nils Kohl, 2024-03-08)

The proposal of scaled operators in !703 initiated this issue.
Having the standard BLAS level 2 routine `gemv` (see e.g. [here](https://spec.oneapi.io/versions/latest/elements/oneMKL/source/domains/blas/gemv.html))
```math
y \gets \alpha A x + \beta y
```
as part of the `Operator` interface would be great for obvious reasons.
What we currently can do is
```math
y \gets A x + \gamma y
```
where $\gamma \in \{0, 1\}$ (a.k.a., `Replace` and `Add`).
Implementing the full `gemv` would be the optimal fix. However, since many of the matrix-free operator implementations work in an "additive" fashion (in particular all elementwise operators, including the newly generated versions), I do not see an obvious way around touching $y$ (i.e., the "dst" vector) *twice* to apply the scaling with $\beta$. If there is no obvious solution, we are forced to first scale $y$ (if $\beta \neq 0$) and then perform a scaled matrix-vector multiplication after that. We could do this scaling of $y$ inside the operator, but I do not see a real reason to do that inside the operator rather than just calling `assign` beforehand.
MR !703 proposes an implementation that enables scaled operators
```math
y \gets \alpha A x + \gamma y
```
that is in my opinion not optimal since $\alpha$ becomes part of the operator's state.
To resolve the "state"-issue mentioned above, I propose to actually extend the `apply` routine by a parameter $\alpha$ instead of changing the operator's state:
```cpp
virtual void apply( const ValueType& alpha, // new parameter
                    const SourceFunction& src,
                    const DestinationFunction& dst,
                    size_t level,
                    DoFType flag,
                    UpdateType updateType = Replace ) const
```
Refactoring this requires some work, but should hopefully be doable. I'd keep the current apply, and just make it call the new one with $\alpha = 1$.
-----
EDIT: We basically already do this:
> We can do this scaling of $y$ inside the operator, but I do not see a real reason to do that inside the operator and not just call `assign` beforehand.
if `UpdateType == Replace` since we [interpolate `0`](https://i10git.cs.fau.de/hyteg/hyteg/-/blob/0a8460a3dd538d9d91a194b5d5588c1a4ef2b1ac/src/hyteg/elementwiseoperators/P1ElementwiseOperator.cpp#L96).
Therefore, for completeness I'll also propose an interface for the `gemv`:
```cpp
virtual void gemv( const ValueType& alpha, // new parameter
                   const SourceFunction& src,
                   const ValueType& beta, // new parameter
                   const DestinationFunction& dst,
                   size_t level,
                   DoFType flag ) const
```
with the perk that it gets rid of the `UpdateType` parameter.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/252: Elementwise operator applied to parts of domain (Maximilian Dechant, 2024-03-11)

In some cases it would be useful to apply an operator to only some part of the domain.
In particular, I need to apply a P1ElementwiseOperator to only a few faces that satisfy a certain condition. This is currently done by using a modified apply function that takes such a condition as an argument and then checks the condition for every face.
Maybe this is useful for others as well, in which case it could be implemented as a new feature; or maybe there is already a method to do this which I am unaware of.
EDIT: A first simple version of the P1ElementwiseOperator that is only applied to some faces can be seen on the branch "dechant+wagneran/LaplaceBeltrami2".

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/250: Generate missing pieces for all relevant Stokes preconditioners (Nils Kohl, 2024-03-06)

Emerged as a remainder from #246.
Mainly we need the diagonal operators and some lumping, as well as [some smoke tests](https://en.wikipedia.org/wiki/Smoke_testing_(software)) in HyTeG.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/249: Gradient of FE functions (Ponsuganth Ilangovan Ponkumar Ilango, 2024-03-11)

This issue is to discuss whether there is already some implementation to evaluate the gradient of an FE function. One related function I could find was [evaluateGradient](https://i10git.cs.fau.de/hyteg/hyteg/-/blob/master/src/hyteg/p2functionspace/P2Function.hpp#L136), but this is only for 2D and I am not sure whether it is well tested either.
For example, we could evaluate the gradient of a scalar FE function and get the result as a vector FE function. There would be multiple issues to deal with, like what we should do on (faces,) edges and nodes, and how efficiently we can perform the evaluation.
I feel this would be useful when we go towards nonlinear rheologies (viscosity) and also maybe for some post-processing in geophysics, like studying the stress field obtained from the velocity field.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/247: Add template viscosity profiles as input data for convection simulations (Eugenio D'Ascoli, 2024-03-18)

For convection simulations with non-constant viscosities (in radial direction or temperature-dependent), a viscosity profile must be read in and stored in the corresponding data container.
Currently, no "standard" template viscosity profiles are available.
The idea is to provide a set of template viscosity profiles for simulation runs. These template profiles could be stored under ```hyteg/data/terraneo/viscosityProfiles```.
@HamishBrown @IsabelPapanagnou @kohl

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/246: New generated Stokes operators (Nils Kohl, 2024-03-06)

This issue shall document and **discuss** the introduction of the Stokes composite operators that are built from the new generated operators.
Also, **point out if I made a mistake** and feel free to correct this description.
## List of Stokes operators
It seems(?) that we are particularly interested in three types of Stokes operators, differing mainly in the momentum terms.
Let
```math
K = \begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix}
```
be the discrete Stokes operator for a $\mathbb{P}_2-\mathbb{P}_1$ Taylor-Hood discretization.
The operators differ in the way the $A$ block is defined. Let's summarize.
| Proposed name (prefix) | Strong form of momentum term ("velocity part") | Weak form that defines $A$ |
| ------ | ------ | ------ |
| `P2P1StokesConstant` | $- \Delta u$ | $\displaystyle\int_\Omega \nabla u : \nabla v$ |
| `P2P1StokesEpsilon` | $\displaystyle- \nabla \cdot \Big(2 \mu \varepsilon(u)\Big)$ | $\displaystyle\int_\Omega 2 \mu \Big(\varepsilon(u) : \varepsilon(v)\Big)$ |
| `P2P1StokesFull` | $\displaystyle - \nabla \cdot \Big( 2 \mu \varepsilon(u) \Big) - \frac{2}{\mathrm{dim}} \nabla (\nabla \cdot u)$ | $\displaystyle\int_\Omega 2 \mu \Big(\varepsilon(u) : \varepsilon(v)\Big) - \frac{2}{\mathrm{dim}} (\nabla \cdot u) \cdot (\nabla \cdot v)$ |

where $\varepsilon(w) = \frac{1}{2} (\nabla w + (\nabla w)^T)$.
$\mu = \mu(x)$ will eventually be a scalar finite element function (in $\mathbb{P}_2$) that is supplied by the user.
## Possible file structure
I propose the following structure - **open to suggestions**.
```
hyteg/src/
- hyteg/
- terraneo/
- ...
- hyteg_operators/ # submodule with generated operators
- hyteg_operators_composites/ # new module
- stokes/
- viscousblock/
- P2ViscousBlockLaplaceOperator.*pp
- P2ViscousBlockEpsilonOperator.*pp
- P2ViscousBlockFullOperator.*pp
- divergence/
- P2VectorToP1DivergenceOperator.*pp
- gradient/
- P1toP2VectorGradientOperator.*pp
- P2P1StokesConstantOperator.*pp
- P2P1StokesEpsilonOperator.*pp
- P2P1StokesFullOperator.*pp
- ...
```
## Blending
For each relevant blending map, additional operators will be added to the respective files. For instance
```cpp
// P2P1StokesEpsilonOperator.hpp
class P2P1StokesEpsilonOperator { ... };
class P2P1StokesEpsilonIcosahedralShellMapOperator { ... };
```
## Some notes
* I am not sure yet which quadrature rules to choose for the blending cases. For now I chose rules that integrate degree 3 polynomials exactly for blending, and degree 2 polynomials without blending (the latter should be sufficient to maintain convergence order(?)). But we may want to discuss this.
* The new composites shall _only_ use/depend on the newly generated operators and never introduce any of the existing constant stencil ops, elementwise ops etc. If something is missing, we should adapt the HFG.
* Lumped and diagonal operators are already in the making. Those are needed for all sorts of preconditioners.
* I currently only consider Taylor-Hood-style operators. Stabilized $\mathbb{P}_1-\mathbb{P}_1$ operators should be straightforward to generate, too.
* In the future, we want to enable the HFG to generate the vector-operators directly such that they do not have to be composed manually. This will likely remove the need for all operators in the subdirectories `viscousblock`, `gradient`, `divergence`, etc. But for starters, we will probably go with the composed versions.
## Checklist
- [x] generate scalar block operators
- [x] compile composites for viscous blocks (!710)
- [x] compile div and grad blocks (!710)
- [x] compile Stokes operators (!710)
- [x] tests (!710)
- [ ] compile/implement diagonal operators to use preconditioners (moved to #250)

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/239: Reducing metadata for ADIOS2 checkpointing (Ponsuganth Ilangovan Ponkumar Ilango, 2024-01-19)

Currently with ADIOS2, for checkpointing, we write out data for all primitives separately; for this we need to create ADIOS2 local variables for each primitive type. As recently discussed, this growing number of local variables puts a heavy load on the ADIOS2 metadata management and thus affects IO performance.
The current idea is to only write out data for the highest-dimensional primitive (Cell in 3D, Face in 2D), and to communicate to the other primitive types when reading in a checkpoint.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/233: Sparse matrix wrapper for Eigen (Nils Kohl, 2023-11-09)

Since we ship Eigen anyway, it would be great if we had a sparse matrix wrapper for their linear solvers (thanks @he66coqe for suggesting this!). This would also yield access to sparse _direct_ solvers without the need to install PETSc. Clearly, the Eigen implementations are not well-suited for (massively) parallel computations.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/232: Generic surface integral implementation (Ponsuganth Ilangovan Ponkumar Ilango, 2023-10-31)

I believe we currently don't have the functionality to perform generic surface integrals on the domain. The idea is to mark the boundaries of a macro primitive and perform surface integrals on them if needed. The marker can be the existing `NeumannBoundary` markers or some more generic markers for surfaces inside the domain.
The forms need to be modified a bit to accommodate the cell coordinates and the facet coordinates where the surface integral needs to be evaluated.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/231: ADIOS2 output for unresolved particles (Nils Kohl, 2023-10-19)

(See also the discussion in https://i10git.cs.fau.de/hyteg/hyteg/-/merge_requests/653#note_25667.)
ADIOS2 output is currently not supported for the unresolved particle feature. Either this has to be implemented as part of MESA-PD or through the coupling on the HyTeG side. I suppose the latter is simpler, but I am not sure yet.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/229: VTK output is not possible for functions in single precision (Michael Zikeli, 2024-01-16)

While writing VTK output, functions that have another type than double precision are simply skipped, and only the mesh structure and the functions of double or integer type are printed. There is no user notification that this happens; the function values are just missing in the output file.
A possible workaround is to use the `.copyFrom()` member function to create a printable copied function object of value type double; however, this works only for P1 functions.
Allowing VTK output for any precision (e.g. single, half, ...) would be a nice-to-have.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/219: How to support Checkpoint/Restart (Marcus Mohr, 2023-10-13)

Hi,
as the guys working on the TerraNeo app(s) have started performing longer production runs on supercomputers, the question of how to support a classical checkpoint/restart mechanism is becoming more urgent. This issue is intended to discuss thoughts on how we can/want/should work on this.
Some questions that come directly to my mind are e.g.:
- What functionality already exists in waLBerla and HyTeG to support this? (E.g., we can already (de)serialize primitives and attached data for migrating these via MPI.)
- Do we start by supporting only a one-file-per-process approach, or can we also provide some N-processes-to-M-checkpoint-files setting?
- How should an app indicate what data to checkpoint? Will we go for a registration approach like with the `VTKOutput`? Will that data always be FE functions, or do we also (need to) support other user-defined data structures?
Cheers
Marcus

(Milestone: Implement basic Checkpointing Functionality)

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/212: Incorporation of weak Dirichlet boundary conditions in test cases (Fabian Böhm, 2024-02-20)

Currently, the application of weak boundary conditions from the user view in a test case is complicated:
- in HyTeG solvers, the DoFType has to be set to `All` via `solver.setDoFType( All );` because the DoFs on the boundary are now part of the solution
- the standard strong boundary application has to be switched off in PETSc solvers by `solver.disableApplicationBC( true );`
- the user can/has to set the penalty depending on his application and operator (ensure coercivity of the operator: e.g. larger constant required in 3D)
Two ways to avoid giving the responsibility for these small yet crucial operations to the user are:
- inspecting the given operator within the solver and setting DoFs and boundary handling according to the operator type (if it applies its BCs weakly)
  (con: it is not the solver's responsibility to check how BCs should be applied but the operator's; the solver should 'just solve')
- introducing another boundary flag `WeakDirichletBC` and handling the case similarly to the existing free-slip BC
  (con: additional functions for the new boundary flag would have to be implemented, and the boundary has to be set with the new flag at the start of the test case;
  pro: IMO an intuitive way to handle weak BCs, no modifications in the solvers)
Are there other/better ideas to handle this?

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/209: P1ElementwiseOperator with weak boundary conditions (Fabian Böhm, 2023-05-10)

The aim is to implement a new P1 elementwise operator which applies its boundary conditions weakly, similarly to DGOperator.
This has the following requirements:
- A P1 Diffusion form which provides `integrateFacetDirichletBoundary()` (this exists already and is used for the [ScalarP1WithDGFormOperator](https://i10git.cs.fau.de/hyteg/hyteg/-/blob/master/src/hyteg/dgfunctionspace/ScalarP1WithDGFormOperator.hpp)).
  `integrateFacetDirichletBoundary()` is used to compute the boundary integral on the left side/operator of the weak form in the elementwise loop.
- An `applyDirichletBoundaryCondition()` within P1Function (similar to [the one in DGFunction](https://i10git.cs.fau.de/hyteg/hyteg/-/blob/master/src/hyteg/dgfunctionspace/DGFunction.cpp#L440)) which computes the corresponding boundary integral on the RHS and contains the boundary function as integrand.
- The collection of boundary/facet information, like opposite vertex coordinates and normals, is done by the [ElementNeighborInfo](https://i10git.cs.fau.de/hyteg/hyteg/-/blob/master/src/hyteg/volumedofspace/VolumeDoFIndexing.hpp#L540) object, which now has to work also with vertexDoF data structures. The infos are necessary for the computation of the boundary integral. An alternative would be another ElementNeighborInfo for the vertexDoF space.
It would be best to implement the weak application of the boundary condition as an option such that the new elementwise operator can replace the old one afterwards.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/176: Refactor PMVectorToPLScalar and PLScalarToPMVector operators (Marcus Mohr, 2022-03-16)

Hi,
to represent the weak forms of gradient and divergence we have the following operators
- PMVectorToPLScalarOperator
- PLScalarToPMVectorOperator
with currently $`(M, L)\in\{1,2\}\times\{1,2\}`$ and $`L\leqslant M`$. These classes are already templated. However, this is only w.r.t. the scalar suboperators. We still have different implementations for the different function space combinations. I suggest to
- [x] convert that into a single template class
- [x] add accessor methods to the scalar suboperators (needed e.g. to call `computeAndStoreLocalElementMatrices()` on them)
Cheers
Marcus

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/171: DG implementation (Nils Kohl, 2022-04-11)

# Planned features
* Discontinuous Galerkin implementation with arbitrary choice of (polynomial) shape functions
* arbitrary order
* p-refinement over macro-elements
* matrix-free operator evaluation
* weak form generation via HFG
* evaluation of functions and linear functionals for right-hand side assembly generated via HFG
* direct volume to volume communication (interface primitives not used)
* SIP for Poisson
Probably also(?)
* hp-multigrid (maybe also projection into continuous element space?)
* weakly imposed boundary conditions
* upwind operator
# Roadmap
- [x] Volume DoF data structure + DGFunction class w/ basic indexing functionality
- [x] DGOperator class, apply, toMatrix()
- [x] mass form, interpolation
- [x] linear form for RHS
- [x] Laplace form (SIP, interface integrals)
- [x] communication (PackInfo etc.), macro-interface integrals
- [x] inhom. Dirichlet BCs.
- [ ] higher order shape functions
- [ ] HFG refactoring and fixed DGForm API
- [x] 3D extensions
- [ ] p-adaptivity
# Possible optimizations / open refactoring / dev notes
- [ ] optimize evaluation by precomputing and storing trafo matrices on each macro-face / macro-cell (it's recomputed every time in the evaluation function)
- [ ] make all methods in DGBasisInfo const
- [ ] give DGForms the following virtual methods
* `isConstant()`: true if the form is constant over the macro - so optimization can be applied
* `isInlined()`: true if entire kernel is available
* `onlyInnerIntegrals()`: true if only volume integrals are non-zero (think mass matrix) - this allows to implement efficient(?) local inversion of the block-diagonal system without communication
- [ ] probably we should not double the information about the local polynomial degree in each macro in the DGFunction AND VolumeDoFFunction
- [x] optimize VTK evaluation by new DGFunction::evaluate( faceID, microIdx, faceType, ... ) method that skips looking for the micro-element - we know already which one we are targeting (_this is not only an optimization but also fixes slightly incorrect visualization_)
- [ ] what to do with the DoFType 'flags'?
- [x] implement affine element-local evaluation directly in HFG
- [ ] Optimize HFG trafo matrix computations by letting it know which element vertices belong to the interface.
This should allow for some cancellations since some of the input points are actually equal.
- [ ] There is an issue if the boundary macros are marked as 'Inner' as done for Stokes usually - since then the DGOperator 'thinks' that there must be another volume macro on the other side. Boom.
- [ ] Instead of resizing the element matrices, the HFG should rather assert that the size is already correct.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/164: Trying to use single or mixed precision (Marcus Mohr, 2023-04-05)

Hi,
in issue #158 the conjecture was made that trying to compile with ```real_t = float``` would break many things. Guess what, that's true :wink:. Below are some of the issues I extracted from the zillions of warnings/errors:
- There are many warnings on narrowing conversions ```conversion to ‘walberla::real_t {aka float}’ from ‘double’ may alter its value [-Wfloat-conversion]``` as we do not consistently make use of ```real_c()``` and have many double literals in the code.
- A major problem is that all the FEniCS forms are generated for double. Thus, we cannot call ```gen.tabulate_tensor()``` and pass it an array of floats.
- But also in HyTeG itself the generated kernels seem to all use double.
- Some things also get explicitly instantiated only for double, like e.g. ```EdgeDoFPackInfo```. But I did not look deeper into this.
With respect to *mixed precision* I also attempted to instantiate a ```VectorDoFFunction<float>``` while compiling with ```real_t = double```. That at least compiles, but the linker cannot satisfy all dependencies. Likely here we would need to explicitly instantiate other stuff for ```float``` then, too. Like e.g. ```float hyteg::generateZero<float>()```.
Cheers
Marcus

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/156: Building blocks for mantle convection simulations for models with variable viscosity and anelastic approximation (Marcus Mohr, 2022-02-24)

Hi,
in order to run mantle convection models on a thick spherical shell with blending and variable viscosity and the generalised mass conservation equation of the (truncated) anelastic approximation some building blocks are still missing. Especially in such models we need the
- **Epsilon Operator** given by
```math
\mathcal{A}(\vec{u}) = \text{div}\left[\mu\left(\text{grad}(\vec{u})+ \text{grad}(\vec{u})^T\right)
\right]
```
- and the **Full Viscous Operator** given by
```math
\mathcal{A}(\vec{u}) = \text{div}\left[\mu\left(\text{grad}(\vec{u})+ \text{grad}(\vec{u})^T\right)
\right] - \frac{2}{3} \text{grad}\left(\mu\,\text{div}\vec{u}\right)
```
where $`\mu`$ is the variable kinematic viscosity.
Components that need implementing are:
1. *Epsilon operators* that make use of the already available HyTeG forms which support blending and/or a callback function for viscosity; these are
- ```p[12]_epsiloncc_*_*_affine_q2```
- ```p[12]_epsiloncc_*_*_blending_q2```
- ```p[12]_epsilonvar_*_*_affine_q2```
- ```p[12]_epsilonvar_*_*_blending_q2```
currently these forms seem not to be used anywhere. Note also that, of the corresponding FEniCS forms, only the 2D version ```p1_stokes_epsilon``` seems to show up in an operator, but neither the ```p2``` variant nor the 3D versions ```p[12]_stokes_epsilon_tet```.
1. Different forms for the full viscous operator
1. Operators allowing to use the forms for the full viscous operator
1. Tests for the new forms and for the new operators
Cheers
Marcus

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/151: Add interpolate() method to CSFVectorClass that supports BoundaryUID (Marcus Mohr, 2021-05-15)

It would be convenient if the CSFVectorClass were to provide an `interpolate()` method that allows passing a BoundaryUID. This should be straightforward, as the **P[12]Function** classes and the underlying **[Vertex|Edge]DoFFunction** classes already support that.

https://i10git.cs.fau.de/hyteg/hyteg/-/issues/142: VectorFunction enhancements (Marcus Mohr, 2021-04-01)

Hi,
this issue is based on the comments for !381. That merge enhanced the implementation and functionality of the P1VectorFunction class. My comments were:
- Maybe we can move some of the methods (e.g. add, assign, dotLocal, ...) upwards to a common base class instead of re-implementing basically the same stuff again in P2VectorFunction. But that is under discussion anyhow. The question is whether we get this to work nicely together with the templatisations.
- In that respect maybe also ```filter``` could be a free function?
- I definitely suggest to rename ```getMaxMagnitude()``` for VectorFunctions, as IMHO the name is very misleading. If I ask for the maximal magnitude of a velocity field e.g. I'd expect to get that back and not the maximal magnitude over all components.
Cheers
MarcusMarcus MohrMarcus Mohr