# lbmpy issues
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues

## Cumulant test case
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/16 · Michael Kuron (mkuron@icp.uni-stuttgart.de) · assignee: Markus Holzer · updated 2021-02-08

The cumulant test case had to be reduced in !61 so that it runs faster. Unfortunately, it no longer tests long-term stability or the physics. Please extend it to a proper test case.

## Speed of sound in LB method
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/20 · Markus Holzer · assignee: Markus Holzer · updated 2021-08-12

The speed of sound cs is used to generate the equilibria for the LB methods, but the method itself does not know the speed of sound. Thus, in functions that use the LB method it is not possible to extract the speed of sound again. This is, for example, the case in force models.

All LB methods should be aware of the speed of sound and hold it as a property.

## PDF Initialisation
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/21 · Markus Holzer · updated 2021-09-28

As described in section 5.5.2.1 of the book by Krüger et al., it is often important to also treat the non-equilibrium part correctly when initializing the PDFs. This is not done in lbmpy so far and should be added.

## Demystify the parameters of LBMConfig
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/24 · Markus Holzer · updated 2021-10-24

LBMConfig defines all parameters that are used to derive the LBM collision operator. However, for new users (and even for long-time users) it is often very hard to tell which parameters even make sense in certain situations.
Some parameters always make sense, for example `streaming_pattern` or `field_name`: no matter which operator is derived, every streaming pattern can be applied, and the name of the field is obviously independent of all other definitions.
On the other side, there are parameters that are specific to particular methods. For example, `galilean_correction` only applies to D3Q27 cumulant methods, and `weighted` only applies to orthogonal MRT methods. When a user defines combinations that do not work together, two situations can arise.
The first and maybe most dangerous situation is that the parameter is silently ignored. For example, if `Method.SRT` is used as the method and `weighted` is set to `True`, the user's intent to weight something is simply ignored. While this case might be obvious (at least for somewhat experienced users), it is still dangerous, and there are combinations that are less obvious. For example, when `compressible` is set to `False` and cumulants are derived, the flag is ignored because our cumulants are only defined for compressible LB methods; likewise, `equilibrium_order` < 4 has no effect for cumulant methods because they are defined to be at least fourth-order accurate in the equilibrium. Here one really needs to know the theory quite well to understand why the combination makes no sense. Even for advanced users, this can be a pitfall.
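One way to make the silently-ignored case less dangerous would be an explicit compatibility check that runs before the derivation starts and warns about parameters that will have no effect. A minimal sketch, using the parameter names from this issue; the helper itself and its method strings are hypothetical, not actual lbmpy API:

```python
# Hypothetical compatibility check for LBMConfig parameters (sketch only,
# not actual lbmpy code). It collects human-readable warnings for parameter
# combinations that would otherwise be silently ignored.
def check_lbm_config(method, weighted=None, galilean_correction=False,
                     compressible=True, equilibrium_order=2):
    warnings = []
    if weighted is not None and method != "mrt":
        warnings.append(f"'weighted' only applies to orthogonal MRT methods; "
                        f"it is ignored for method '{method}'")
    if method == "cumulant":
        if not compressible:
            warnings.append("cumulant methods are always compressible; "
                            "'compressible=False' is ignored")
        if equilibrium_order < 4:
            warnings.append("'equilibrium_order' < 4 has no effect: cumulant "
                            "methods are at least fourth-order accurate")
    if galilean_correction and method != "cumulant":
        warnings.append("'galilean_correction' only applies to D3Q27 "
                        "cumulant methods")
    return warnings

for w in check_lbm_config("srt", weighted=True):
    print("warning:", w)
```

Such a check would turn the silent-ignore cases into explicit, documented warnings without aborting the derivation.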
The second situation is that combinations produce errors because they do not make sense or because our derivation is not general enough. An example would be defining a fluctuating LB method for an SRT scheme, which produces `Fluctuations can only be added to weighted-orthogonal methods`. While this gives the user a reasonable explanation of what happened, it is still a bit frustrating that such combinations are not explained at least somewhere in the documentation.
Not all combinations produce such helpful error messages, though. Most others simply fail with some mysterious SymPy error once the derivation starts.

## Minimal SymPy succeeded although errors are thrown
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/25 · Markus Holzer · assignee: Markus Holzer · updated 2021-10-26

## Sometimes the latest python pipeline fails
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/26 · Markus Holzer · assignee: Markus Holzer · updated 2021-10-26

The behaviour can be seen in this example:
https://i10git.cs.fau.de/holzer/lbmpy/-/jobs/660425

## Update Benchmark code
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/19 · Markus Holzer · updated 2021-10-28

In `lbmpy_test`, benchmark code can be found that covers most configurations `lbmpy` is capable of. However, it is outdated and needs an update.

## More precision problems
https://i10git.cs.fau.de/pycodegen/lbmpy/-/issues/28 · Markus Holzer · assignee: Markus Holzer · updated 2021-12-21

When Python numbers enter the derivation process of the LBM in one way or another, precision remains quite problematic. For example, the following case can occur when passing `omega = 1.8`:
```python
>>> import sympy as sp
>>> a = 1.8
>>> b = sp.Rational(4, 9) * a
>>> b.evalf(17)
0.79999999999999993
```
The reason for this behaviour is that 1.8 is converted to a `sympy.Float`, which by default assumes 15 digits of precision. I do not see an easy way to increase the precision globally, so numerical errors enter the derivation and do not vanish anymore.
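To illustrate why a global fix is awkward: the precision is a property of each individual `sympy.Float`, so higher precision has to be requested when the number is created, e.g. by constructing it from a string (a small sketch):

```python
import sympy as sp

# A Float created from a Python float literal carries the default
# 15 significant digits; creating it from a string lets us request
# more digits up front.
low = sp.Float(1.8)          # default precision, 15 digits
high = sp.Float('1.8', 30)   # 30 significant digits

print((sp.Rational(4, 9) * low).evalf(17))   # round-off visible
print((sp.Rational(4, 9) * high).evalf(17))  # accurate to 17 digits
```

Doing this for every number that enters lbmpy would require touching every entry point, which motivates the two approaches below.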
Two possible solutions:
1. Ban plain numbers completely. This means that every number entering any top-level function of lbmpy is mapped to a symbol, which is then used instead; in the end, this mapping is added in front of the subexpressions. This should not be too hard to achieve, since steps in this direction have already been made (symbols heavily simplify the derivation process in general).
2. Rewrite all numbers as rationals. For example
```python
>>> import sympy as sp
>>> a = 1.8
>>> b = sp.nsimplify(a)
>>> c = sp.Rational(4, 9) * b
>>> c.evalf(17)
0.8
```
This approach has the advantage that nice simplifications like the one above are executed directly. With rationals, we essentially end up with integer calculations, so precision should no longer play a role.
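The first approach (mapping numbers to symbols) can be sketched as follows; the `symbolify` helper and the mapping convention are hypothetical, purely to illustrate the idea:

```python
import sympy as sp

def symbolify(value, name, mapping):
    # Hypothetical helper (not lbmpy API): map a plain number to a symbol
    # and record the assignment so it can be prepended to the
    # subexpressions when the final kernel is assembled.
    sym = sp.Symbol(name)
    mapping[sym] = sp.sympify(value)
    return sym

subexpressions = {}
omega = symbolify(1.8, "omega", subexpressions)

# The derivation now works purely symbolically, so no float round-off
# can contaminate intermediate results.
expr = sp.Rational(4, 9) * omega
print(expr)  # 4*omega/9

# The numeric value is substituted back only at the very end.
print(expr.subs(subexpressions))
```

Because the number only re-enters at the very end, all intermediate simplifications stay exact, which is precisely the property the first approach is after.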