More precision problems
When Python numbers enter the derivation process of the LBM in one way or another, precision remains problematic. For example, the following can occur when passing omega = 1.8:
>>> a = 1.8
>>> b = sp.Rational(4, 9) * a
>>> b.evalf(17)
0.79999999999999993
The reason for this behaviour is that 1.8 gets converted to sympy.Float, which by default assumes 15 digits of precision. I do not see an easy way to increase this precision globally. Thus numerical problems enter the derivation and will not vanish anymore.
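To illustrate why the error cannot be recovered later: the binary representation error is baked into the sympy.Float at conversion time, so asking evalf for more digits afterwards only reveals the stored inexact value (a minimal demonstration, not lbmpy code):

```python
import sympy as sp

# 1.8 becomes a sympy.Float backed by the binary double,
# which is not exactly 1.8.
a = sp.Float(1.8)
print(a.evalf(25))  # shows the stored binary value, not exact 1.8

# Requesting more digits later cannot undo the conversion error:
b = sp.Rational(4, 9) * a
print(b.evalf(17))  # 0.79999999999999993
```
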
Two possible solutions:
Ban normal numbers completely. This means that every number entering any top-level function of lbmpy is mapped to a symbol, which is then used in its place; in the end, this mapping is prepended to the subexpressions. This should not be too hard to achieve, since steps in this direction have already been made: symbols heavily simplify the derivation process in general.
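A minimal sketch of this idea (the function name create_relaxation_term is hypothetical, not lbmpy API): the numeric parameter is replaced by a symbol before it touches any arithmetic, so all simplifications happen exactly, and the value is only substituted at the very end.

```python
import sympy as sp

# Hypothetical top-level entry point: the numeric parameter is
# mapped to a symbol before it enters the derivation.
def create_relaxation_term(omega_value):
    omega = sp.Symbol("omega")           # placeholder symbol
    expr = sp.Rational(4, 9) * omega     # derivation stays exact
    return expr, {omega: omega_value}    # mapping prepended later

expr, mapping = create_relaxation_term(1.8)
print(expr)  # 4*omega/9

# The floating-point value only enters once, at the very end
# (e.g. when emitting generated code):
print(expr.subs(mapping))
```

The key point is that the rounding error occurs only once, at the final substitution, instead of contaminating every intermediate simplification.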
Rewrite all numbers as rationals. For example:
>>> a = 1.8
>>> b = sp.nsimplify(a)
>>> c = sp.Rational(4, 9) * b
>>> c.evalf(17)
0.8
This approach has the advantage that nice simplifications like the one above are executed directly. With rationals we essentially end up with integer calculations, so precision should no longer play a role.
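The rational approach can be sketched as follows: sp.nsimplify turns 1.8 into the exact Rational 9/5, after which the whole computation stays in exact integer arithmetic.

```python
import sympy as sp

# Assumed convention: every incoming float is converted to an exact
# Rational before it participates in any arithmetic.
omega = sp.nsimplify(1.8)          # Rational(9, 5)
expr = sp.Rational(4, 9) * omega
print(expr)                        # 4/5 -- exact, no rounding involved
print(expr.evalf(17))              # 0.80000000000000000
```
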