Hello again,
I encountered a problem using Python scripting nodes:
When writing 32-bit float values to the output table (that format is very common in deep learning, since it's "precise enough" and doesn't eat 64 bits per value), they get converted to wrong values in the internal data format.
The process does not throw any errors, but the results are arbitrary numbers. Note that explicitly converting the values to float64 / double seems to work as expected (although I did not test value ranges etc.).
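For reference, the float64 workaround I mean looks roughly like this — a minimal sketch using pandas/NumPy, assuming the output table is a DataFrame (column name `pred` is just an example):

```python
import numpy as np
import pandas as pd

# A DataFrame with 32-bit float values, as a deep-learning
# framework would typically produce.
df = pd.DataFrame({"pred": np.array([0.1, 0.2, 0.3], dtype=np.float32)})

# Workaround: cast all float32 columns up to float64 (double)
# before handing the table back, so the values are not mangled
# by the conversion to the internal data format.
float32_cols = df.select_dtypes(include="float32").columns
df[float32_cols] = df[float32_cols].astype(np.float64)
```

With the cast in place, the values arrive in the output table as expected.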
I will attach a workflow for that: python_float.knwf (9.5 KB)