I have just executed an example use case from the KNIME book Codeless Deep Learning with KNIME.
The use case is Sentiment Analysis Training with KNIME, and the accuracy I get is about 10% lower than the value reported in the book. Does anyone know why that might be?
Reliability of results (replicability) is important, or at least interpretability of why they differ.
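For what it's worth, run-to-run accuracy differences in deep learning training often come from unseeded randomness (weight initialization, shuffling, dropout). A minimal sketch of how one might pin the seeds, assuming the KNIME Keras integration's Python environment with TensorFlow 2.x (e.g., inside a Python scripting node; the seed value itself is arbitrary):

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42  # any fixed integer works; 42 is just an example

os.environ["PYTHONHASHSEED"] = str(SEED)  # stabilize Python hashing
random.seed(SEED)        # Python's built-in RNG
np.random.seed(SEED)     # NumPy RNG (shuffling, some initializers)
tf.random.set_seed(SEED) # TensorFlow/Keras RNG (weight init, dropout)
```

Even with fixed seeds, some GPU operations remain non-deterministic, so small differences can persist; a consistent 10% gap, though, points to a difference in the workflow itself rather than randomness.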
I assume that we made a mistake when adapting the workflow for the book chapter, because the original workflow (Sentiment Analysis – KNIME Hub) gives the same results as those reported in the book.
Thanks for reporting the issue. I hope we find the cause soon, and we will correct the book-chapter workflow accordingly.