Keras nodes yield different results from Python Keras model

Dear Community,

I was trying to replicate the following LSTM example (Part 3 → Recurrent Neural Networks) with the KNIME Keras nodes, yet they yield very different results, even though I use exactly the same settings as in the Python source code.

I spent hours trying to find out where I went wrong, but I still have no clue. Any idea what I am doing wrong?

Keras_Test_Workflow.knwf (1.2 MB)

Thank you very much for your help!

Hello Alec,

I checked the Learning Monitor of your learner, and judging from the loss plot the issue might be that we don't shuffle the training data by default.
Since version 3.6 you can enable shuffling in the Options tab of the Keras Network Learner.
Especially for small tables, this can improve performance drastically.
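For reference, plain Keras shuffles the training data between epochs by default via the `shuffle` argument of `model.fit()`, which is likely why the Python script behaves differently out of the box. A minimal sketch with made-up toy data and layer sizes (not the settings from the workflow):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# toy data: 100 sequences of length 10 with a single feature
x_train = np.random.rand(100, 10, 1)
y_train = np.random.rand(100, 1)

model = Sequential([
    LSTM(4, input_shape=(10, 1)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error")

# shuffle=True is the Keras default; the KNIME Keras Network Learner
# needs the corresponding option enabled in its Options tab (since 3.6)
model.fit(x_train, y_train, epochs=10, batch_size=10, shuffle=True, verbose=0)
```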

Cheers,
nemad

PS: There is also a difference in how you apply dropout compared to the Python script, but if Keras does what it says it does, this should be equivalent.
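For context, a minimal sketch of the two ways dropout can be specified in Keras (hypothetical layer sizes, not the settings from the workflow). The LSTM layer's `dropout` argument acts on the layer's inputs, while a separate `Dropout` layer acts on its outputs, which is one place where the two setups can diverge:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

# Variant A: dropout built into the LSTM layer
# (applied to the inputs of the layer's linear transformations)
model_a = Sequential([
    LSTM(4, input_shape=(10, 1), dropout=0.2),
    Dense(1),
])

# Variant B: a separate Dropout layer after the LSTM
# (applied to the LSTM's output activations)
model_b = Sequential([
    LSTM(4, input_shape=(10, 1)),
    Dropout(0.2),
    Dense(1),
])
```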

Thank you, nemad!
The shuffling yielded much better results.
I also experimented with the Dropout layer: in fact, the dropout seems to work differently when applied as a separate layer instead of being set in the LSTM dialog. Although the results still differ, this might simply be an effect of different random seeds or something like that; as you pointed out, it is a very small dataset.
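For anyone who wants to rule out the seed effect on the Python side, a minimal sketch of fixing the relevant random seeds (assuming a TensorFlow backend; the seed values are arbitrary):

```python
import random
import numpy as np
import tensorflow as tf

random.seed(42)          # Python's built-in RNG
np.random.seed(42)       # NumPy RNG, used e.g. for weight initialisation
tf.random.set_seed(42)   # TensorFlow RNG (tf.set_random_seed in TF 1.x)
```

Even with fixed seeds, dropout and backend-level nondeterminism can still cause small run-to-run differences.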
-Alec-
