Changing Keras deep learning network configuration within loop stops loop flow: Keras Network Learner

I have a Keras Network Learner within a loop. The loop cycles through various previously saved Keras networks (saved as .h5 files with different network configurations). Feeding into the Keras Network Learner are values from read-in tables, which also change within the loop. The number of “bitvector*” columns intentionally varies across the read-in tables. The Keras Network Learner node uses wildcard selection (i.e. bitvector*) for the input columns, and the selected columns precisely match the size of the input layer(s) of the read-in .h5 file. One would expect the wildcard selection in the Keras Network Learner to accommodate the changing number of input columns, but this does not seem to be the case. Any suggestions on a work-around strategy for what appears to be a bug in the Keras Network Learner node?

Note that the loop stops after each iteration with this warning in the logfile:
2020-02-13 18:36:11,090 : WARN : KNIME-Worker-71-Column Filter 0:318 : : Node : Keras Network Learner : 0:495 : Input deep learning network changed. Please reconfigure the node.

Manually opening the Keras Network Learner node's configuration and simply changing something seems to reactivate the node for execution. Of course, this defeats the purpose of having a loop if a manual step is required.

Any ideas on how to get past this issue? I have had no problem with the same loop strategy when reading in other .h5 files that all have the same input/hidden/output node configurations, but I really need to explore varying network configurations in this loop.

Welcome to the KNIME forum @smuskal,

I believe your problem is not related to the include/exclude settings but rather to the spec of the input and output layers of your network. The Keras Network Learner requires reconfiguration if the input and output layers change, i.e. if they have different names or different shapes.
The naming part is rather easy to handle: just make sure the input and output layers have the same names across your different network architectures.
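For example, here is a minimal sketch (assuming the networks are built with TensorFlow/Keras in Python; the layer names, sizes, and filenames are illustrative) of two architectures that differ in input size but share the same layer names:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_network(n_inputs: int) -> keras.Model:
    # Fixed names for the input and output layers, regardless of architecture.
    inputs = layers.Input(shape=(n_inputs,), name="input_layer")
    x = layers.Dense(64, activation="relu")(inputs)
    outputs = layers.Dense(1, activation="sigmoid", name="output_layer")(x)
    return keras.Model(inputs, outputs)

build_network(512).save("network_512.h5")
build_network(1024).save("network_1024.h5")
```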
The shape part can prove to be more difficult. You wrote that your inputs vary; this can be handled by using a partial input shape followed by a reshape layer to get to the actual input shape of the network (see the sketch below).
Note that, in this case, the learner cannot check whether the provided input values are compatible with the shape, and if they are not, it will crash during execution.
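In Python terms, the pattern would look roughly like this sketch (names and sizes are illustrative, and this mirrors the idea rather than KNIME's exact node mechanics). Keras accepts the partial shape when the graph is built; a size mismatch only surfaces once data actually flows through:

```python
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 512  # illustrative: what the downstream layers expect

# Partial input shape: the feature dimension is left unknown (None), so the
# input spec looks the same regardless of how many bitvector columns exist.
inputs = layers.Input(shape=(None,), name="input_layer")

# The reshape pins the shape down to what the rest of the network expects.
# If a row does not actually contain N_FEATURES values, this fails at run
# time, not at configuration time.
x = layers.Reshape((N_FEATURES,))(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid", name="output_layer")(x)
model = keras.Model(inputs, outputs)
```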
If your output shapes vary, you might be out of luck, because at least our Keras Reshape Layer does not allow defining a partial shape.
However, it might be possible if you manipulate your networks directly in Python.
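For instance, here is a hypothetical sketch of retrofitting an already-saved network with a partial-shape input by wrapping it in a new model (the filenames are placeholders, and I have not verified how well nested models round-trip through the KNIME Keras nodes, so treat this as a starting point):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load a previously trained network with a fixed input size.
old_model = keras.models.load_model("network_v1.h5", compile=False)
n_features = old_model.input_shape[-1]  # e.g. the number of bitvector columns

# Give it a new partial-shape input, reshaped to the size it expects.
inputs = layers.Input(shape=(None,), name="input_layer")
x = layers.Reshape((n_features,))(inputs)
outputs = old_model(x)

keras.Model(inputs, outputs).save("network_v1_partial.h5")
```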

Cheers,

Adrian


Thank you so much for taking the time to read and respond to my problem! I have been so impressed with KNIME, but have really struggled with this for several days.

The names of the input layer do change, but I handle this with wildcards, and as you hypothesized, this is not the issue.

So I am assuming, per your response, that the issue is the changing shape of the input layer. I would have thought that the Keras Network Reader (which is connected to the Keras Network Learner node), having read in a new .h5 network with a changed input layer shape, would address this.

Your response makes me optimistic, but I am not quite clear on how to implement your suggestion that “this can be handled by using partial shapes followed by a reshape layer to get to your actual input shape for the network.”

As mentioned, in my workflow the Keras Network Reader reads in a previously saved but differently configured network, different only in the number of input nodes. So how do I configure a partial shape and a reshape?

Note, I am using this strategy to systematically improve upon trained networks, so it is important that I am able to read in a previously saved .h5 file that already has some degree of training. After a cycle of learning, the workflow saves over the previously read-in .h5 network:

[workflow screenshot]

My apologies for the spaghetti workflow image. I have been experimenting with the idea that the order of operations might be the problem, i.e. trying to make sure the Keras Network Reader loads the .h5 file before the Keras Network Learner has a chance to pull in the data from the Table Reader. I use flow variables to control the file names of the data tables and of the neural network's .h5 file.