I am trying to implement deep learning methods with KNIME.
The problem is execution time.
My example is a CNN for the Fashion-MNIST data set with 60,000 training images.
I tried three methods:
(M1) Keras layer nodes + DL Python Network Learner node (the script only calls model.fit)
(M2) DL Python Network Learner node alone (the script intentionally contains all the code for model construction and fitting; a sketch of such a script is shown after this list)
(M3) The same code as (M2), but run in a separate Python program outside KNIME
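To make the difference between (M1) and (M2) concrete, here is a rough sketch of what my all-in-one (M2) script looks like. This is only an illustration, not my exact code; in particular, the variable names input_table and output_network, the table layout (784 pixel columns followed by a label column), and the Keras import path are assumptions that may differ in your KNIME Python environment.

```python
# Rough sketch of the (M2) "all-in-one" DL Python Network Learner script.
# Assumptions: the node exposes the training table as a pandas DataFrame named
# input_table (784 pixel columns, then a label column) and expects the trained
# model in a variable named output_network. Adjust the import if your KNIME
# environment uses standalone Keras instead of tf.keras.
from tensorflow import keras

# Reshape the flattened 784-pixel rows into 28x28x1 images and normalize.
x_train = input_table.iloc[:, :784].values.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train = keras.utils.to_categorical(input_table.iloc[:, 784].values, 10)

# Build and compile the CNN directly in the script instead of with Keras layer nodes.
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)

output_network = model
```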
The execution times were:
(M1) 40 min per epoch
(M2) 6 min per epoch
(M3) 6 min per epoch
I tried the Columnar backend, but it made no difference.
I think M1 shows the real strength of KNIME: an analytic method is built from blocks,
which makes the model easy to implement and interpret.
In other words, I want to use M1.
Is there any way to reduce the execution time of M1?
Or is the longer running time an inevitable cost of easy implementation that I should accept?
No, I use the CPU. However, the computing environment is the same as in (M2) and (M3).
My hope is to make the execution time of (M1), which uses the KNIME Keras layer nodes, comparable to that of (M2) and (M3).
I am not trying to match the speed of a GPU setup.
Maybe you could upload your example workflow for the M1 case so we could see your approach in more detail. In particular, it would be good to see which Keras nodes you are using, and how you are combining them with the DL Python Network Learner.
For a simple example of how you might train a CNN (in this case, on the MNIST dataset), see below. How does this workflow differ from what you’re doing?
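For reference, a minimal standalone script along those lines might look as follows. This is only a sketch of the kind of training such a workflow performs, not the workflow itself; it assumes TensorFlow/Keras is installed and that MNIST can be downloaded via keras.datasets.

```python
# Minimal standalone sketch of training a small CNN on MNIST with Keras.
# Assumes TensorFlow/Keras is installed and the dataset can be downloaded.
from tensorflow import keras

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# A simple convolutional network for 10-class digit classification.
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))
```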