Today I ran some tests with the DL4J Feedforward Predictor (Classification) node. I noticed something interesting when I timed how long it took to execute. The execution time for
1600 images was 2:40 minutes, for
160 images it was 0:40 minutes, and for
16 images it was 0:30 minutes.
One would guess that for 16 images it would need around 1/100 of the time it needed for 1600 images, but that isn't the case. So, just out of curiosity: why?
Generally, it is hard to compare these execution times because we do not know the implementation details of the DL4J library. For 16 images, most of the execution time is likely spent on DL4J startup, loading the model into KNIME, and data transfer between KNIME and DL4J. There could also be some optimizations that happen on the fly. Unfortunately, I can't give you a detailed explanation.
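The reported numbers actually fit a fixed-overhead picture quite well. Here is a quick sketch (a hypothetical linear model, T(n) = overhead + per_image * n, fitted to the two larger runs; it is not based on DL4J's actual internals) that checks the prediction against the 16-image run:

```python
# Measurements from the post above: images -> seconds
measurements = {1600: 160.0, 160: 40.0, 16: 30.0}

# Fit T(n) = overhead + per_image * n using the two larger runs
per_image = (measurements[1600] - measurements[160]) / (1600 - 160)
overhead = measurements[160] - per_image * 160

predicted_16 = overhead + per_image * 16
print(f"fixed overhead  ~ {overhead:.1f} s")        # roughly 27 s
print(f"per-image cost  ~ {per_image * 1000:.0f} ms")
print(f"16-image prediction ~ {predicted_16:.1f} s (observed: 30 s)")
```

The model predicts about 28 s for 16 images versus the observed 30 s, which is consistent with a constant startup/transfer cost of roughly 27 s dominating the small runs.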
I see ... so the DL4J startup has to happen every time new images are loaded. That will not change in the near future, correct?
Correct, and there is no way to get around that.