Dear KNIME Community,
I am trying to gain a scientific understanding of the calculation logic of the KNIME RProp MLP Learner node. I saw from the description that it uses backpropagation learning in feedforward neural networks, which is understandable so far. However, I would like to know more specifically: what type of activation function is used during backpropagation? Is it the sigmoid (logistic) function or ReLU (the rectifier activation function)? Could you please ask the RProp MLP Learner node developers for this information?
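For context, here is a minimal sketch of the two activation functions I am asking about, together with the derivatives that backpropagation would use. This is purely my own illustration and makes no claim about what the KNIME node actually implements:

```python
import math

def sigmoid(x):
    """Logistic activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative used in backpropagation: s * (1 - s)."""
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    """Rectifier activation: zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def relu_derivative(x):
    """Derivative used in backpropagation: 0 for x < 0, 1 for x > 0."""
    return 1.0 if x > 0 else 0.0

# Quick comparison at a few sample points
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  relu={relu(x):.4f}")
```

The practical difference matters for learning behavior: sigmoid saturates for large inputs (its derivative approaches zero), while ReLU keeps a constant gradient of 1 for positive inputs, which is why the distinction I am asking about is relevant.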
Thank you in advance,