RProp MLP Learner: what type of activation function is used?

Dear KNIME Community,

I am trying to gain a scientific understanding of the calculation logic behind the KNIME RProp MLP Learner. I saw in the node description that it uses backpropagation learning in feedforward neural networks, which is understandable so far. However, I would like to know which activation function is used during backpropagation: is it the sigmoid (logistic) function or ReLU (the rectifier activation function)? Could you please ask the RProp MLP Learner node developers for this information?

Thank you in advance,
Kind regards,
Alina

Hi @Milovanova,

The node description mentions a paper that contains further details of the process:

For further details see: Riedmiller, M., Braun, H.: “A direct adaptive method for faster backpropagation learning: the RPROP algorithm”, Proceedings of the IEEE International Conference on Neural Networks (ICNN) (Vol. 16, pp. 586-591). Piscataway, NJ: IEEE.
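For a rough idea of what that paper describes, here is a minimal sketch of the RPROP weight-update rule in Python. This is only an illustration of the algorithm from the reference (the constants and function names are illustrative), not the actual implementation inside the KNIME node:

```python
import numpy as np

# Sketch of the RPROP update rule from Riedmiller & Braun (1993).
# Hypothetical names and constants; not the KNIME node's code.
ETA_PLUS, ETA_MINUS = 1.2, 0.5       # step-size increase / decrease factors
DELTA_MIN, DELTA_MAX = 1e-6, 50.0    # bounds on the per-weight step size

def rprop_update(w, grad, prev_grad, delta):
    """One RPROP step for a weight array `w`.

    RPROP adapts a separate step size `delta` per weight and uses only
    the *sign* of the gradient, not its magnitude.
    """
    sign_change = grad * prev_grad
    # Same sign as the previous step: direction is stable, grow the step.
    delta = np.where(sign_change > 0,
                     np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    # Sign flipped: the last step overshot a minimum, shrink the step.
    delta = np.where(sign_change < 0,
                     np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # Where the sign flipped, skip the update this iteration (RPROP- variant).
    grad = np.where(sign_change < 0, 0.0, grad)
    # Move each weight against the sign of its gradient by its own step size.
    w = w - np.sign(grad) * delta
    return w, grad, delta
```

Note that RPROP only governs how the weights are updated; the question of which activation function the network applies in its hidden and output units is separate and would still need confirmation from the node developers or the source code.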

best,
Gabriel

