Keras Network Executor outputs not equal to 100%

Hello everyone!

Recently, I was training a deep neural network and encountered some rather strange behavior.


The predicted values in each of these rows are supposed to total to 100% (or, 1.0). However, this clearly is not the case.

Below is the configuration for the Target Data used in my Keras Network Learner node.

I am not sure if the loss function, or any other parameter in the configuration, could be leading to this. I am attempting to solve a binary classification problem, but I have converted my two classes into percent probabilities (i.e., 90% or 0.9 probability that the observation is Class_1, 10% or 0.1 probability that it is Class_2). The inputs are doubles as well… My approach may be the reason for this miscalculation. Does anyone have any suggestions? Thank you in advance!

Hello @JoshuaMMorriss , to help with all questions related to deep learning in KNIME, could we request the following items from you:

  1. Your KNIME logs (how to get logs is here)
  2. Current KAP version
  3. Your workflow

Thank you so much! Please be sure to tag my name using the @ sign so that I can see your reply.



Hi @JoshuaMMorriss,
It seems like nothing in your network architecture forces the network to output a valid probability distribution. During training, you feed in data with a valid probability distribution and you use an appropriate loss function, and the network tries to reproduce those targets, but nothing constrains its outputs to sum to 1.
To get a valid probability distribution, you can try the “softmax” activation function for the last layer. Alternatively, use the “sigmoid” activation function with only one output neuron x, and use x as the probability of Class_1 and 1-x as the probability of Class_2.
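To illustrate why either fix works, here is a small NumPy sketch (the logit values are made-up numbers for demonstration, not taken from the workflow): softmax normalizes the two output neurons so each row sums to exactly 1, while a single sigmoid output x yields a valid two-class distribution as (x, 1-x).

```python
import numpy as np

# Hypothetical raw network outputs (logits) for three rows; values are illustrative.
logits = np.array([[2.0, 0.5],
                   [-1.0, 1.5],
                   [0.3, 0.3]])

# Option 1: softmax over the two output neurons.
# Subtracting the row max before exponentiating is a standard numerical-stability trick.
shifted = logits - logits.max(axis=1, keepdims=True)
exp = np.exp(shifted)
softmax_probs = exp / exp.sum(axis=1, keepdims=True)
print(softmax_probs.sum(axis=1))  # every row sums to 1.0

# Option 2: a single sigmoid output x; use x for Class_1 and 1 - x for Class_2.
x = 1.0 / (1.0 + np.exp(-logits[:, 0]))
sigmoid_probs = np.column_stack([x, 1.0 - x])
print(sigmoid_probs.sum(axis=1))  # every row sums to 1.0
```

In the KNIME Keras layer nodes, this corresponds to selecting “Softmax” as the activation of the final dense layer (with two units), or “Sigmoid” with a single unit and deriving the second probability as 1-x afterwards.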


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.