I managed to get good accuracy for a regression neural network using the RProp MLP node.
However, when I try to transfer the configuration to DL4J (I needed more configuration options), the accuracy drops drastically, and I am at a loss as to which parameter I am overlooking.
Welcome to the forum.
It is quite challenging to give you a correct answer at this point. Could you please give us more detail about your model, your configuration, etc.? It would be better if you could share your workflow / dataset with us (without any sensitive data).
Please check this post; it seems pretty similar to your problem, so perhaps it will be useful for you:
I have a problem with the efficiency of the DL4J algorithm for neural networks. More specifically, I want to train a neural network so that it correctly predicts the values from some dataset (so I have a regression problem). I have used as parameters the following values:
- Optimization algorithm: Stochastic Gradient Descent
- L2 regularization: 0.0001
- Learning rate: 0.01
- 23 epochs on a dataset containing 307 input samples
- Loss function: Sum of erro…
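For reference, the quoted settings map roughly onto DL4J's configuration builder like this. This is a minimal sketch, not the original poster's code: the layer sizes, the TANH hidden activation, and MSE as the loss are my assumptions (the quoted loss function is truncated), and it needs the deeplearning4j-core dependency on the classpath.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Sketch of the quoted configuration: SGD optimizer, learning rate 0.01,
// L2 regularization 0.0001. Layer widths here are placeholders.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .updater(new Sgd(0.01))   // learning rate 0.01, as quoted
        .l2(1e-4)                 // L2 regularization 0.0001
        .list()
        .layer(new DenseLayer.Builder()
                .nIn(40).nOut(40)             // assumed sizes
                .activation(Activation.TANH)  // assumed hidden activation
                .build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .nIn(40).nOut(1)                  // single regression target
                .activation(Activation.IDENTITY)  // linear output for regression
                .build())
        .build();
```

Training would then run for the quoted 23 epochs via `model.fit(...)` in a loop.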
Sorry for the late reply; I got flooded with work recently.
The input is 40 numerical variables used to predict a single numerical target.
The configuration:
- 2 layers, 40 neurons
- train : test : validate split of 7 : 2 : 1
- dataset of ~130 data points
For the DL4J configuration I tried:
- epochs: 100
- batch size: 100
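One thing worth double-checking in those numbers: with ~130 rows and a 7:2:1 split, the training partition is only about 91 rows, so a batch size of 100 makes each epoch a single full-batch update, which interacts with the SGD learning rate very differently than mini-batching does. A quick sanity check (plain Java; the counts are taken from the figures in this thread):

```java
public class SplitCheck {
    public static void main(String[] args) {
        int n = 130;                           // ~130 data points
        int train = (int) Math.round(n * 0.7); // 7 parts
        int test  = (int) Math.round(n * 0.2); // 2 parts
        int valid = n - train - test;          // 1 part (remainder)
        System.out.println(train + " " + test + " " + valid);
        // batch size 100 exceeds the training partition,
        // so every epoch is effectively one full-batch update
        System.out.println(100 >= train);
    }
}
```

Printed: `91 26 13` and `true`, so a smaller batch size (or a learning-rate sweep) may be the first thing to try.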
The generic RProp MLP was actually suitable for my problem, but I was asked in detail why that configuration was chosen (e.g., the activation function), which I can't answer; that is why I am testing with DL4J.
P.S.: I will upload the workflow when I am free.