I am using the parameter optimization loop nodes with an SVM Learner and Predictor. The optimization loop asks for a fixed step size. I have a large number of optimizations to do, with a few parameters to adjust in each one. Searching for the optimum with a fixed step size is not a good fit for my problem. I have looked up possible solutions, and machine learning experts suggest using a logarithmic-scale step size. Is it possible to do this in KNIME?
I have one more question that might be a bit naive. The SVM Learner node offers an option to choose the "overlapping penalty", but I could not really find its mathematical/scientific meaning. A brief explanation or a reference for what this value is would be very helpful.
Using the optimization loop nodes this is not possible. However, you can construct a loop yourself that performs the same straightforward approach as the optimization loop nodes.
To do so, you first need a table with the parameter values, which you use as the configuration table via a Table Row To Variable Loop Start node. Then you use those parameters to evaluate your SVM, and finally collect the parameters and the achieved accuracy with a Loop End node.
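To illustrate the first step, here is a small sketch of how you might generate the log-spaced parameter values for that configuration table (the bounds 10^-3 to 10^3 and the step count are just illustrative assumptions; adjust them to the parameter you are sweeping):

```java
// Sketch: generate log-spaced candidate values for a parameter sweep.
// These values would form the configuration table that drives the
// Table Row To Variable Loop Start node.
public class LogSteps {

    // n values whose exponents are evenly spaced between minExp and maxExp
    public static double[] logSpace(double minExp, double maxExp, int n) {
        double[] values = new double[n];
        for (int i = 0; i < n; i++) {
            double exp = minExp + i * (maxExp - minExp) / (n - 1);
            values[i] = Math.pow(10, exp);
        }
        return values;
    }

    public static void main(String[] args) {
        // e.g. 7 candidates from 10^-3 to 10^3: 0.001, 0.01, ..., 1000
        for (double v : logSpace(-3, 3, 7)) {
            System.out.println(v);
        }
    }
}
```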
There's always the option of using "Java Edit Variable (simple)" to apply arbitrary transformations to the linear steps generated. A "Variable Math Formula" node unfortunately doesn't exist to achieve this with simpler syntax... :-)
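As a sketch of what that transformation could look like inside the node (the flow-variable name `step` is an assumption, and the exact way the node exposes flow variables may differ between KNIME versions, so check the node's dialog):

```java
// Hypothetical Java Edit Variable (simple) expression: map the linear
// loop step onto a logarithmic scale, e.g. step -3..3 becomes 0.001..1000.
// "step" stands in for whatever your loop's counter variable is called.
return Math.pow(10.0, step);
```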
Thank you guys for the tips. I will try to construct my own loop as you suggested. I may have to seek help again in case this doesn't work for me, if you don't mind :)
The other part of my question was about the overlapping penalty, for which I couldn't find proper documentation to understand its mathematical meaning. If you could point me towards the documentation or a reference, that would be great.
The overlapping penalty is useful when the input data is not linearly separable. It determines how much penalty is assigned to each point that ends up on the wrong side of the margin, i.e., is misclassified. A good starting value for it is 1.
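For the mathematical side: as far as I can tell, the overlapping penalty is the regularization constant C of the standard soft-margin SVM formulation (Cortes & Vapnik, 1995), which weights the slack variables ξ_i of training points that violate the margin:

$$\min_{w,b,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i \quad\text{s.t.}\quad y_i(w^\top x_i + b) \ge 1-\xi_i,\quad \xi_i \ge 0$$

A larger C penalizes overlap more strongly (fewer training errors, but a narrower margin and a higher risk of overfitting), while C close to 0 tolerates misclassification almost freely.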
It's essentially a symmetric misclassification cost modifier, i.e., it penalises false positives and false negatives equally (or not at all if =0). I'd pragmatically treat it as yet another tuning parameter to cycle through. :-)