Dear Keerthan,
Thanks for your direct (and detailed) answer and your comments, and special thanks for the (uphill) task you probably had while working on my KNWF.
I updated my workflow based on your suggestions (where possible, increasing the step size), as far as I could understand the specifics of your changes. I'm sending you (attached) my new version of that workflow.
WF_answers_to_Keerthan 1.knwf (1.9 MB)
Still, I feel compelled to raise with you a few things I couldn't work out:
- (on your original answer) the component presents only 10 results in the "Parameter Optimization Loop End" > "All parameters" view. How can I increase this number?
a) on SVM, I've increased the step sizes, as you suggested, changing them to:
[image: updated SVM parameter ranges and step sizes]
and thus I got a low "best accuracy" (0.2835) in the "Best parameters" right-click view of the component:
[image: SVM "Best parameters" output]
Opening the component, I also saw that almost all SVM predictions (all but 4 of the 55 instances) fell into the same class (= "0.1"). So I guess the real accuracy was essentially zero: the model simply assigned nearly everything to one class, and the few correct classifications coincided with instances that actually belonged to that class just by chance. What could I do now to fix that?
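As a side note, this is the kind of search I have in mind, sketched in scikit-learn rather than KNIME (the synthetic data, feature counts, and parameter ranges below are only illustrative stand-ins for my actual table, not the real experiment):

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for my 55-instance dataset.
X, y = make_classification(n_samples=55, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Scaling matters a lot for RBF SVMs; without it, one wide-range feature
# can push every prediction into the majority class.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Log-spaced grid instead of a linear step size: C and gamma act
# multiplicatively, so linear steps waste most of the search budget.
grid = {"svc__C": np.logspace(-2, 3, 6),
        "svc__gamma": np.logspace(-4, 1, 6)}
search = GridSearchCV(pipe, grid, cv=StratifiedKFold(n_splits=5),
                      scoring="accuracy")
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_te, y_te))
# If this is dominated by one label, the model collapsed to the majority class.
print("prediction distribution:",
      Counter(search.best_estimator_.predict(X_te)))
```

Printing the prediction distribution makes the majority-class collapse I described immediately visible, which is what I would like to reproduce (and avoid) inside the KNIME loop.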
b) on k-NN (testing k values in the interval [2; 20]), the accuracy attained was 0.273 (for both 2 and 3 nearest neighbors). After these results, I applied the "Line Plot" node, which rendered this graph:
[image: line plot of accuracy vs. number of neighbors k]
According to the Elbow Method, k = 2 or 3 (the first "elbow") would be the best values (I chose k = 3). I also tried to test (and compare with) (almost) the same application, but with no loops and with k = 3, which rendered the following confusion matrix:
[image: confusion matrix for k-NN with k = 3]
The classification in this matrix looked somewhat strange. The data seem to be fully dispersed across it, which leads to the supposition that the "right classifications" were generated just by chance. Does it look the same to you? If so, what could I do to reduce this dispersion?
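Again just as a sketch of what I am doing, here is the same elbow sweep and fixed-k confusion matrix in scikit-learn (synthetic stand-in data; my real workflow does this with the KNIME loop nodes):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=55, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

ks = range(2, 21)  # the same interval [2; 20] as in the KNIME loop
scores = [cross_val_score(make_pipeline(StandardScaler(),
                                        KNeighborsClassifier(n_neighbors=k)),
                          X, y, cv=5).mean()
          for k in ks]

# Equivalent of the "Line Plot" node: accuracy vs. k, for the elbow reading
plt.plot(list(ks), scores, marker="o")
plt.xlabel("k (number of neighbors)")
plt.ylabel("mean CV accuracy")
plt.show()

# Confusion matrix for the fixed k = 3 model on a held-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(),
                      KNeighborsClassifier(n_neighbors=3)).fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))
```

A matrix whose counts are spread roughly evenly off the diagonal is what chance-level performance looks like, which matches my suspicion above.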
c) on Naïve Bayes (NB), (almost) everything was different.
That happened while I was trying to follow your suggestions. I increased the step size according to the table below:
[image: updated Naïve Bayes parameter ranges and step sizes]
Thus, my "best results" were remarkably different from the ones I got before:
[image: Naïve Bayes "Best parameters" output]
Then I opened the component and selected "All parameters" (on the "Parameter Optimization Loop End"), and I saw that the accuracies were equal (= 0.709) for all 10 parameter combinations. Why are the Naïve Bayes results so different from the former ones?
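If it helps to see how all 10 accuracies could come out identical: Naïve Bayes predictions depend only on which class posterior is largest, so a smoothing parameter often has no effect on that argmax over a wide range. A small scikit-learn illustration (GaussianNB's var_smoothing is its only comparable knob here; this is just a stand-in for the KNIME NB settings, not the same parameter):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=55, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Sweep the smoothing knob over 10 values; across a wide range it often
# leaves the argmax of the class posteriors, and hence the accuracy, unchanged.
for vs in np.logspace(-12, -3, 10):
    acc = cross_val_score(GaussianNB(var_smoothing=vs), X, y, cv=5).mean()
    print(f"var_smoothing={vs:.0e}  accuracy={acc:.3f}")
```

If the KNIME parameter behaves similarly, identical accuracies for all 10 rows would mean the swept range simply does not change any prediction, not that the optimization is broken.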
d) on MLP, I applied the following (as you suggested):
[image: MLP parameter settings]
And got these results (= 0.7717), which are somewhat similar to NB's, but remarkably different from those of the remaining algorithms:
[image: MLP "Best parameters" output]
Would you mind helping me understand such large differences?
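For completeness, the equivalent experiment sketched in scikit-learn (again with synthetic stand-in data; the hidden-layer sizes and regularization values below are plausible guesses of mine, not your suggested settings):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=55, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Small networks and explicit regularization, since 55 instances is tiny
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=2000, random_state=0))
grid = {"mlpclassifier__hidden_layer_sizes": [(5,), (10,), (10, 5)],
        "mlpclassifier__alpha": [1e-4, 1e-2, 1.0]}
search = GridSearchCV(pipe, grid, cv=5).fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```

With so few instances, I would expect the MLP and NB numbers to move around quite a bit between runs, which is part of what I am asking about.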
e) on PNN (as in your original answer, using "Minimum standard deviation" as "Theta Minus" and "Threshold standard deviation" as "Theta Plus"), I also increased the step sizes:
[image: PNN parameter ranges and step sizes]
And got these results:
[image: PNN "Best parameters" output]
The PNN's accuracy (= 0.299) is somewhat close to the SVM and k-NN results, but once again very different from NB's and MLP's.
Can you enlighten me about what is happening (or should have happened)?
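In case it is useful for the discussion: KNIME's PNN Learner (DDA) is its own algorithm, but a classic Specht-style PNN reduces to a Gaussian Parzen-window classifier, and a tiny implementation shows how strongly the accuracy depends on the smoothing width. Maybe the theta range I searched has an analogous problem (everything below is my own sketch, not the KNIME node):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def pnn_predict(X_train, y_train, X_test, sigma):
    """Classic Specht PNN: one Gaussian kernel per training pattern,
    averaged per class; predict the class with the largest density."""
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances from each test point to every pattern of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

X, y = make_classification(n_samples=55, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# The smoothing width is the knob that matters: too small memorizes,
# too large blurs all classes together and accuracy drops to chance.
for sigma in (0.1, 0.5, 1.0, 2.0):
    acc = (pnn_predict(X_tr, y_tr, X_te, sigma) == y_te).mean()
    print(f"sigma={sigma}  accuracy={acc:.3f}")
```

If the theta values I swept correspond to an unsuitable smoothing regime, that could explain why PNN lands near the chance-level results of SVM and k-NN rather than near NB and MLP.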
Thanks again for all your help.
B.R.,
Rogério.