I use Parameter Optimisation for: 1) Probabilistic Neural Network Learner (DDA), 2) Simple Regression Tree Learner, 3) Tree Ensemble Learner (Regression), 4) Weka AdditiveRegression (3.7) and 5) Fuzzy rule nodes.
I can’t find information on the optimization boundaries and formats, e.g.:

for AdditiveRegression I use “S” (specify shrinkage rate, default = 1.0, i.e. no shrinkage) and “I” (number of iterations, default 10), but I don’t know whether “S” should be an integer or a double, what its boundaries are, etc.

for the Probabilistic Neural Network I use “Theta Minus” (defines the upper boundary of activation for conflicting rules; default value is 0.2) and “Theta Plus” (defines the lower boundary of activation for non-conflicting rules; default value is 0.4), but I don’t know whether “Theta Minus” and “Theta Plus” can be more than 1.

For such questions I myself would look into the documentation of the original tool or the original paper.
In particular:

Shrinkage is typically a double between 0 and 1. The default, as the node description suggests, is 1.0. Values above 1 do not make sense, as they would imply no convergence. Values below 0 do not make sense, as they would imply moving in the opposite direction from the gradient. See the original paper as quoted in the Weka AdditiveRegression docs: http://www-stat.stanford.edu/~jhf/ftp/stobst.ps
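To see why shrinkage is a double in (0, 1], here is a minimal sketch of additive regression (boosting on residuals). It uses a trivial "mean" base learner purely for illustration — Weka's AdditiveRegression uses decision stumps by default, and the function name here is hypothetical:

```python
def additive_regression(y, shrinkage=1.0, iterations=10):
    """Fit `iterations` base models on residuals; each correction
    is damped by `shrinkage` (a double in (0, 1])."""
    prediction = [0.0] * len(y)
    for _ in range(iterations):
        # Base learner here is just the mean of the residuals.
        residuals = [t - p for t, p in zip(y, prediction)]
        base = sum(residuals) / len(residuals)
        # Shrinkage scales the step taken toward the residuals.
        prediction = [p + shrinkage * base for p in prediction]
    return prediction

targets = [1.0, 2.0, 3.0]
full = additive_regression(targets, shrinkage=1.0)    # reaches the mean immediately
damped = additive_regression(targets, shrinkage=0.5)  # approaches the mean gradually
```

With shrinkage = 1.0 each iteration takes the full correction (no shrinkage, as the node description says); with 0 < S < 1 the model converges more slowly but is regularized. S > 1 would overshoot, and S < 0 would move away from the target, matching the bounds argued above.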

I tried to look into the documentation of the original tools, but sometimes the original paper uses different terminology. E.g., in the case of the Probabilistic Neural Network, the paper you mention does not contain the terms “Theta Minus” and “Theta Plus”. Thanks again for explaining where to find the answer in this particular case (Fig. 2, “The two thresholds used by the DDA algorithm”).
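On the original question of whether the thetas can exceed 1: a hedged sketch of the DDA threshold check, assuming Gaussian basis functions (as in the DDA paper). Since a Gaussian activation exp(-d²/σ²) never exceeds 1, thresholds above 1 could never be reached, which suggests both thetas are doubles in (0, 1) with Theta Minus < Theta Plus. The function names below are illustrative, not the node's actual API:

```python
import math

def gaussian_activation(distance, sigma=1.0):
    """Activation of a Gaussian prototype; always in (0, 1]."""
    return math.exp(-(distance ** 2) / (sigma ** 2))

def covers(activation, theta_plus=0.4):
    """A prototype of the correct class 'covers' a pattern
    if its activation reaches Theta Plus."""
    return activation >= theta_plus

def conflicts(activation, theta_minus=0.2):
    """A prototype of a wrong class conflicts with a pattern
    if its activation exceeds Theta Minus (its radius must then shrink)."""
    return activation > theta_minus
```

A nearby pattern (small distance) yields an activation close to 1 and is covered; a distant one falls below both thresholds and triggers neither rule.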