I would strongly recommend having a look at the slides of Dean Abbott, who gave a talk at this year's KNIME Summit. They contain several ideas about variable importance assessment. The slides are linked here.
I have to admit that until today I've done most of my variable assessment directly in code rather than in KNIME, and I don't know whether there are any ready-to-use example workflows out there.
You say you want to measure which input variables affect the "quality". Do I understand correctly that you want to measure the importance of your individual input features for the classification?
Some assorted ideas:
You might want to try training your classifier on a single input variable at a time and measuring the performance. The classifiers which perform best are obviously those trained on the "best" input variables.
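Outside KNIME, the idea can be sketched in a few lines of plain Python. Everything below (the data, the feature names, and the simple threshold classifier) is made up for illustration:

```python
# Rank features by how well a one-feature threshold classifier
# ("decision stump") separates the classes.
def stump_accuracy(values, labels):
    """Best accuracy achievable by a single-threshold classifier on one feature."""
    best = 0.0
    for t in sorted(set(values)):
        for positive_above in (True, False):
            correct = sum(
                ((v > t) == (lab == 1)) if positive_above else ((v <= t) == (lab == 1))
                for v, lab in zip(values, labels)
            )
            best = max(best, correct / len(labels))
    return best

# Toy data: feature_a separates the classes well, feature_b is noise.
labels    = [0, 0, 0, 0, 1, 1, 1, 1]
feature_a = [1.0, 1.2, 1.1, 1.3, 3.0, 3.2, 2.9, 3.1]
features  = {"feature_a": feature_a, "feature_b": [5, 1, 4, 2, 5, 1, 4, 2]}

ranking = sorted(features, key=lambda f: stump_accuracy(features[f], labels), reverse=True)
print(ranking)  # feature_a comes first
```

The same ranking logic applies whatever single-feature classifier you use; the stump just keeps the example short.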
KNIME's "Random Forest Learner" node provides an output table which tells you where in the tree each input variable occurs. The idea is that input variables which occur near the top of the tree (i.e. close to the root) are more "important", because they make a good split for most of the data. (This information is available at the second output port, titled "Attribute Statistics".)
The Palladian nodes have a dedicated "InformationGain" calculator node, which basically uses the same measure that is (often) used in decision trees and provides an output table giving the IG value for each input variable.
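For illustration, here is a plain-Python sketch of the information-gain measure itself. The discretization strategy (splitting a numeric feature at its median) is my assumption for the example, not necessarily what the Palladian node does:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(values, labels):
    """Entropy reduction from splitting the labels at the feature's median."""
    median = sorted(values)[len(values) // 2]
    left  = [lab for v, lab in zip(values, labels) if v < median]
    right = [lab for v, lab in zip(values, labels) if v >= median]
    n = len(labels)
    split_entropy = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - split_entropy

# Toy data: one feature lines up with the classes, the other does not.
labels      = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [1.0, 1.1, 1.2, 1.3, 3.0, 3.1, 3.2, 3.3]
noisy       = [1.0, 3.0, 1.1, 3.1, 1.2, 3.2, 1.3, 3.3]
print(information_gain(informative, labels))  # 1.0: the split recovers the classes
print(information_gain(noisy, labels))        # 0.0: the split is uninformative
```

Higher IG means the feature on its own tells you more about the class, which is exactly why decision trees pick high-IG features for the upper splits.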
There is one example workflow on the KNIME public server, named "variable importance".
After running the workflow I have this as output. I would like to know what exactly this shows. Does this mean Universe_1_3 has the maximum number of errors in the prediction model? And then how do we find out which input variable has the maximum influence on the output classification?
With this small number of features (as shown in your example), I wonder whether you are really looking for variable assessment methods or whether they would even be useful. Such methods usually guide you when you do not have any backing theory / domain knowledge or when there are simply too many features to select manually from. In any other case, it is IMO better to perform the selection manually.
However, if you are instead looking for a way to assess the importance of each input on the target while keeping the other inputs constant, then a good approach might be to use the Logistic Regression Learner and analyze the coefficients for each variable.
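To illustrate the idea (not the KNIME node itself), here is a minimal logistic regression fitted by plain gradient descent on made-up data; the fitted coefficient magnitudes indicate each input's influence on the target while the others are held fixed:

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Plain batch-gradient-descent logistic regression; returns (weights, bias)."""
    n_features = len(xs[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of class 1
            err = p - y
            for i in range(n_features):
                grad_w[i] += err * x[i]
            grad_b += err
        w = [wi - lr * gi / len(xs) for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / len(xs)
    return w, b

# Toy data: the first input drives the class, the second is noise.
xs = [[0.1, 0.7], [0.2, 0.1], [0.3, 0.9], [0.8, 0.2], [0.9, 0.8], [1.0, 0.3]]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(w)  # |w[0]| should clearly exceed |w[1]|
```

One caveat that applies in KNIME too: coefficients are only comparable across variables if the inputs are on similar scales, so normalize first.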
You could also directly estimate the target "Quality%" as a continuous variable instead of a binary one. The natural choice would then be the Linear Regression Learner.
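As a sketch of what such a regression coefficient tells you, here is ordinary least squares on a made-up continuous target (a single input for brevity; the values are invented):

```python
# Ordinary least squares for one input: the slope is the estimated change
# in Quality% per unit change of the input.
def ols_slope_intercept(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

x       = [1.0, 2.0, 3.0, 4.0]
quality = [52.0, 54.0, 56.0, 58.0]  # perfectly linear toy data
slope, intercept = ols_slope_intercept(x, quality)
print(slope, intercept)  # 2.0 50.0
```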
OK, in that case I would add that you might try dimensionality reduction such as PCA on your inputs, followed by a linear regression. Another approach to explore.
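A minimal sketch of that pipeline for two inputs, on made-up data (the closed-form eigendecomposition of the 2x2 covariance matrix stands in for KNIME's PCA node):

```python
import math

def pca_first_component(rows):
    """First principal component of two-column data (closed form for the
    2x2 covariance matrix); returns (direction, per-row scores)."""
    n = len(rows)
    means = [sum(r[i] for r in rows) / n for i in (0, 1)]
    centered = [(r[0] - means[0], r[1] - means[1]) for r in rows]
    a = sum(x * x for x, _ in centered) / n
    b = sum(x * y for x, y in centered) / n
    c = sum(y * y for _, y in centered) / n
    # Larger eigenvalue of [[a, b], [b, c]] and its eigenvector.
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b * b)
    vx, vy = (b, lam - a) if abs(b) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    direction = (vx / norm, vy / norm)
    scores = [x * direction[0] + y * direction[1] for x, y in centered]
    return direction, scores

# Two strongly correlated inputs collapse onto one component.
rows    = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.1), (4.0, 3.9)]
quality = [51.0, 53.0, 55.0, 57.0]
direction, scores = pca_first_component(rows)

# Least-squares slope of Quality% on the first principal component.
ms, mq = sum(scores) / 4, sum(quality) / 4
slope = (sum((s - ms) * (q - mq) for s, q in zip(scores, quality))
         / sum((s - ms) ** 2 for s in scores))
print(direction, slope)
```

The trade-off to keep in mind: the regression coefficients then describe components, not the original inputs, so you lose some of the direct interpretability the earlier approaches give you.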