After applying the RProp MLP Learner, I would like to know the importance of the individual features as seen by the neural network. Could you please help me with how to do this? Is there any reference documentation available online?
A neural network does not weigh the features during training; all features are treated equally and used to optimize the underlying optimization problem. In theory, you could look at the weights learned by the network and estimate a feature's importance from the mean of its weights. However, this might work for an MLP with one hidden layer; as soon as you use more layers, the weights will no longer give you insight into feature importance.
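To illustrate the weight-averaging idea for a single hidden layer, here is a rough sketch in Python using scikit-learn's `MLPClassifier` (an assumption on my side — it is not the KNIME RProp MLP Learner and uses a different optimizer, but the network structure is comparable):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data: 4 features, only the first 2 are informative
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)

# One hidden layer, as discussed above (sklearn has no RProp; Adam is used here)
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Rough importance proxy: mean absolute input-to-hidden weight per feature
first_layer = mlp.coefs_[0]                 # shape (n_features, n_hidden)
importance = np.abs(first_layer).mean(axis=1)
print(importance)
```

Again, treat these numbers as a heuristic at best — with more hidden layers, or with correlated features, they stop being meaningful.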
I once made this workflow for measuring variable importance: https://www.knime.com/nodeguide/analytics/optimization/meassuring-variable-importance
If you exchange the Decision Tree Learner for an MLP Learner node inside the Variable Importance metanode, you can measure the importance of each variable by leaving it out and comparing the model's performance.
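The leave-one-out idea from that workflow can also be sketched outside KNIME. Below is a minimal Python version, again assuming scikit-learn's `MLPClassifier` as a stand-in for the MLP Learner node: retrain the model once per dropped feature and score the accuracy loss against a baseline trained on all features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with(cols):
    """Train an MLP on the given feature columns and return test accuracy."""
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    mlp.fit(X_tr[:, cols], y_tr)
    return accuracy_score(y_te, mlp.predict(X_te[:, cols]))

baseline = accuracy_with(list(range(X.shape[1])))

# Importance of feature i = accuracy drop when feature i is left out
importance = {i: baseline - accuracy_with([j for j in range(X.shape[1]) if j != i])
              for i in range(X.shape[1])}
print(importance)
```

A large accuracy drop means the model relied on that feature; a drop near zero (or negative) means it was dispensable. This mirrors what the metanode does with looping and column filtering, and it works for any learner, not just MLPs.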
Best wishes, Iris