Feature Selection with Random Forest in KNIME

Well, you could just run it twice: use the first round to eliminate features, then use the remaining features for one more round of model building. One benefit could be a more stable model with fewer variables you would constantly have to provide when bringing the model into production. But this also depends on your data and your business case (there can be a trade-off between a stable, robust model and one that also uses more exotic features to find small and maybe interesting groups).
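Outside KNIME, the same two-round idea can be sketched in Python with scikit-learn (the synthetic data stands in for your own, and keeping the top 10 features is an arbitrary cutoff you would tune for your case):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for your business data
X, y = make_classification(n_samples=500, n_features=30,
                           n_informative=8, random_state=0)

# Round 1: fit a random forest and rank features by importance
rf1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = rf1.feature_importances_

# Keep e.g. the top 10 features (the cutoff is a judgment call)
keep = np.argsort(importances)[::-1][:10]

# Round 2: rebuild the model using only the surviving features
rf2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
print(X[:, keep].shape)  # reduced feature matrix used in round 2
```

In KNIME you would do the equivalent with a Random Forest Learner, inspect the attribute statistics, filter the columns, and train again.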

Other than that, there are several examples on the KNIME Hub showing how to reduce features:

https://hub.knime.com/knime/spaces/Examples/latest/04_Analytics/01_Preprocessing/

One big example is this workflow, which compares several techniques (it seems to have been updated; some time back I had problems running it):

https://hub.knime.com/knime/spaces/Examples/latest/04_Analytics/01_Preprocessing/02_Techniques_for_Dimensionality_Reduction/02_Techniques_for_Dimensionality_Reduction

Other things you could do involve techniques like PCA or the elimination of highly correlated variables, and so on. But you specifically asked for variable importance.
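For illustration, here is a rough pandas sketch of the correlation-filter idea (similar in spirit to KNIME's Correlation Filter node; the 0.9 threshold and the toy columns are assumptions for the example):

```python
import numpy as np
import pandas as pd

# Toy data: column b is nearly a copy of a, column c is independent
rng = np.random.default_rng(0)
a = rng.normal(size=200)
df = pd.DataFrame({
    "a": a,
    "b": a + rng.normal(scale=0.01, size=200),  # highly correlated with a
    "c": rng.normal(size=200),
})

# Drop one column of every pair whose absolute correlation exceeds 0.9
corr = df.corr().abs()
# Look only at the upper triangle so each pair is considered once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
reduced = df.drop(columns=to_drop)
print(to_drop, list(reduced.columns))
```

This keeps one representative of each correlated pair, which is often enough to stabilize downstream models without losing much information.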