I tried to reproduce the behavior to create a ticket for our developer team, but with my example data (150 rows and 1500 columns) the time difference is much smaller (5869 s vs. 4945 s). What kind of data are you using? Is this a dataset you could share with us?
In that workflow, I carry out a PCA on data tables of 150x1500 and 122x1627.
I do this on two types of data. The first is a fully random, normally distributed table. In that case I see the KNIME PCA is 20x slower than the R/Python PCAs.
Then I thought: hmm, perhaps the sluggishness is caused by the data being pure noise with no correlations. To check this hypothesis, I also generated tables of the same size as before, but this time from just 4 random variables plus a tiny bit of additional noise. In that case the 4 components should describe the data very well.
For this data, the KNIME PCA node is a lot faster (though still slower than R/Python). When I increase the standard deviation of the noise, the time the KNIME PCA node needs also goes up (quite abruptly!). Apparently, the KNIME PCA node does not handle noisy data very well.
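For reference, the low-rank-plus-noise table described above can be generated along these lines. This is a minimal NumPy sketch, not the exact workflow: the seed, the mixing scheme, and the noise level `noise_sd` are my assumptions, and the PCA here is done directly via SVD rather than through any of the nodes being compared.

```python
import time
import numpy as np

rng = np.random.default_rng(42)

n_rows, n_cols, n_latent = 150, 1500, 4
noise_sd = 0.1  # assumed value; this is the knob that reportedly slows the KNIME node

# Mix 4 latent random variables into 1500 observed columns, then add noise
latent = rng.normal(size=(n_rows, n_latent))
mixing = rng.normal(size=(n_latent, n_cols))
data = latent @ mixing + rng.normal(scale=noise_sd, size=(n_rows, n_cols))

# PCA via SVD on the mean-centered table
start = time.perf_counter()
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
elapsed = time.perf_counter() - start

explained = (s ** 2) / (s ** 2).sum()
print(f"top {n_latent} components explain {explained[:n_latent].sum():.1%} "
      f"of the variance in {elapsed:.3f} s")
```

With a small `noise_sd` the first 4 components capture nearly all of the variance, matching the expectation above; raising `noise_sd` spreads variance across the remaining components, which is the regime where the slowdown was observed.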