Hi Victor,
let me first say that Dave (thanks for the comment) is absolutely right: the eigenvectors (the columns of the spectral decomposition matrix) are the directions your original data is projected onto, and they therefore tell you how strongly each original feature contributes to the direction of maximal variance.
These vectors describe the directions of maximal variance in your feature space: the first vector points in the direction of maximum variance, the second in the direction of maximum variance orthogonal to the first, and so on.
The eigenvalues describe the amount of variance along each of these directions, so if you want to find out which features contain the most information when considering all features together, this would be the place to look.
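If you want to look at this concretely, here is a minimal sketch in R (my_descriptors is only a placeholder for your numeric feature table, not something from your workflow):

pc <- prcomp(my_descriptors, center = TRUE, scale. = FALSE)  # spectral decomposition of the covariance matrix
pc$rotation[, 1]  # first eigenvector: how strongly each feature contributes to the direction of maximal variance
pc$sdev^2         # eigenvalues: the variance along each of these directions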
Besides this, I fear that this is not what you are looking for. If I understand you correctly, you are trying to find out which feature combinations contain the most information with respect to the activation values.
Let's say you have two classes of molecules - active and inactive - and you want to find out which features are the most useful for distinguishing active from inactive ones. Then LDA (Linear Discriminant Analysis) is what you are looking for.
LDA projects your data onto a single dimension (sufficient for a two-class problem) such that the separation between the two classes is maximized along that axis - more precisely, the distance between the class means is maximized relative to the scatter within each class.
So in your case you would need an LDA projection for your two classes. The projection matrix (here just a single vector) then tells you which features are most important for distinguishing the active from the inactive molecules.
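To make that a bit more concrete, the discriminant direction can also be written down directly. A rough sketch in R, assuming X is your (numeric, comparably scaled) feature matrix and y a vector of "active"/"inactive" labels - both placeholders:

mu_a <- colMeans(X[y == "active", ])
mu_i <- colMeans(X[y == "inactive", ])
Sw   <- cov(X[y == "active", ]) + cov(X[y == "inactive", ])  # within-class scatter
w    <- solve(Sw, mu_a - mu_i)  # Fisher's discriminant direction
sort(abs(w), decreasing = TRUE)  # features with large coefficients separate the classes best

This is just the textbook recipe; the lda() function mentioned below does essentially the same thing with a few refinements.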
http://en.wikipedia.org/wiki/Linear_discriminant_analysis gives some more information on LDA, and as far as I know the paper by Fisher (see the links on that page) is the original source for this kind of analysis.
We are planning to implement an LDA node in Knime, but until this is finished you could help yourself with the R node (there is an LDA implementation, lda(), in the MASS library).
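Inside the R node, something along these lines should already get you going (my_table and the activity column are placeholders for your own data):

library(MASS)
fit <- lda(activity ~ ., data = my_table)  # activity: factor with levels "active"/"inactive"
fit$scaling  # the projection vector; features with large absolute coefficients matter most for separating the two classes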
I hope this helps; if not, don't hesitate to ask in more detail.
Uwe