Extracting Keras layer weights

Hi, I’ve been playing with the Keras and TensorFlow integrations to build various regression models. Is it possible to extract the trained weights for each layer, e.g. to investigate which descriptors make a more significant contribution to the output?

Hi @obaker

I’ve never tried it myself, but I don’t think that’s possible out of the box with the Keras nodes. What could work is to use this node: DL Python Network Editor – KNIME Hub and write a few lines of code, something like:

for layer in model.layers:
    weights = layer.get_weights()
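Untested in the KNIME node itself, but a standalone sketch of the idea (the model, layer sizes, and shapes below are made up purely for illustration):

```python
from tensorflow import keras

# Toy regression model for illustration only:
# 4 input descriptors -> 6 hidden units -> 1 output
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(1),
])

# Each Dense layer returns [kernel, bias] from get_weights()
for layer in model.layers:
    for w in layer.get_weights():
        print(layer.name, w.shape)
```

In the DL Python Network Editor node, `model` would be the network passed in from the upstream Keras nodes rather than one built in the script.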

I haven’t tried it myself, so I’d be happy to hear whether this works for you :slight_smile:

Best, Martyna

Many thanks Martyna,

I tried the following script:

for layer in input_network.layers:
    weights = layer.get_weights()
    print(weights)

The output was:

[array([[-0.56672734],
[ 0.70339686],
[-0.4260186 ],
[ 0.67352164],
[ 0.1962961 ],
[ 0.2167083 ]], dtype=float32), array([-0.1459628], dtype=float32)]

This seems to be providing the weights for each input of the output layer (the previous hidden layer has units set to 6) and the bias for the single output, from what I can gather.
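That reading matches how Keras packs Dense weights: get_weights() returns a list [kernel, bias], with the kernel shaped (n_inputs, n_units) and the bias shaped (n_units,). A NumPy-only sketch mirroring the shapes in the printout above (values abbreviated):

```python
import numpy as np

# A Dense layer's get_weights() returns [kernel, bias]:
# kernel has shape (n_inputs, n_units), bias has shape (n_units,)
kernel = np.array([[-0.5667], [0.7034], [-0.4260],
                   [0.6735], [0.1963], [0.2167]], dtype=np.float32)
bias = np.array([-0.1460], dtype=np.float32)
weights = [kernel, bias]

print(weights[0].shape)  # (6, 1): 6 inputs feeding 1 output unit
print(weights[1].shape)  # (1,): one bias for the single unit
```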

I then tried:

weights = input_network.layers[n].get_weights()

print(weights)

Setting n to 0 produces an empty array; setting it to 1 gets:

[array([[ 0.05995397, -0.00583802, -0.02627624, …, 0.07960415,
-0.0030279 , 0.09760958],
[ 0.05237714, 0.32031453, -0.136054 , …, -0.0160518 ,
-0.08330175, -0.26821765],
[-0.05924191, -0.06899629, -0.10395959, …, -0.11672791,
0.01968573, 0.14336964],
…,
[ 0.0607434 , -0.08397063, -0.11287836, …, 0.17125243,
-0.1131455 , -0.29807684],
[ 0.02694276, -0.02275021, 0.06482922, …, 0.00939793,
-0.13269325, -0.0043444 ],
[-0.01448575, -0.05100604, -0.04802338, …, -0.02684082,
-0.10061178, 0.02150529]], dtype=float32), array([-0.01368791, 0.03029771, -0.01598322, 0.00353007, 0.08862251,
0.02099467, -0.04430205, -0.00410087, -0.00669016, -0.01247101,
0. , -0.00355268, 0.03371308, 0.04462691, -0.00604936,
0. , 0.06118366, 0. , -0.01530698, -0.00567513,
-0.01489197, -0.00861696, -0.00748049, -0.01085868, 0.08864058,
-0.00957905, 0.00849783, 0.05578578, 0.01093601, -0.01094973,
0.07033565, 0.02749405, -0.00671388, -0.0312944 , 0.01065349,
0.0193467 , -0.01158637, 0.08465705, -0.01181047, 0.06756084,
-0.02029116, 0. , 0.03816274, 0.04647782, 0.0558208 ,
-0.01042831, 0.09514122, 0.0219849 , 0. , -0.01085258,
0.02671827, 0.00577785, -0.00738894, -0.01766891, 0.07284803,
0.03648164, -0.00934833, 0.01829179, -0.00665828, 0.04210905],
dtype=float32)]

That second array again appears to show one value per unit (the first and second dense layers have units set to 60), but the first is a bit more confusing, as it does not appear to print all elements (the shape of the input layer is 269).