I trained a set of BERT models about a year ago (December 2020) and am only now trying to use one of them for prediction. The model loads OK, and the predictor node appears to read the table of sentences (467 rows/sentences) fine, with the progress meter reaching 10%, but then, after a brief pause, it stops with this message:
ERROR BERT Predictor 0:393 Execute failed: 'sample_weight_mode'
Traceback (most recent call last):
  File "", line 10, in
  File "C:\Program Files\KNIME423\plugins\se.redfield.bert_0.0.1.v202012081121\py\BertClassifier.py", line 155, in run_predict
    model = tf.keras.models.load_model(file_store)
  File "C:\Users\FirstName.LastName\anaconda3\envs\py3_knime_tf2\lib\site-packages\tensorflow\python\keras\saving\save.py", line 190, in load_model
    return saved_model_load.load(filepath, compile)
  File "C:\Users\FirstName.LastName\anaconda3\envs\py3_knime_tf2\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 126, in load
  File "C:\Users\FirstName.LastName\anaconda3\envs\py3_knime_tf2\lib\site-packages\tensorflow\python\keras\saving\saving_utils.py", line 230, in compile_args_from_training_config
    sample_weight_mode = training_config['sample_weight_mode']
I found a similar issue discussed elsewhere (KeyError: 'sample_weight_mode' · Issue #14040 · keras-team/keras · GitHub), where the recommended workaround is to load the model without compiling it. However, I don't know how to do this in KNIME; it doesn't seem to be an option in the node.
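For what it's worth, outside KNIME that workaround is the `compile=False` argument to `tf.keras.models.load_model`. Here is a toy sketch of it; the tiny Sequential model and the `toy_model.h5` file name are placeholders I made up for illustration, not the actual BERT SavedModel the Redfield node writes:

```python
import tensorflow as tf

# Toy stand-in model; the real case is the BERT model saved by the
# Redfield node, but the load_model call is the same.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])
model.compile(optimizer="adam", loss="mse")
model.save("toy_model.h5")

# compile=False skips restoring the stored training config (the
# compile_args_from_training_config step in the traceback), so the
# missing 'sample_weight_mode' key is never looked up.
restored = tf.keras.models.load_model("toy_model.h5", compile=False)
print(restored.predict(tf.zeros((1, 4))).shape)  # (1, 2)
```

A model loaded this way can still predict; it just can't be further trained or evaluated until it is re-compiled. The question remains how (or whether) the KNIME node can be told to do this.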
My Python and TensorFlow/deep learning environments are unchanged since a year ago (Python 3.6.12); the only major change I can remember is updating KNIME itself, from 4.2.3 (I think) to 4.3.3, in May this year.
The failing model was created on 12 Dec 2020, so as an experiment I tested another of the BERT models I produced last December, from a few days earlier (7 Dec 2020), and that one DOES work (processing 74 sentences). So I'm wondering why one model works and another, from a few days later, doesn't. I also tried reducing the input from 467 rows to 74, and then to a single row, but the newer model still fails.
I checked my conda environment in Anaconda against the list of required versions here (BERT Text Classification for Everyone | KNIME), and the only differences are that I have slightly newer versions of three packages, for some reason:
numpy (I have 1.19.4, should be 1.19.1)
tokenizers (0.8.1rc1, should be 0.7.0)
tqdm (4.54.0, should be 4.48.0)
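In case it helps anyone reproduce this, here is a small check one could run inside the py3_knime_tf2 environment to compare installed versions against those pins. This is just my own sketch; the "expected" values are the three pins from the blog post above, and the `pkg_resources` fallback is there because `importlib.metadata` needs Python 3.8+, whereas this env is on 3.6:

```python
# Compare installed package versions against the pins from the
# KNIME BERT blog post.
try:
    from importlib.metadata import version, PackageNotFoundError
except ImportError:  # Python <= 3.7, e.g. the 3.6.12 env here
    from pkg_resources import get_distribution, DistributionNotFound

    def version(pkg):
        return get_distribution(pkg).version

    PackageNotFoundError = DistributionNotFound

expected = {"numpy": "1.19.1", "tokenizers": "0.7.0", "tqdm": "4.48.0"}
report = {}
for pkg, want in expected.items():
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "not installed"
    report[pkg] = have
    flag = "OK" if have == want else "differs"
    print(f"{pkg}: installed {have}, expected {want} -> {flag}")
```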
Could these differences matter, or cause such an error? I can't remember how to revert packages to earlier versions in Python, and since it took me days to get the environment this close to the required versions, I'm wary of fiddling any further, especially as one of the models does work.
Any ideas welcome! Would upgrading KNIME to the latest version help?