Redfield BERT Nodes - How can I reuse an already fine-tuned model from Huggingface for predictions?

Hi,

I was experimenting with the Redfield BERT extension.
Is it possible to reuse a fine-tuned BERT prediction model from Huggingface directly without using the BERT Classification Learner node?
I tried to use the BERT Model Selector node (I loaded a pretrained sentiment analysis model) but couldn’t connect it directly to the BERT Predictor node.

Hi @ikonstas and welcome to the KNIME Forum!

I don’t believe you can with the BERT Model Selector node alone, but you may be able to bring the model in with Python if you know some coding.
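If you do want to try the Python route, here is a minimal sketch of loading an already fine-tuned sentiment model with the Hugging Face transformers library; the model name below is only an example of a publicly available fine-tuned checkpoint:

```python
# Minimal sketch: load an already fine-tuned sentiment model from Hugging Face.
# The model name below is just an example of a public fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Returns a list of dicts such as {'label': 'POSITIVE', 'score': 0.99...}
print(classifier(["This extension is great!", "This does not work at all."]))
```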

However, you should be able to read in a model from Huggingface with the BERT Model Selector node and connect it to the BERT Predictor node. What error were you getting when you tried to connect the two nodes?

Just as a reference I’ve attached a link to a post detailing how to read in a Huggingface model with the Redfield nodes (Importing BERT model from hugging face - #2 by Redfield) and a workflow showing how to connect the BERT Model Selector node with the BERT Predictor node (Building Sentiment Predictor - BERT – KNIME Hub). I hope some of this helps!

Cheers,
Dashiell

Hi Dashiell,

Thank you very much for your answer. I will try it with Python then, and maybe create a component for it.
I am able to read a BERT model from Huggingface, but I am unable to connect this node directly to the BERT Predictor node because they have different port types. Unless I am doing something wrong.


Hello @ikonstas,

We developed the nodes with the idea that BERT models taken from a repository (e.g. HuggingFace) still need to be trained, since we use the BERT model plus a small 3-layer classifier. That is why you cannot just connect the BERT Model Selector node to the BERT Predictor node. The only exception is the BERT Embedder node, which gives you the vector representation produced by the model.
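Just to illustrate the idea (this is not the Redfield implementation, only the general pattern): a pretrained BERT backbone is combined with a small, randomly initialised classification head, and it is that head which still has to be trained before predictions make sense.

```python
# Illustrative sketch only, not the Redfield code: a pretrained BERT backbone
# plus a small classification head whose weights are randomly initialised,
# so the combined model must be trained before its predictions mean anything.
import torch
from transformers import AutoModel, AutoTokenizer

backbone = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Small classifier head on top of the pooled BERT output (layer sizes are examples).
head = torch.nn.Sequential(
    torch.nn.Linear(backbone.config.hidden_size, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 2),  # e.g. two sentiment classes
)

inputs = tokenizer(["An example sentence."], return_tensors="pt")
with torch.no_grad():
    pooled = backbone(**inputs).pooler_output
    logits = head(pooled)  # untrained head: these logits are not meaningful yet
print(logits)
```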

So I can suggest two options:

  • Take a model from HuggingFace, train it without fine-tuning so it is not dramatically updated, then use the Model Writer node to save this model. After that you can read the model with the Model Reader node and feed it into the BERT Predictor node;
  • Use the BERT Embedder node to extract the embeddings of the texts; then you are free to use, say, a Random Forest, a Python Snippet node, or anything else.

Please take a look at this workflow; I believe it should be a good reference for you:

Best regards,
Artem.


Hi Artem,

Thank you for the clarification and suggestions. This is very helpful, and so is the reference.
My issue is that I don’t have an annotated dataset, and I would like to reuse the pretrained BERT model together with its pretrained classification layers.
But I can create a Python node that reuses the classification model, and wrap it in a component to which I can pass the model name as a parameter.
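In case it is useful to others, here is a rough sketch of what such a Python node could look like. It assumes the KNIME Python Script node scripting API (knime.scripting.io, available in newer KNIME versions), a string column named "document", and a flow variable named "model_name" supplied by the component; all of these names are placeholders.

```python
# Sketch of a KNIME Python Script node body (assumes the knime.scripting.io API).
# The "document" column and the "model_name" flow variable are placeholders.
import knime.scripting.io as knio
from transformers import pipeline

df = knio.input_tables[0].to_pandas()

# The Hugging Face model name is passed in from the component as a flow variable.
classifier = pipeline("sentiment-analysis", model=knio.flow_variables["model_name"])

predictions = classifier(df["document"].tolist())
df["prediction"] = [p["label"] for p in predictions]
df["confidence"] = [p["score"] for p in predictions]

knio.output_tables[0] = knio.Table.from_pandas(df)
```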

Many thanks for your help and time.

Best regards,
Ioannis

