Zero Shot Text Classification Example

I’m trying to run the example from:

I ran it with the default model loaded with the workflow. After ~18 hours it was 70% complete. I have 8GB of memory dedicated to KNIME, a Windows 11 system, and an Intel(R) Core™ i7-10510U CPU @ 1.80GHz, 2304 MHz, 4 cores, 8 logical processors. Am I missing something, or is this the performance I should expect?

Hi @rfeigel -

I haven’t been able to test this workflow out yet, but I’m curious to know the answer myself. I’ll tag @Redfield , the author of the workflow, to see if they can provide additional info on benchmarking.

Also, did you know we will be hosting a webinar this week with Redfield about their new NLP nodes? More info here if you’re interested:

Hello @rfeigel

I can see you mentioned your computer configuration, so I assume you were running this workflow without a GPU. The thing is, running large neural network models such as BERT in most cases requires an NVIDIA GPU.
So I can suggest three things:

  • Use a GPU; it is possible to create a dedicated Python environment with all the dependencies in KNIME’s settings;
  • Reduce the input data set; for example, first try to run it on 50 records;
  • Or try to find another, smaller zero-shot model compatible with our nodes.
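To illustrate the three suggestions outside of KNIME, here is a minimal Python sketch using the Hugging Face `transformers` library. The model name (`valhalla/distilbart-mnli-12-3`, a distilled MNLI model that is much smaller than full BERT-large variants) and the label set are assumptions for illustration, not the workflow's defaults:

```python
def pick_device():
    """Suggestion 1: use a GPU if one is available.

    Returns a CUDA device index for transformers' pipeline(),
    or -1 to fall back to CPU.
    """
    try:
        import torch
        return 0 if torch.cuda.is_available() else -1
    except ImportError:
        return -1  # no PyTorch installed: run on CPU

def subset(records, n=50):
    """Suggestion 2: first try a small slice of the data."""
    return records[:n]

def classify(texts, labels):
    """Suggestion 3: a smaller zero-shot model (hypothetical choice)."""
    from transformers import pipeline
    clf = pipeline(
        "zero-shot-classification",
        model="valhalla/distilbart-mnli-12-3",  # smaller than BERT-large
        device=pick_device(),
    )
    return clf(texts, candidate_labels=labels)

if __name__ == "__main__":
    docs = subset(["KNIME released new NLP nodes."] * 1000)
    print(f"Classifying {len(docs)} documents on device {pick_device()}")
    # results = classify(docs, ["software", "sports", "politics"])
```

Even on CPU, trimming the input to 50 records should let you measure per-record time and extrapolate before committing to the full run.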

Best regards,
