About LLM Connector with GPU

Hello.

I was wondering how to use the GPU with the “Local GPT4All LLM Connector”.

I’m using macOS (Apple M1).

When I set the Device option in this node to “cpu”, it used GPU resources.

And when I set it to “gpu”, the node failed.

I’d appreciate it if someone could tell me how to use GPU resources on my macOS.

Best Regards,
hhkim

If it is using GPU resources when set to CPU, that is odd, but does it otherwise work for you?

I have to admit I am not very familiar with the GPT4All nodes, as I personally use Ollama (this also lets me use the LLMs I run locally in other capacities - e.g. as endpoints for coding assistants etc.).

Ollama is generally very easy to install these days - the only limitation I can see is that it is not that straightforward to use embedding models…
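For context, here is a minimal sketch of what talking to a locally running Ollama server looks like over its REST API. It assumes the default endpoint `http://localhost:11434` and a model name such as `llama3` that you have already pulled - adjust both to your setup:

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: standard install, default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# Example usage (requires Ollama running and the model pulled beforehand):
# with urllib.request.urlopen(build_request("llama3", "Hello!")) as resp:
#     print(json.loads(resp.read())["response"])
```

On Apple Silicon, Ollama uses the Metal GPU backend automatically, so no device option needs to be set.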

So while I cannot really help you with your specific question, you may want to consider Ollama as an alternative - here’s a KNIME blog that explains the process:

If embeddings are indeed an issue, @roberto_cadili has an example workflow that shows how to use Ollama embeddings:

And @mlauber71 has some great content on medium:

