@mlauber71 I’m trying to run the example workflow but am having trouble locating the models. The Prepare GPT4All component defaults to this location but I don’t have that folder.
Thanks. I moved the models to the folder used in your workflow and it works now. I also had to change the flow variable. Another question: here's a list of my ChatGPT models. Several of the newer ones apparently aren't available as GGUF files. Can these be used in KNIME?
Do you necessarily need to use GPT4All, or are you flexible?
I am using Ollama and am very happy with it. There are plenty of models available, and what I like best is that it flexibly loads / unloads models depending on the request that comes in - e.g. if you have a workflow that uses phi3 for one task and llama3.1 for a different one, Ollama unloads phi3 after a response has been generated and loads llama3.1 for the next query (see the sketch below).
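To make that concrete, here's a minimal Python sketch against Ollama's REST API (not from the original post). It assumes Ollama is running on its default port 11434 and that phi3 and llama3.1 have already been pulled; the two calls use different models, and Ollama handles the swap in and out of memory on its own:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def ask(model: str, prompt: str) -> str:
    """Send one prompt to Ollama and return the full (non-streamed) response text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Two requests against two different models - Ollama loads/unloads them as needed.
print(ask("phi3", "Summarize: KNIME is a low-code analytics platform."))
print(ask("llama3.1", "Translate 'good morning' into German."))
```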
Installation is really simple… new open-source models are typically available in quantized versions soon after their release… and if some model you find on Hugging Face is not available, you can actually "create" it in Ollama from the GGUF file…
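For reference, the documented route for this is a Modelfile whose `FROM` line points at the local GGUF, registered via `ollama create`. A small Python wrapper around that CLI could look like the sketch below; the file and model names are hypothetical placeholders:

```python
import subprocess
from pathlib import Path

# Hypothetical file name - adjust to the GGUF you downloaded from Hugging Face.
gguf_path = Path("mistral-7b-instruct.Q4_K_M.gguf")

# A minimal Modelfile: the FROM line points Ollama at the local GGUF file.
Path("Modelfile").write_text(f"FROM ./{gguf_path.name}\n")

# Register the model with Ollama under a name of your choosing.
subprocess.run(["ollama", "create", "my-mistral", "-f", "Modelfile"], check=True)

# Afterwards it behaves like any pulled model, e.g.: ollama run my-mistral
```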
@MartinDDDD indeed, an alternative is to use Ollama and access it via Python or the REST API. That way it also works with KNIME 4.x, and there is no dependence on special nodes being kept up to date. But having it all configured through nice nodes might also have its benefits.
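As a rough illustration, a KNIME Python Script node could call Ollama like this. This is a sketch, not a tested workflow: it assumes the newer `knime.scripting.io` API (KNIME 4.7+; older 4.x Python nodes expose I/O differently) and an input table with a column named "prompt":

```python
import knime.scripting.io as knio
import pandas as pd
import requests

df = knio.input_tables[0].to_pandas()  # assumes a column named "prompt"

answers = []
for prompt in df["prompt"]:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    answers.append(resp.json()["response"])

df["answer"] = answers
knio.output_tables[0] = knio.Table.from_pandas(df)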
I haven't tried Ollama in KNIME yet. As a standalone tool I certainly don't like it as much as ChatGPT. No UI. This far into the 21st century I hate using command lines. Not intuitive. And the model I loaded answers with widely varying accuracy. It's also slow. I'll try some different models, but so far thumbs down. @MartinDDDD
Check out OpenWebUI - I use this for interacting with my local models. It's open source, easy to install (I used Docker), and feature-rich - e.g. built-in RAG. It automatically detects which models are available to Ollama…
Alternatively, LM Studio might be good - you can download models from Hugging Face, chat with them, and spin up a local server very easily (sketch below).
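For what it's worth, LM Studio's local server speaks the OpenAI-compatible API, so something like this sketch should work, assuming the server is running on its default port 1234 and a model is loaded (the model identifier is whatever LM Studio shows for it):

```python
from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the API key is ignored.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="local-model",  # placeholder - use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
)
print(completion.choices[0].message.content)
```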
Thanks for your help but you’re way over my head. What’s obvious to you is a stretch for this 78 year old guy. I know nothing about docker. My ignorance is not your fault.
78? Kudos to you for using KNIME and thinking about local LLMs!!
In that case definitely check out LM Studio… good user interface and intuitive usage… installation via a simple Windows installer, without needing to worry about Docker, GPU drivers, etc.…