GPT4All Create Your Own LLM Vector Store

@mlauber71 I’m trying to run the example workflow but am having trouble locating the models. The Prepare GPT4All component defaults to this location but I don’t have that folder.


My GPT4All app seems to have installed the models here:
C:\Users\Owner\AppData\Local\nomic.ai\GPT4All
I’m lost. Any help is appreciated.

@rfeigel this workflow is part of a workflow group which you can download; you can then point GPT4All to place the models in that folder.
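
For context, a minimal sketch of how such a custom model folder gets used, assuming the gpt4all Python bindings rather than the KNIME component itself. The folder and model filename below are placeholders; substitute whatever your flow variable points to.

```python
# Minimal sketch (gpt4all Python bindings, not the KNIME component itself).
# The folder and the model filename are placeholders for illustration.
from gpt4all import GPT4All

model = GPT4All(
    model_name="mistral-7b-instruct-v0.1.Q4_0.gguf",  # any local .gguf file
    model_path=r"C:\knime-workspace\gpt4all_models",  # the workflow's model folder
    allow_download=False,  # fail fast instead of re-downloading the model
)

with model.chat_session():
    print(model.generate("Say hello in one sentence.", max_tokens=64))
```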

Thanks. I moved the models to the folder in your workflow and it works now. I also had to change the flow variable. Another question: here’s a list of my GPT4All models. Several of the newer models apparently aren’t available as gguf files. Can these be used in KNIME?

I have no idea. GPT4All changed their model format in the past and KNIME struggled to keep up. You might just give it a try.

Do you necessarily need to use GPT4All, or are you flexible?

I am using Ollama and am very happy with it. There are plenty of models available, and what I like best is that it can flexibly load and unload different models depending on the request that comes in. For example, if you have a workflow that uses phi3 for one task and llama3.1 for another, Ollama unloads phi3 after a response has been generated and loads llama3.1 for the next query…

Installation is really simple… new open-source models are typically available in quantized versions soon after their release… and if a model you find on Hugging Face isn’t available in Ollama’s library, you can actually “create” it yourself from the gguf file…

@MartinDDDD indeed, an alternative is to use Ollama and access it via Python or its REST API. That way it will also work with KNIME 4.x, and there is no dependence on special nodes keeping up. But having it all configured through nice nodes might also have its benefits.
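
To make that concrete, here is a minimal sketch of calling a local Ollama server over its REST API from Python. It assumes the server is running on Ollama’s default port and that the model named below has already been pulled (e.g. via `ollama pull llama3.1`); the model name and prompt are placeholders.

```python
# Minimal sketch: calling a local Ollama server over its REST API.
# Assumes the server is up on the default port 11434 and the model
# has already been pulled with `ollama pull llama3.1`.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Summarize what a vector store is in two sentences.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```

Because this is plain HTTP, the same call works from a Python Script node, which is why this route also works with KNIME 4.x.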

I haven’t tried Ollama in KNIME yet. As a standalone tool I certainly don’t like it as much as ChatGPT: no UI, and this far into the 21st century I hate using command lines. Not intuitive. The model I loaded answers with a wide range of accuracy. It’s also slow. I’ll try some different models, but so far thumbs down.
@MartinDDDD

Check out OpenWebUI - I use it for interacting with my local models. It’s open source, easy to install (I used Docker), and feature-rich - e.g. built-in RAG. It automatically detects which models are available to Ollama…

Alternatively, LM Studio might be a good option - you can download models from Hugging Face, chat with them, and spin up a local server very easily.
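
As an illustration of that local server: a minimal sketch, assuming LM Studio’s server is running on its default port 1234 with its usual OpenAI-compatible endpoint. The model identifier is a placeholder for whichever model you have loaded.

```python
# Minimal sketch: chatting with LM Studio's local server through its
# OpenAI-compatible API (default address http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # LM Studio ignores the key, but the client requires one
)

completion = client.chat.completions.create(
    model="local-model",  # placeholder -- use the identifier LM Studio shows
    messages=[{"role": "user", "content": "What is a gguf file?"}],
)
print(completion.choices[0].message.content)
```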

Thanks for your help, but you’re way over my head. What’s obvious to you is a stretch for this 78-year-old guy. I know nothing about Docker. My ignorance is not your fault.

78? Kudos to you for using KNIME and thinking about local LLMs!

In that case, definitely check out LM Studio… good user interface, intuitive to use, and installation is via a simple Windows installer with no need to worry about Docker, GPU drivers, etc…

Installed LM Studio. Works great. Thanks very much for the help.
