Ollama / OpenAI - local embeddings not working

Hi @mlauber71, thanks for checking out the blog post :slight_smile:.

What you’re trying to do is unfortunately not possible using the OpenAI Authenticator node.

In Step 2 of the blog post, I also explained why:

… Next, we drag and drop the OpenAI Chat Model Connector node, which we can use to connect to Ollama’s chat, instruct and code models. We need this specific connector node (and not the OpenAI LLM Connector) because Ollama has built-in compatibility only with the OpenAI Chat Completions API, which is what the OpenAI Chat Model Connector node uses under the hood.

If we wish to leverage Ollama’s vision and embeddings models, we can do so using the nodes of the KNIME REST Client Extension.

To connect to embedding models available via Ollama, you can use the POST Request node (note: make sure the model you want to use actually supports generating embeddings).
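Outside of KNIME, the same call the POST Request node makes can be sketched in Python. This is a minimal sketch, assuming Ollama is running locally on its default port (11434) and exposing its `/api/embeddings` endpoint; the helper name and the example model are my own choices for illustration:

```python
import json
from urllib import request

# Assumed default URL of a locally running Ollama server's embeddings endpoint.
OLLAMA_EMBED_URL = "http://localhost:11434/api/embeddings"

def build_embedding_request(model: str, prompt: str) -> request.Request:
    """Build the POST request for one embedding, mirroring what the
    KNIME POST Request node would send: a JSON body with the model
    name and the text to embed."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return request.Request(
        OLLAMA_EMBED_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it requires a running Ollama server with an
# embedding-capable model pulled (e.g. a model like "nomic-embed-text"):
#
# with request.urlopen(build_embedding_request("nomic-embed-text", "Hello KNIME")) as resp:
#     embedding = json.loads(resp.read())["embedding"]  # a list of floats
```

In the POST Request node you would configure the same pieces: the endpoint URL, a `Content-Type: application/json` header, and the JSON body with `model` and `prompt`.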

To show how this works in practice, a couple of months ago I built a workflow that connects to Ollama's embedding and vision models. You can find it below:

Hope this helps :slight_smile:.

Happy KNIMEing,
Roberto
