OpenAI connector - sort of - info

Hi.
In our company we are deploying our own local LLM. The team has set up access to be similar to the OpenAI API, in that to use the LLM we need to provide both a URL and an API key.

Can I just double-check, based on a previous post I found, that for the API key I just need to add it to the Credentials Configuration node's "Password" field (all the other fields can be ignored) and change the URL to our local LLM in the advanced settings of the OpenAI Authenticator node? After that I am not sure whether the Chat and LLM connectors would find our LLM model name.

Any clarification from someone with knowledge of this would be appreciated. Thanks, Mark

Hey there,

I take it:

  • you have an endpoint where an LLM is deployed
  • it is compatible with the OpenAI API
  • you are unsure whether it is a Chat Model (Chat Model Prompter) or an Instruct Model (LLM Prompter)

Regarding the API key: you can use the Credentials Widget / Configuration node and set the API key using the password field, as you correctly outlined. You can then pass the created credentials variable to the OpenAI Authenticator node and select it there. In the same node you can also set your base URL under the advanced options.
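
If it helps to see what this setup amounts to outside of KNIME, here is a minimal Python sketch of the same idea, assuming the openai package; the URL, key and model name below are placeholders, not values from your deployment:

from openai import OpenAI

# base_url corresponds to the Authenticator's base URL under advanced options,
# api_key to the password field of the Credentials Configuration node
client = OpenAI(base_url="http://your-llm-host:8000/v1", api_key="YOUR_LOCAL_API_KEY")

# list the models the endpoint exposes - roughly what the Connector nodes
# query when you set them to "All Models"
for model in client.models.list():
    print(model.id)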

Then, in order to work out whether you have an instruct or a chat model, you can simply try the following (see the sketch after this list for what the two calls look like outside of KNIME):

  • grab both the OpenAI Chat Model Connector and the OpenAI LLM Connector node
  • create, via a String Widget, a variable that holds the model name (e.g. llama3.1) - set both Connector nodes to "All Models" and then set the model name via the created variable
  • then connect both Connector nodes to the Chat Model Prompter and LLM Prompter respectively, pass in some data (a Prompt column for the LLM Prompter; Role and Message columns for the Chat Model Prompter), and run both of them
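
If you want to sanity-check the chat vs. instruct distinction outside of KNIME first, the two prompter types roughly correspond to two different calls against an OpenAI-compatible endpoint. A minimal sketch, again assuming the openai Python package and placeholder URL, key and model values:

from openai import OpenAI

client = OpenAI(base_url="http://your-llm-host:8000/v1", api_key="YOUR_LOCAL_API_KEY")

# chat-style call - the kind of request behind Chat Model Connector + Chat Model Prompter
chat = client.chat.completions.create(
    model="llama3.1",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello"}],
)
print(chat.choices[0].message.content)

# completion-style call - the kind of request behind LLM Connector + LLM Prompter
completion = client.completions.create(model="llama3.1", prompt="Say hello")
print(completion.choices[0].text)

Whichever of the two calls succeeds (or gives the more sensible output) tells you which prompter type your model supports.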

Your setup should look something like this:
[screenshot: example workflow setup]

Set up the connector nodes like this:


@MartinDDDD
Sorry for the delay, but getting the installation onto the closed works PC is not an easy task. I am not as comfortable with variables as I would like, so can I check what I ought to see when I add the details? Sorry, I cannot do screenshots from the PC - it is blocked.

In the String Widget I have added the local model name into the Default Value field, and when checking the String Widget's flow variable outputs I see Name = String-input and Value = "The Name of the Model", so this appears to be correct.

For the Credentials Widget I have added the API key to the "Password" field and left all other fields alone. When I then execute and check the variable outputs I see Name = credentials-input and Value = Credentials: credentials-input.

In both of the nodes, should I expect to see these values reflected in the Variables tab, or is the variable passed without this tab needing any inputs?

For instance, when I then try to configure the OpenAI Authenticator node I can choose "credentials-input" for the API key, and I have added the local LLM URL in the advanced section.

A bit of a ramble, but I hope this info makes sense.


From what you are writing, your setup sounds correct.

Does it not work? 🙂

With regard to the base URL you are configuring: make sure that you include the "/v1" suffix, equivalent to the OpenAI API.
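
For example, a local Ollama instance exposes its OpenAI-compatible API under a base URL like this (your company endpoint will differ, but it should end in /v1 in the same way):

http://localhost:11434/v1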

Let me take some screenshots for you :-).

My credentials variable is named API_KEY and I am using the Ollama OpenAI-compatible endpoint - here is my OpenAI Authenticator config:

[screenshot: OpenAI Authenticator configuration]

In the Chat Model Connector I pass a variable modelName:

[screenshot: Chat Model Connector configuration using the modelName variable]

When the node is in the executed state you can then also see which model is passed - qwen2.5:latest in my case:

[screenshot: executed Chat Model Connector showing qwen2.5:latest]


@MartinDDDD
I checked as per your examples. For the OpenAI Chat Model Connector, with "All Models" selected, when I choose modelName I am presented with the name of the model, however it is in red with "(missing)" in front of the model name.

BUT I think the main issue is that the OpenAI Authenticator is failing due to proxy and certificate problems. Unfortunately, although I have the extra info needed, I cannot see a way to add the details into the node.

Hmm, this points towards the variable missing from the input stream. Can you click the node "before" the Chat Model Connector and check whether this variable is visible?

If you go back to my "general setup screenshot" above, you can see how there should be a String Widget / String Configuration node (red circle as output port) connected to the Chat Model Connector, or to any node prior in the same branch, which passes the flow variable in.

If it now looks red, it points towards the variable having been removed or renamed…

On the other config re Cert and Proxy I’m afraid I won’t be able to help…

What you can do is use Ollama in KNIME Python nodes and set a proxy at the start of the script:

set HTTP_PROXY=http://proxy.my-company.com:8080
set HTTPS_PROXY=http://proxy.my-company.com:8080
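
If you prefer to set them from inside the Python script itself rather than in the shell, a minimal sketch would be (the proxy address is a placeholder):

import os

# placeholder proxy address - replace with your company proxy
os.environ["HTTP_PROXY"] = "http://proxy.my-company.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.my-company.com:8080"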

You can find examples of that here:


@mlauber71 - As always, thanks for the links.
My poor little PM brain imploded slightly 🙂 with the overall workflow, so it will take me some time to work my way through it to find the nuggets of gold.

@MartinDDDD The String Widget has the model name in the Default Value dialog area, and when the view is opened it shows the model name. However, when configuring the Chat Model node I can select modelName but it still shows as red "(missing)". I will delete everything and try again.

@mgirdwood maybe start with this workflow inspired by @MartinDDDD. It uses Ollama and GPT4All with just some of the new KNIME GenAI nodes, which should also work on a local machine.


Again, thanks for the link. I will look at it on the home PC to see what I can pick up. Unfortunately it is not an option on the works machines, as installing Ollama will not be possible, hence the attempt to get the LLM connection using our internal connections. It seems that without a means to input the proxies into the workflow as part of the configuration it will not be possible.

I wonder if it's possible to use a Python script with the proxy and API details as the input to the OpenAI Authenticator?

@mgirdwood in theory Ollama should 'respect' a proxy setting when you set it in the terminal window. I have not tested whether this setting will then also work when using the KNIME OpenAI nodes to connect to the local Ollama URL.

As I said: one approach would be to do it all in the Python node and set the proxies there.
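
To make that concrete, here is a rough sketch of what such a Python Script node could look like, assuming the openai package is available in the KNIME Python environment; the proxy, endpoint, key and model name are placeholders and this is not tested against your setup:

import os
import pandas as pd
import knime.scripting.io as knio
from openai import OpenAI

# placeholder proxy - replace with your company proxy
os.environ["HTTP_PROXY"] = "http://proxy.my-company.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.my-company.com:8080"

# placeholder endpoint and key for the internal OpenAI-compatible LLM
client = OpenAI(base_url="http://your-llm-host:8000/v1", api_key="YOUR_LOCAL_API_KEY")

response = client.chat.completions.create(
    model="llama3.1",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello"}],
)

# hand the answer back to KNIME as a one-row table
df = pd.DataFrame({"answer": [response.choices[0].message.content]})
knio.output_tables[0] = knio.Table.from_pandas(df)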